Browsing by Author "Das, Soumya Kanti"

Now showing 1 - 1 of 1
  • Item
    Computational Learning Theory aspects of Piecewise Polynomial and Sigmoidal Neural Networks
    (Indian Statistical Institute, Kolkata, 2019-07) Das, Soumya Kanti
    The VC (Vapnik-Chervonenkis) dimension is a useful tool for measuring the power of a neural network or other types of classifiers. In learning theory, the VC dimension represents the generalization power of a neural network. Since the mid-20th century, researchers have studied this problem and have produced a wide range of upper and lower bounds on the VC dimension of neural networks. Most of the published work assumes a feed-forward neural network with no skip connections when establishing these upper and lower bounds. In this work we show that the upper bound on the VC dimension of neural networks with piecewise polynomial activation functions can be made tighter. We also propose further methods for computing an upper bound on the VC dimension of the RVFLN (a neural network with skip connections). Most of the existing work on VC dimension upper bounds for neural networks with sigmoidal activation functions is based on a model-theoretic approach or on counting the number of operations in a basic computing model. Later in this work we give a different approach for computing an upper bound on the VC dimension of neural networks with sigmoidal activation functions. Finally, we discuss how the theoretical and practical test error rates depend on the number of layers and the number of parameters of a feed-forward neural network.
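
The abstract refers to the RVFLN, i.e. a network with direct input-to-output skip connections. The sketch below illustrates such a network under a common formulation in which the hidden weights are random and fixed and only the output layer is fit by least squares; all names, sizes, and the toy data are illustrative assumptions, not details taken from the thesis itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_rvfln(X, y, n_hidden=50):
    """Fit the output weights of an RVFLN on inputs X (n x d) and targets y (n,)."""
    n, d = X.shape
    W_h = rng.normal(size=(d, n_hidden))   # fixed random hidden-layer weights
    b_h = rng.normal(size=n_hidden)        # fixed random hidden-layer biases
    H = np.tanh(X @ W_h + b_h)             # sigmoidal hidden activations
    # Skip connections: the raw inputs X appear alongside the hidden units
    # in the design matrix seen by the output layer.
    Z = np.hstack([X, H, np.ones((n, 1))])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)  # train only the output layer
    return W_h, b_h, beta

def predict_rvfln(X, W_h, b_h, beta):
    H = np.tanh(X @ W_h + b_h)
    Z = np.hstack([X, H, np.ones((X.shape[0], 1))])
    return Z @ beta

# Toy usage: regress a noisy sine curve.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)
params = fit_rvfln(X, y)
print(predict_rvfln(X[:5], *params))
```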
