[1910.09763] Stochastic Feedforward Neural Networks - arXiv


Computer Science > Machine Learning
arXiv:1910.09763 (cs) [Submitted on 22 Oct 2019]

Title: Stochastic Feedforward Neural Networks: Universal Approximation
Authors: Thomas Merkh, Guido Montúfar

Abstract: In this chapter we take a look at the universal approximation question for stochastic feedforward neural networks. In contrast to deterministic networks, which represent mappings from a set of inputs to a set of outputs, stochastic networks represent mappings from a set of inputs to a set of probability distributions over the set of outputs. In particular, even if the sets of inputs and outputs are finite, the class of stochastic mappings in question is not finite. Moreover, while for a deterministic function the values of all output variables can be computed independently of each other given the values of the inputs, in the stochastic setting the values of the output variables may need to be correlated, which requires that their values are computed jointly. A prominent class of stochastic feedforward networks which has played a key role in the resurgence of deep learning are deep belief networks. The representational power of these networks has been studied mainly in the generative setting, as models of probability distributions without an input, or in the discriminative setting for the special case of deterministic mappings. We study the representational power of deep sigmoid belief networks in terms of compositions of linear transformations of probability distributions, Markov kernels, that can be expressed by the layers of the network. We investigate different types of shallow and deep architectures, and the minimal number of layers and units per layer that are sufficient and necessary in order for the network to be able to approximate any given stochastic mapping from the set of inputs to the set of outputs arbitrarily well.

Subjects: Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)
Cite as: arXiv:1910.09763 [cs.LG] (or arXiv:1910.09763v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.1910.09763

Submission history: From: Thomas Merkh [v1] Tue, 22 Oct 2019 04:49:43 UTC (66 KB)
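The abstract's view of each layer as a Markov kernel can be made concrete for small binary layers, where the kernel is an explicit row-stochastic matrix. The NumPy sketch below is not from the paper: the layer sizes and random weights are hypothetical placeholders. It enumerates the kernel of a single sigmoid belief network layer (each output unit fires independently given the input) and composes two layers by matrix multiplication, which is the "composition of linear transformations of probability distributions" the abstract refers to.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_kernel(W, b):
    """Row-stochastic matrix K with K[x, y] = P(y | x) for one sigmoid layer.

    Each output unit fires independently given the input x, with probability
    sigmoid(W @ x + b)_j, so each row of K is a product of Bernoulli factors.
    Binary states x and y are enumerated as integers whose bits give the unit values.
    """
    n_out, n_in = W.shape
    K = np.zeros((2 ** n_in, 2 ** n_out))
    for i in range(2 ** n_in):
        x = np.array([(i >> k) & 1 for k in range(n_in)], dtype=float)
        p = sigmoid(W @ x + b)  # per-unit firing probabilities
        for j in range(2 ** n_out):
            y = np.array([(j >> k) & 1 for k in range(n_out)], dtype=float)
            K[i, j] = np.prod(np.where(y == 1, p, 1 - p))
    return K  # each row sums to 1 by construction

rng = np.random.default_rng(0)
# hypothetical architecture: 2 input units -> 3 hidden units -> 2 output units
K1 = layer_kernel(rng.normal(size=(3, 2)), rng.normal(size=3))
K2 = layer_kernel(rng.normal(size=(2, 3)), rng.normal(size=2))

# The kernel of the whole network is the composition (matrix product) of the
# layer kernels; marginalizing the hidden layer can correlate the output units
# even though each individual layer factorizes given its input.
K = K1 @ K2
assert np.allclose(K.sum(axis=1), 1.0)
print(K)  # row x gives the distribution over the four joint output states
```

Exhaustive enumeration like this is only feasible for a handful of binary units; the point is to illustrate the composition-of-kernels viewpoint that the paper studies, not to provide a scalable implementation.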


