How to Configure the Number of Layers and Nodes in a Neural Network


By Jason Brownlee on July 27, 2018 in Deep Learning Performance. Last Updated on August 6, 2019.

Artificial neural networks have two main hyperparameters that control the architecture or topology of the network: the number of layers and the number of nodes in each hidden layer.

You must specify values for these parameters when configuring your network.

The most reliable way to configure these hyperparameters for your specific predictive modeling problem is via systematic experimentation with a robust test harness.

This can be a tough pill to swallow for beginners to the field of machine learning looking for an analytical way to calculate the optimal number of layers and nodes, or easy rules of thumb to follow.

In this post, you will discover the roles of layers and nodes and how to approach the configuration of a multilayer perceptron neural network for your predictive modeling problem.

After reading this post, you will know:

- The difference between single-layer and multiple-layer perceptron networks.
- The value of having one or more hidden layers in a network.
- Five approaches for configuring the number of layers and nodes in a network.

Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.

Let's get started.

How to Configure the Number of Layers and Nodes in a Neural Network. Photo by Ryan, some rights reserved.

Overview

This post is divided into four sections; they are:

1. The Multilayer Perceptron
2. How to Count Layers?
3. Why Have Multiple Layers?
4. How Many Layers and Nodes to Use?
The Multilayer Perceptron

A node, also called a neuron or perceptron, is a computational unit that has one or more weighted input connections, a transfer function that combines the inputs in some way, and an output connection.

Nodes are then organized into layers to comprise a network.

A single-layer artificial neural network, also called a single-layer network, has a single layer of nodes, as its name suggests. Each node in the single layer connects directly to an input variable and contributes to an output variable.

"Single-layer networks have just one layer of active units. Inputs connect directly to the outputs through a single layer of weights. The outputs do not interact, so a network with N outputs can be treated as N separate single-output networks."
— Page 15, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, 1999.

A single-layer network can be extended to a multiple-layer network, referred to as a Multilayer Perceptron. A Multilayer Perceptron, or MLP for short, is an artificial neural network with more than a single layer.

It has an input layer that connects to the input variables, one or more hidden layers, and an output layer that produces the output variables.

"The standard multilayer perceptron (MLP) is a cascade of single-layer perceptrons. There is a layer of input nodes, a layer of output nodes, and one or more intermediate layers. The interior layers are sometimes called 'hidden layers' because they are not directly observable from the system's inputs and outputs."
— Page 31, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, 1999.

We can summarize the types of layers in an MLP as follows:

- Input Layer: Input variables, sometimes called the visible layer.
- Hidden Layers: Layers of nodes between the input and output layers. There may be one or more of these layers.
- Output Layer: A layer of nodes that produces the output variables.

Finally, there are terms used to describe the shape and capability of a neural network; for example:

- Size: The number of nodes in the model.
- Width: The number of nodes in a specific layer.
- Depth: The number of layers in a neural network.
- Capacity: The type or structure of functions that can be learned by a network configuration. Sometimes called "representational capacity".
- Architecture: The specific arrangement of the layers and nodes in the network.

How to Count Layers?

Traditionally, there is some disagreement about how to count the number of layers.

The disagreement centers around whether or not the input layer is counted. There is an argument to suggest it should not be counted because the inputs are not active; they are simply the input variables. We will use this convention; this is also the convention recommended in the book "Neural Smithing".

Therefore, an MLP that has an input layer, one hidden layer, and one output layer is a 2-layer MLP.

The structure of an MLP can be summarized using a simple notation.

This convenient notation summarizes both the number of layers and the number of nodes in each layer. The number of nodes in each layer is specified as an integer, in order from the input layer to the output layer, with the size of each layer separated by a forward-slash character ("/").

For example, a network with two variables in the input layer, one hidden layer with eight nodes, and an output layer with one node would be described using the notation: 2/8/1.

I recommend using this notation when describing the layers and their size for a Multilayer Perceptron neural network.

Why Have Multiple Layers?

Before we look at how many layers to specify, it is important to think about why we would want to have multiple layers.

A single-layer neural network can only be used to represent linearly separable functions. This means very simple problems where, say, the two classes in a classification problem can be neatly separated by a line. If your problem is relatively simple, perhaps a single-layer network would be sufficient.

Most problems that we are interested in solving are not linearly separable.

A Multilayer Perceptron can be used to represent convex regions. This means that, in effect, it can learn to draw shapes around examples in some high-dimensional space that separate and classify them, overcoming the limitation of linear separability.
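To make the limitation of linear separability concrete, here is a minimal sketch (not from the original post) of an MLP with one hidden layer of two threshold units that computes XOR, the classic function no single-layer network can represent. The weights are hand-picked for illustration rather than learned:

```python
def step(z):
    """Threshold transfer function: fire if the weighted sum is positive."""
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: two threshold units with hand-picked weights.
    h1 = step(x1 + x2 - 0.5)   # fires if at least one input is on
    h2 = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    # Output layer: "at least one, but not both".
    return step(h1 - h2 - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_mlp(a, b))   # prints the XOR truth table: 0, 1, 1, 0
```

In the 2/2/1 notation introduced above, this is a 2-layer MLP: the two inputs are not counted, the hidden layer has two nodes, and the output layer has one.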
In fact, there is a theoretical finding by Lippmann in the 1987 paper "An introduction to computing with neural nets" that shows that an MLP with two hidden layers is sufficient for creating classification regions of any desired shape. This is instructive, although it should be noted that no indication is given of how many nodes to use in each layer or how to learn the weights.

A further theoretical finding and proof has shown that MLPs are universal approximators: with one hidden layer, an MLP can approximate any function that we require.

"Specifically, the universal approximation theorem states that a feedforward network with a linear output layer and at least one hidden layer with any 'squashing' activation function (such as the logistic sigmoid activation function) can approximate any Borel measurable function from one finite-dimensional space to another with any desired non-zero amount of error, provided that the network is given enough hidden units."
— Page 198, Deep Learning, 2016.

This is an often-cited theoretical finding and there is a ton of literature on it. In practice, we again have no idea how many nodes to use in the single hidden layer for a given problem, nor how to learn or set their weights effectively. Further, many counterexamples have been presented of functions that cannot directly be learned via a single one-hidden-layer MLP, or that require an infinite number of nodes.

Even for those functions that can be learned via a sufficiently large one-hidden-layer MLP, it can be more efficient to learn them with two (or more) hidden layers.

"Since a single sufficiently large hidden layer is adequate for approximation of most functions, why would anyone ever use more? One reason hangs on the words 'sufficiently large'. Although a single hidden layer is optimal for some functions, there are others for which a single-hidden-layer solution is very inefficient compared to solutions with more layers."
— Page 38, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, 1999.

How Many Layers and Nodes to Use?

With the preamble of MLPs out of the way, let's get down to your real question. How many layers should you use in your Multilayer Perceptron, and how many nodes per layer?

In this section, we will enumerate five approaches to solving this problem.
1) Experimentation

In general, when I'm asked how many layers and nodes to use for an MLP, I often reply:

I don't know. Use systematic experimentation to discover what works best for your specific dataset.

I still stand by this answer.

In general, you cannot analytically calculate the number of layers or the number of nodes to use per layer in an artificial neural network to address a specific real-world predictive modeling problem.

The number of layers and the number of nodes in each layer are model hyperparameters that you must specify.

You are likely to be the first person to attempt to address your specific problem with a neural network. No one has solved it before you. Therefore, no one can tell you the answer of how to configure the network.

You must discover the answer using a robust test harness and controlled experiments. For example, see the post "How to Evaluate the Skill of Deep Learning Models".

Regardless of the heuristics you might encounter, all answers will come back to the need for careful experimentation to see what works best for your specific dataset.

2) Intuition

The network can be configured via intuition.

For example, you may have an intuition that a deep network is required to address a specific predictive modeling problem.

A deep model provides a hierarchy of layers that build up increasing levels of abstraction from the space of the input variables to the output variables.

Given an understanding of the problem domain, we may believe that a deep hierarchical model is required to sufficiently solve the prediction problem, in which case we may choose a network configuration that has many layers of depth.

"Choosing a deep model encodes a very general belief that the function we want to learn should involve composition of several simpler functions. This can be interpreted from a representation learning point of view as saying that we believe the learning problem consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation."
— Page 201, Deep Learning, 2016.

This intuition can come from experience with the domain, experience with modeling problems with neural networks, or some mixture of the two.

In my experience, intuitions are often invalidated via experiments.
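The experimentation approach above boils down to a loop: fit and score each candidate configuration several times, then compare mean skill. Here is a minimal sketch of such a harness. The `fit_and_score` function is a hypothetical stand-in of my own; in a real harness it would train an MLP with the given configuration and return, for example, cross-validated accuracy on your dataset:

```python
import statistics

def fit_and_score(n_layers, n_nodes, seed):
    # Hypothetical stand-in: a real harness would train and evaluate a
    # model here. This deterministic formula only makes the sketch runnable.
    return 0.7 + 0.01 * n_layers + 0.001 * n_nodes + 0.001 * (seed % 3)

def evaluate_config(n_layers, n_nodes, n_repeats=5):
    """Repeat the fit/score cycle to average out training variance."""
    scores = [fit_and_score(n_layers, n_nodes, seed=i) for i in range(n_repeats)]
    return statistics.mean(scores), statistics.stdev(scores)

for n_layers, n_nodes in [(1, 8), (2, 8), (2, 16)]:
    mean, std = evaluate_config(n_layers, n_nodes)
    print(f"{n_layers} layers x {n_nodes} nodes: {mean:.3f} (+/- {std:.3f})")
```

The repeats matter: neural network training is stochastic, so a single score per configuration is not a reliable basis for comparison.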
3) Go For Depth

In their important textbook on deep learning, Goodfellow, Bengio, and Courville highlight that, empirically, deep neural networks appear to perform better on problems of interest.

Specifically, they state the choice of using deep neural networks as a statistical argument in cases where depth may be intuitively beneficial.

"Empirically, greater depth does seem to result in better generalization for a wide variety of tasks. [...] This suggests that using deep architectures does indeed express a useful prior over the space of functions the model learns."
— Page 201, Deep Learning, 2016.

We may use this argument to suggest that using deep networks, those with many layers, may be a heuristic approach to configuring networks for challenging predictive modeling problems.

This is similar to the advice to start with Random Forest and Stochastic Gradient Boosting on a predictive modeling problem with tabular data, to quickly get an idea of an upper bound on model skill prior to testing other methods.

4) Borrow Ideas

A simple, but perhaps time-consuming, approach is to leverage findings reported in the literature.

Find research papers that describe the use of MLPs on instances of prediction problems similar in some way to your problem. Note the configuration of the networks used in those papers and use them as a starting point for the configurations to test on your problem.

Transferability of model hyperparameters that result in skillful models from one problem to another is a challenging open problem, and the reason why model hyperparameter configuration is more art than science.

Nevertheless, the network layers and numbers of nodes used on related problems are a good starting point for testing ideas.

5) Search

Design an automated search to test different network configurations.

You can seed the search with ideas from the literature and from intuition.

Some popular search strategies include:

- Random: Try random configurations of layers and nodes per layer.
- Grid: Try a systematic search across the number of layers and nodes per layer.
- Heuristic: Try a directed search across configurations, such as a genetic algorithm or Bayesian optimization.
- Exhaustive: Try all combinations of layers and numbers of nodes; this might be feasible for small networks and datasets.
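The random strategy from the list above can be sketched in a few lines. The `score` function here is a hypothetical placeholder of my own; in practice it would be the repeated-evaluation test harness from the experimentation approach:

```python
import random

def score(n_layers, n_nodes):
    # Hypothetical placeholder for a real evaluation; it peaks at 2 layers
    # of 8 nodes so the sketch has something to find.
    return 1.0 - abs(n_layers - 2) * 0.1 - abs(n_nodes - 8) * 0.01

def random_search(n_trials=50, seed=1):
    """Sample random (layers, nodes) configurations and keep the best."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = (rng.randint(1, 5), rng.choice([2, 4, 8, 16, 32]))
        s = score(*config)
        if s > best_score:
            best_config, best_score = config, s
    return best_config, best_score

best, s = random_search()
print("best configuration (layers, nodes):", best)
```

Swapping the sampling loop for nested loops over fixed candidate lists turns this into the grid strategy; the heuristic and exhaustive strategies differ only in how the candidate configurations are generated.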
This can be challenging with large models, large datasets, and combinations of the two. Some ideas to reduce or manage the computational burden include:

- Fit models on a smaller subset of the training dataset to speed up the search.
- Aggressively bound the size of the search space.
- Parallelize the search across multiple server instances (e.g. use the Amazon EC2 service).

I recommend being systematic if time and resources permit.

More

I have seen countless heuristics for how to estimate the number of layers and either the total number of neurons or the number of neurons per layer.

I do not want to enumerate them; I'm skeptical that they add practical value beyond the special cases on which they are demonstrated.

If this area is interesting to you, perhaps start with "Section 4.4 Capacity versus Size" in the book "Neural Smithing". It summarizes a ton of findings in this area. The book is dated from 1999, so there are another nearly 20 years of ideas to wade through if you're up for it.

Also, see some of the discussions linked in the Further Reading section (below).

Did I miss your favorite method for configuring a neural network? Or do you know a good reference on the topic? Let me know in the comments below.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

- An introduction to computing with neural nets, 1987.
- How many hidden layers and nodes?, 2009.

Books

- Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, 1999.
- Deep Learning, 2016.

Articles

- Artificial neural network, Wikipedia.
- Universal approximation theorem, Wikipedia.
- How many hidden layers should I use?, comp.ai.neural-nets FAQ.

Discussions

- How to choose the number of hidden layers and nodes in a feedforward neural network?
- Number of nodes in hidden layers of neural network.
- multi-layer perceptron (MLP) architecture: criteria for choosing number of hidden layers and size of the hidden layer?
- In deep learning, how do I select the optimal number of layers and neurons?

Summary

In this post, you discovered the role of layers and nodes and how to configure a multilayer perceptron neural network.

Specifically, you learned:

- The difference between single-layer and multiple-layer perceptron networks.
- The value of having one or more hidden layers in a network.
- Five approaches for configuring the number of layers and nodes in a network.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.

68 Responses to How to Configure the Number of Layers and Nodes in a Neural Network

Marius Lindauer, July 27, 2018:
Thanks for the blog post. There is indeed a large amount of recent research going into answering this question automatically, dubbed (neural) architecture search. Here is a list of papers which I maintain: https://www.automl.org/automl/literature-on-neural-architecture-search/

Jason Brownlee: Thanks.

Salim, July 29, 2018:
Great! Thanks for the blog post. There is also an interesting post here which tries to address the same question: https://towardsdatascience.com/beginners-ask-how-many-hidden-layers-neurons-to-use-in-artificial-neural-networks-51466afa0d3e

Jason Brownlee: Thanks for sharing.

Aditi Machine, July 29, 2018:
This is a great blog and nice information.

Jason Brownlee: Thanks.
Adam, November 6, 2018:
Hi, very nice summary, thank you very much! I'm a deep learning researcher working in an interdisciplinary team at Univ Edi. May I ask about the template you used to create this site? It looks quite professional and great!

Jason Brownlee: Thanks, you can learn more about the software I use for the site here: https://machinelearningmastery.com/faq/single-faq/what-software-do-you-use-to-run-your-website

Hooman, February 16, 2019:
Thank you for such a great post, but I have a question. Let's say our image size is 64*64*3; what would be the number of nodes in our input layer?

Jason Brownlee: I would recommend using a CNN and perhaps try 32 filters?

Hooman: Sorry for the ambiguity in my question. Suppose I'm using a CNN and I have a picture of size 64*64*3; what would be the number of nodes in my input layer?

Jason Brownlee: It would be input_shape=(64,64,3) if using channels-last format.

Hooman: Thanks.

Kamal, July 22, 2019:
Will the number of neurons in hidden layers mentioned here work for RNN/LSTM as well?

Jason Brownlee: Perhaps test and compare results with different configurations?
William Armstrong, February 25, 2019:
Hi Jason, there is practically no way to know ahead of time how many layers or nodes you will need for a certain neural network learning task. I have a solution: a neural network, called ALNfitDeep, which can automatically *grow* during training to fit the problem. Software to do this is at https://github.com/Bill-Armstrong/. There is a new executable release available, which you can get by clicking where it says "4 releases" near the top of the main page. You can forget the source code for now. Use the Help button. Since the release is new, I would appreciate any feedback on problems you encounter. From my point of view, neural nets which can't learn automatically based on the problem are a total waste of time. Also, having to use a lot of valuable data for validation is a waste. My nets measure the noise variance and then train on all of the data not used in testing. My nets can grow to tens of thousands of nodes, yet the execution of the learned function remains very fast (because very little has to actually be computed for a given input). The secret is that all computation of linear functions is in the first layer. Instead of a one-input squashing function, there are two-input non-linearities: max and min. People have to have the courage to try it. I will help.

Jason Brownlee: Thanks for the note. I have played with "growing" and "pruning" nets since the late 1990s; I remain skeptical. A sensitivity analysis of model capacity vs. skill is reliable and repeatable for me.

John, March 30, 2019:
Hi Jason, I have 2 questions.
1. What if I want to predict a financial time series (e.g. Forex, stock price) and I have decided to use an MLP with 2 hidden layers and 4 neurons in each hidden layer? How exactly do I split up my dataset, including the input? I'm assuming that the 4 neurons will be the Open, Close, High and Low values. Will my input values be the majority of my dataset in total? And then for the first hidden layer, a subset of the dataset: some of the High values for one of the neurons, some of the Low values for the second neuron, and so on, and then the same in the other hidden layer?
2. Do you have a Python script example for iterating through different example layers?
Thank you very much!
Jason Brownlee: If you have 4 classes as output, then the data must be prepared to match this expectation with a one hot encoding: https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/ Data would not be split based on the number of nodes; I'm not sure I follow, sorry. Here's an example of a model for multi-class classification: https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/

james, April 29, 2019:
Hi, may I ask whether the number of layers and nodes is related to how many training inputs there are? If more inputs, more nodes?

Jason Brownlee: Not really. They are unrelated.

Mohamad Jaber, July 12, 2021:
Hi Jason, thanks again for your wonderful tutorials. Rephrasing the question by "james": does having more training samples require additional neurons? I have designed a neural network model based on the input shapes, which is also advised by many empirical rules of thumb. My question is, if I have datasets of 100 or 100k samples (each representative enough), shall I leave the model shape fixed, or grow it (perhaps linearly) with the increase of the samples? Because I've noticed some complexities arise in the dataset as it grows.

Jason Brownlee: It may or it may not. We cannot know for sure for a given dataset and model combination.

SHABBEER BASHA, June 18, 2019:
There is an interrelation between the number of layers and nodes per layer. Please have a look into our paper https://arxiv.org/abs/1902.02771. Thank you.

Jason Brownlee: Thanks for sharing.

Saad, May 2, 2019:
Thank you, Jason, for the post. Is there a rule of thumb for the number of units when you want to increase the number of hidden layers? Let's say, for example, that your model has decent performance with 1 hidden layer and 30 units; would choosing 2 hidden layers mean you should decrease the number of units for each of these layers, or could you even increase it?

Jason Brownlee: Not really, sorry. Test, and use a robust test harness so that the results are reliable.
carlos, July 8, 2019:
Let's say we want to differentiate between clear and blurred images; can a CNN train a model to do that, and how do you go about it?

Jason Brownlee: Yes, perhaps a classification problem with a binary prediction (blur vs. no-blur).

Daniel J. Dick, July 27, 2019:
MLP? Or MLFFN? Is there a way to use the simpler perceptron update algorithm without using derivatives or backprop, or without separating the layers with a non-linear activation, without having the whole thing collapse into the equivalent of a single linear layer, as Minsky pointed out way back?

Jason Brownlee: There may be; I don't have material on it, sorry. We moved away from the simple Perceptron because backprop on an MLP works really well in general.

erfan basiri, October 14, 2019:
Hi Jason. I have a network with 4 hidden layers, each of which has 32 nodes. I use the Adam optimizer and leaky ReLU. What is the name of my network? Is it a simple MLP? Can I call it a deep network?

Jason Brownlee: It is an MLP; a deep MLP if you like.

Bayangmbe, October 23, 2019:
Hello Jason, I have a forecasting project using machine learning to predict agricultural crops. I need to build an algorithm to predict crops based on field size, local climate, season and soil chemical components (such as mineral salts, phosphorus ions, potassium and nitrates, moisture and gases in the air) at the input. The output will be a list of optimized crops. Which method should I use? Any link to guide me would be useful.

Jason Brownlee: That sounds like a fun project! I recommend following this process as a first step: https://machinelearningmastery.com/start-here/#process

Bayangmbe: Thanks so much! I will give you feedback. I will present this prototype at Panama City in a competition; I have been selected as a finalist. I wouldn't want us to see this project as just a fun project! If you can redirect it to make it more interesting, that would be great!
Kamil, November 20, 2019:
"A single-layer neural network can only be used to represent linearly separable functions." I think this statement is wrong. I understand that "a feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly." Do you mean a single layer with only one single neuron?

Jason Brownlee: No, a network with a single layer of nodes.

Kamil: Oh yes, I checked it and everything is correct. The quote should read "a feedforward network with a single **hidden** layer", which would be the universal approximation theorem. But a single-layer neural network has no hidden layers at all, so it can't do anything more than linear separation; the simplest example is that it can't compute XOR.

Habib Kedir, March 21, 2020:
Hi Jason, I am working on neural machine translation. One of my examiners asked me how many input, hidden and output layers are in my experiment, and I had nothing to answer. The parallel corpus used is 7050 sentence pairs, with a maximum input length of 25 and output length of 20. I used 100 dimensions, and the vocabulary (unique words in input) is 12700. Would you help me?

Jason Brownlee: Perhaps test different configurations and discover what works best for your model and dataset?

Mike Janson, April 8, 2020:
The reference to Deep Learning and the universal approximation theorem is incorrect: while the above reference states "p. 198", it's actually on p. 192 of the 2016 edition.

Jason Brownlee: Thanks Mike.

Robin Scott, April 14, 2020:
Hi Jason, thanks for the info regarding hidden layer structure selection. I wanted to ask if you were familiar with using metaheuristics to train a network, and whether different training strategies need different model structures. For example, if you were using the Iris dataset with 5 hidden neurons (one layer) when training with backpropagation, do you think it would be appropriate to use the same number of hidden layers and neurons if you were to train using PSO or SA? In other words, does the training technique influence the number of hidden neurons or layers?
Jason Brownlee: Great question! Backprop remains the most efficient training algorithm, regardless of choice of architecture. Metaheuristics could be useful for finding the architecture to train, though. I have seen many AutoML and NAS (network architecture search) algorithms that use an evolutionary algorithm at their core.

Robin Scott: It's funny because there are a lot of resources on using metaheuristics to find optimal network hyperparameters or hidden structure, but not a whole lot on training. Part of my current interest is in that area, and I can see why BP is generally preferred for training. I've implemented a GA-trained NN in lieu of BP, and while it seems to converge nicely, it sure is slow (I'm talking 50x slower for equivalent networks). I've still yet to implement a PSO-NN, but it's interesting to think about. A bio-inspired network trained by a bio-inspired metaheuristic has a nice ring to it.

Some Dude, July 5, 2020:
Here's my question: if we have a summation function that takes the sum of the weighted inputs and forwards it to the activation function, how do we count the layers? For example, a single-layer perceptron with 2 inputs and 2 weights, where the question specifically mentions that we have a summation function and an activation function: do we count summation and activation as 1 layer or 2 layers?

Jason Brownlee: Typically you count hidden layers only.

Usama, July 5, 2020:
1. Construct a dataset having 4 inputs against two input variables; you also have to assume a target output for each input. 2. Construct a topology for a neural network having at least 5 neurons (the number of hidden layers and the number of neurons in each layer will be of your own choice). 3. Assume initial weights of your own choice and run a complete iteration (for all four inputs). I have to submit this assignment; can anyone help?

Jason Brownlee: Perhaps start here: https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/ Or here: https://machinelearningmastery.com/implement-backpropagation-algorithm-scratch-python/ Perhaps contact your teacher directly; after all, you have already paid for their help.
mohamed, August 10, 2020:
What is the minimum number of layers in deep learning?

Jason Brownlee: The minimum would be 0 hidden layers, e.g. connecting the inputs/visible layer directly to the output layer.

Goona Faramarzi, October 29, 2020:
Hello, thanks for your good tutorial. I'm working on breast cancer detection using deep learning. I'm a beginner but have studied many different articles, and I can't improve my CNN's performance. What should I do?

Jason Brownlee: You're welcome. You can discover hundreds of tutorials on how to improve neural nets on this blog; perhaps start here: https://machinelearningmastery.com/start-here/#better

Seeven Amic, November 3, 2020:
Dear Jason, great tutorial! Should hidden layers have the same number of neurons? If yes, why? A link to resources is OK! Thanks.

Jason Brownlee: Thanks. No, you can have any number of nodes in each layer. See the "Further Reading" section for resources.

Sri Harsha, December 14, 2020:
Hi Jason, the content is great. I have a doubt: I want to use a 2-output regression model with an input size of 5. How many hidden layers and how many nodes do I need to use?

Jason Brownlee: There is no standard way to configure the model; use some trial and error and discover what works best for your dataset.

Sri Harsha: Yes, the training loss was 2.99 and the validation loss was 4.7, and it was not decreasing further. I have used 2 hidden layers with 4 neurons each (1st hidden layer = ReLU, 2nd hidden layer = exponential), 4 input nodes each normalised to (0,1), and 2 output nodes. Any suggestions and modifications of the network so that both losses can come below 1.5 or so? Thanks in advance.

Jason Brownlee: Yes, the suggestions here will get you started: https://machinelearningmastery.com/start-here/#better

Andrew Hoerner, February 15, 2021:
Two simple-minded questions: Is the updating of neuron weights done locally, by impulses that propagate backwards from outcome success, or by a separate process running alongside the neural net?
Andrew Hoerner (continued): Is the flow of neuron output signals between layers necessarily one-way? If not, what can we say about the desirability and configuration of such feedback loop connections?

Jason Brownlee: Yes, weights are updated from the output layer based on error, then back through the net to the input layer. Flow is forward for inference, backward for error correction/weight updates. Other net types can have loops and/or internal state, and things get harder to train, e.g. RNNs like LSTMs using backprop through time: https://machinelearningmastery.com/gentle-introduction-backpropagation-time/

Fatima, March 1, 2021:
Hi Dr. Jason, I'm working with MLP and LSTM deep learning algorithms. To tune the best structure for these algorithms, I started by tuning the number of hidden neurons in each hidden layer, starting with three hidden layers. I kept the neuron count that best achieved my goal (high specificity), then tuned the number of hidden layers from 3 to 8, kept the best number of hidden layers, and continued with the other hyperparameters. Is this way of choosing the number of hidden neurons and then the number of hidden layers correct? And do you have papers that support this flow?

Jason Brownlee: Ideally we would optimize all aspects of the model at once, but it is very computationally expensive. Instead, in practice, we often have to optimize one thing at a time.

ivan, July 17, 2021:
Hi, I have a question: how many nodes can the output layer have? Must it be just 1 node, or can it have more?

Jason Brownlee: If you are predicting one value, then it must have one node. If you are predicting multiple values, then multiple nodes.

Daniel Blanck, July 28, 2021:
I really appreciate the information; it was very clear and understandable. I would like to cite your work in one of my projects. How should I do that?

Jason Brownlee: Thanks, see this: https://machinelearningmastery.com/faq/single-faq/how-do-i-reference-or-cite-a-book-or-blog-post


