The Nature of Code
by Daniel Shiffman
Chapter 10. Neural Networks

“You can’t process me with a normal brain.”
— Charlie Sheen

We’re at the end of our story. This is the last official chapter of this book (though I envision additional supplemental material for the website and perhaps new chapters in the future). We began with inanimate objects living in a world of forces and gave those objects desires, autonomy, and the ability to take action according to a system of rules. Next, we allowed those objects to live in a population and evolve over time. Now we ask: What is each object’s decision-making process? How can it adjust its choices by learning over time? Can a computational entity process its environment and generate a decision?

The human brain can be described as a biological neural network—an interconnected web of neurons transmitting elaborate patterns of electrical signals. Dendrites receive input signals and, based on those inputs, fire an output signal via an axon. Or something like that. How the human brain actually works is an elaborate and complex mystery, one that we certainly are not going to attempt to tackle in rigorous detail in this chapter.

Figure 10.1

The good news is that developing engaging animated systems with code does not require scientific rigor or accuracy, as we’ve learned throughout this book. We can simply be inspired by the idea of brain function.

In this chapter, we’ll begin with a conceptual overview of the properties and features of neural networks and build the simplest possible example of one (a network that consists of a single neuron). Afterwards, we’ll examine strategies for creating a “Brain” object that can be inserted into our Vehicle class and used to determine steering. Finally, we’ll also look at techniques for visualizing and animating a network of neurons.
10.1 Artificial Neural Networks: Introduction and Application

Computer scientists have long been inspired by the human brain. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, “A logical calculus of the ideas immanent in nervous activity,” they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.

Their work, and the work of many scientists and researchers that followed, was not meant to accurately describe how the biological brain works. Rather, an artificial neural network (which we will now simply refer to as a “neural network”) was designed as a computational model based on the brain to solve certain kinds of problems.

It’s probably pretty obvious to you that there are problems that are incredibly simple for a computer to solve, but difficult for you. Take the square root of 964,324, for example. A quick line of code produces the value 982, a number Processing computed in less than a millisecond. There are, on the other hand, problems that are incredibly simple for you or me to solve, but not so easy for a computer. Show any toddler a picture of a kitten or puppy and they’ll be able to tell you very quickly which one is which. Say hello and shake my hand one morning and you should be able to pick me out of a crowd of people the next day. But need a machine to perform one of these tasks? Scientists have already spent entire careers researching and implementing complex solutions.

The most common application of neural networks in computing today is to perform one of these “easy-for-a-human, difficult-for-a-machine” tasks, often referred to as pattern recognition. Applications range from optical character recognition (turning printed or handwritten scans into digital text) to facial recognition. We don’t have the time or need to use some of these more elaborate artificial intelligence algorithms here, but if you are interested in researching neural networks, I’d recommend the books Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig and AI for Game Developers by David M. Bourg and Glenn Seemann.
Figure 10.2

A neural network is a “connectionist” computational system. The computational systems we write are procedural; a program starts at the first line of code, executes it, and goes on to the next, following instructions in a linear fashion. A true neural network does not follow a linear path. Rather, information is processed collectively, in parallel throughout a network of nodes (the nodes, in this case, being neurons).

Here we have yet another example of a complex system, much like the ones we examined in Chapters 6, 7, and 8. The individual elements of the network, the neurons, are simple. They read an input, process it, and generate an output. A network of many neurons, however, can exhibit incredibly rich and intelligent behaviors.

One of the key elements of a neural network is its ability to learn. A neural network is not just a complex system, but a complex adaptive system, meaning it can change its internal structure based on the information flowing through it. Typically, this is achieved through the adjusting of weights. In the diagram above, each line represents a connection between two neurons and indicates the pathway for the flow of information. Each connection has a weight, a number that controls the signal between the two neurons. If the network generates a “good” output (which we’ll define later), there is no need to adjust the weights. However, if the network generates a “poor” output—an error, so to speak—then the system adapts, altering the weights in order to improve subsequent results.

There are several strategies for learning, and we’ll examine two of them in this chapter.

Supervised Learning — Essentially, a strategy that involves a teacher that is smarter than the network itself. For example, let’s take the facial recognition example. The teacher shows the network a bunch of faces, and the teacher already knows the name associated with each face. The network makes its guesses, then the teacher provides the network with the answers. The network can then compare its answers to the known “correct” ones and make adjustments according to its errors. Our first neural network in the next section will follow this model.

Unsupervised Learning — Required when there isn’t an example data set with known answers. Imagine searching for a hidden pattern in a data set. An application of this is clustering, i.e. dividing a set of elements into groups according to some unknown pattern. We won’t be looking at any examples of unsupervised learning in this chapter, as this strategy is less relevant for our examples.

Reinforcement Learning — A strategy built on observation. Think of a little mouse running through a maze. If it turns left, it gets a piece of cheese; if it turns right, it receives a little shock. (Don’t worry, this is just a pretend mouse.) Presumably, the mouse will learn over time to turn left. Its neural network makes a decision with an outcome (turn left or right) and observes its environment (yum or ouch). If the observation is negative, the network can adjust its weights in order to make a different decision the next time. Reinforcement learning is common in robotics. At time t, the robot performs a task and observes the results. Did it crash into a wall or fall off a table? Or is it unharmed? We’ll look at reinforcement learning in the context of our simulated steering vehicles.
This ability of a neural network to learn, to make adjustments to its structure over time, is what makes it so useful in the field of artificial intelligence. Here are some standard uses of neural networks in software today.

Pattern Recognition — We’ve mentioned this several times already and it’s probably the most common application. Examples are facial recognition, optical character recognition, etc.

Time Series Prediction — Neural networks can be used to make predictions. Will the stock rise or fall tomorrow? Will it rain or be sunny?

Signal Processing — Cochlear implants and hearing aids need to filter out unnecessary noise and amplify the important sounds. Neural networks can be trained to process an audio signal and filter it appropriately.

Control — You may have read about recent research advances in self-driving cars. Neural networks are often used to manage steering decisions of physical vehicles (or simulated ones).

Soft Sensors — A soft sensor refers to the process of analyzing a collection of many measurements. A thermometer can tell you the temperature of the air, but what if you also knew the humidity, barometric pressure, dew point, air quality, air density, etc.? Neural networks can be employed to process the input data from many individual sensors and evaluate them as a whole.

Anomaly Detection — Because neural networks are so good at recognizing patterns, they can also be trained to generate an output when something occurs that doesn’t fit the pattern. Think of a neural network monitoring your daily routine over a long period of time. After learning the patterns of your behavior, it could alert you when something is amiss.

This is by no means a comprehensive list of applications of neural networks. But hopefully it gives you an overall sense of the features and possibilities. The thing is, neural networks are complicated and difficult. They involve all sorts of fancy mathematics. While this is all fascinating (and incredibly important to scientific research), a lot of the techniques are not very practical in the world of building interactive, animated Processing sketches. Not to mention that in order to cover all this material, we would need another book—or more likely, a series of books.

So instead, we’ll begin our last hurrah in the nature of code with the simplest of all neural networks, in an effort to understand how the overall concepts are applied in code. Then we’ll look at some Processing sketches that generate visual results inspired by these concepts.
10.2 The Perceptron

Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.

Figure 10.3: The perceptron

A perceptron follows the “feed-forward” model, meaning inputs are sent into the neuron, are processed, and result in an output. In the diagram above, this means the network (one neuron) reads from left to right: inputs come in, output goes out.

Let’s follow each of these steps in more detail.

Step 1: Receive inputs.

Say we have a perceptron with two inputs—let’s call them x1 and x2.

Input 0: x1 = 12
Input 1: x2 = 4

Step 2: Weight inputs.

Each input that is sent into the neuron must first be weighted, i.e. multiplied by some value (often a number between -1 and 1). When creating a perceptron, we’ll typically begin by assigning random weights. Here, let’s give the inputs the following weights:

Weight 0: 0.5
Weight 1: -1

We take each input and multiply it by its weight.

Input 0 * Weight 0 ⇒ 12 * 0.5 = 6
Input 1 * Weight 1 ⇒ 4 * -1 = -4

Step 3: Sum inputs.

The weighted inputs are then summed.

Sum = 6 + -4 = 2

Step 4: Generate output.

The output of a perceptron is generated by passing that sum through an activation function. In the case of a simple binary output, the activation function is what tells the perceptron whether to “fire” or not. You can envision an LED connected to the output signal: if it fires, the light goes on; if not, it stays off.

Activation functions can get a little bit hairy. If you start reading one of those artificial intelligence textbooks looking for more info about activation functions, you may soon find yourself reaching for a calculus textbook. However, with our friend the simple perceptron, we’re going to do something really easy. Let’s make the activation function the sign of the sum. In other words, if the sum is a positive number, the output is 1; if it is negative, the output is -1.

Output = sign(sum) ⇒ sign(2) ⇒ +1
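The four steps above can be checked with a few lines of plain Java (shown here outside of a Processing sketch so the arithmetic stands alone; the class and method names are just for this illustration):

```java
// Verifying the worked example above: inputs {12, 4}, weights {0.5, -1}.
public class PerceptronStep {
    // Steps 1-3: weight each input and accumulate the total.
    static float weightedSum(float[] inputs, float[] weights) {
        float sum = 0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sum;
    }

    // Step 4: the sign activation function, +1 for positive sums, -1 otherwise.
    static int activate(float sum) {
        return sum > 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        float sum = weightedSum(new float[]{12, 4}, new float[]{0.5f, -1});
        System.out.println(sum);           // 6 + -4 = 2.0
        System.out.println(activate(sum)); // sign(2) = +1
    }
}
```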
Let’s review and condense these steps so we can implement them with a code snippet.

The Perceptron Algorithm:

For every input, multiply that input by its weight.
Sum all of the weighted inputs.
Compute the output of the perceptron based on that sum passed through an activation function (the sign of the sum).

Let’s assume we have two arrays of numbers, the inputs and the weights. For example:

float[] inputs = {12, 4};
float[] weights = {0.5, -1};
“For every input” implies a loop that multiplies each input by its corresponding weight. Since we need the sum, we can add up the results in that very loop.

// Steps 1 and 2: Add up all the weighted inputs.
float sum = 0;
for (int i = 0; i < inputs.length; i++) {
  sum += inputs[i]*weights[i];
}

// Step 3: Passing the sum through an activation function
float output = activate(sum);

// The activation function
int activate(float sum) {
  // Return a 1 if positive, -1 if negative.
  if (sum > 0) return 1;
  else return -1;
}
10.3 Simple Pattern Recognition Using a Perceptron

Now that we understand the computational process of a perceptron, we can look at an example of one in action. We stated that neural networks are often used for pattern recognition applications, such as facial recognition. Even simple perceptrons can demonstrate the basics of classification, as in the following example.

Figure 10.4

Consider a line in two-dimensional space. Points in that space can be classified as living on either one side of the line or the other. While this is a somewhat silly example (since there is clearly no need for a neural network; we can determine on which side a point lies with some simple algebra), it shows how a perceptron can be trained to recognize points on one side versus another.

Let’s say a perceptron has 2 inputs (the x- and y-coordinates of a point). Using a sign activation function, the output will either be -1 or 1—i.e., the input data is classified according to the sign of the output. In the above diagram, we can see how each point is either below the line (-1) or above (+1).

The perceptron itself can be diagrammed as follows:

Figure 10.5

We can see how there are two inputs (x and y), a weight for each input (weight x and weight y), as well as a processing neuron that generates the output.

There is a pretty significant problem here, however. Let’s consider the point (0,0). What if we send this point into the perceptron as its input: x = 0 and y = 0? What will the sum of its weighted inputs be? No matter what the weights are, the sum will always be 0! But this can’t be right—after all, the point (0,0) could certainly be above or below various lines in our two-dimensional world.

To avoid this dilemma, our perceptron will require a third input, typically referred to as a bias input. A bias input always has the value of 1 and is also weighted. Here is our perceptron with the addition of the bias:

Figure 10.6

Let’s go back to the point (0,0). Here are our inputs:

0 * weight for x = 0
0 * weight for y = 0
1 * weight for bias = weight for bias

The output is the sum of the above three values: 0 plus 0 plus the bias’s weight. Therefore, the bias, on its own, answers the question of where (0,0) is in relation to the line. If the bias’s weight is positive, (0,0) is above the line; if negative, it is below. It “biases” the perceptron’s understanding of the line’s position relative to (0,0).
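To see the bias at work, here is a small plain-Java check (the weight values are made up purely for illustration): with the point (0,0), the x and y terms contribute nothing, so the sign of the bias weight alone determines the output.

```java
// The point (0,0) with a bias input: the third input is always 1,
// so only the bias weight affects the sum.
public class BiasDemo {
    static int feedforward(float[] inputs, float[] weights) {
        float sum = 0;
        for (int i = 0; i < weights.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sum > 0 ? 1 : -1; // sign activation
    }

    public static void main(String[] args) {
        float[] origin = {0, 0, 1}; // x, y, and the constant bias input
        // Hypothetical weights; only the last (bias) weight matters for (0,0).
        System.out.println(feedforward(origin, new float[]{0.3f, -0.8f, 0.5f}));  // 1
        System.out.println(feedforward(origin, new float[]{0.3f, -0.8f, -0.5f})); // -1
    }
}
```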
10.4 Coding the Perceptron

We’re now ready to assemble the code for a Perceptron class. The only data the perceptron needs to track are the input weights, and we could use an array of floats to store these.

class Perceptron {
  float[] weights;

The constructor could receive an argument indicating the number of inputs (in this case three: x, y, and a bias) and size the array accordingly.
  Perceptron(int n) {
    weights = new float[n];
    for (int i = 0; i < weights.length; i++) {
      // The weights are picked randomly to start.
      weights[i] = random(-1, 1);
    }
  }

  // Receive the inputs and generate an output: sum the
  // weighted inputs and pass that sum through the activation function.
  int feedforward(float[] inputs) {
    float sum = 0;
    for (int i = 0; i < weights.length; i++) {
      sum += inputs[i]*weights[i];
    }
    return activate(sum);
  }

  // The activation function
  int activate(float sum) {
    // Return a 1 if positive, -1 if negative.
    if (sum > 0) return 1;
    else return -1;
  }

  // Train the network against known data.
  void train(float[] inputs, int desired) {
    int guess = feedforward(inputs);
    float error = desired - guess;
    // Adjust all the weights according to the error and a small
    // learning constant c (e.g. float c = 0.01;).
    for (int i = 0; i < weights.length; i++) {
      weights[i] += c * error * inputs[i];
    }
  }
}
// The Perceptron
Perceptron ptron;
// 2,000 training points
Trainer[] training = new Trainer[2000];
int count = 0;

// The formula for a line
float f(float x) {
  return 2*x + 1;
}

void setup() {
  size(640, 360);
  ptron = new Perceptron(3);
  // Make 2,000 training points.
  for (int i = 0; i < training.length; i++) {
    float x = random(-width/2, width/2);
    float y = random(-height/2, height/2);
    // A Trainer object stores a point's inputs (x, y, bias)
    // and its known answer: +1 above the line, -1 below.
    int answer = 1;
    if (y < f(x)) answer = -1;
    training[i] = new Trainer(x, y, answer);
  }
}

void draw() {
  background(255);
  translate(width/2, height/2);
  // Train with one point per frame, then draw each point
  // filled or unfilled according to the perceptron's current guess.
  ptron.train(training[count].inputs, training[count].answer);
  count = (count + 1) % training.length;
  for (int i = 0; i < count; i++) {
    stroke(0);
    if (ptron.feedforward(training[i].inputs) > 0) noFill();
    else fill(0);
    ellipse(training[i].inputs[0], training[i].inputs[1], 8, 8);
  }
}
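Stripped of the drawing code, the same training loop can be exercised in plain Java. This is a sketch rather than the book's exact example: the learning constant (0.01), the fixed random seeds, and the small margin kept around the line are choices made here so that the run is repeatable; training simply repeats until the perceptron classifies every point correctly.

```java
import java.util.Random;

// A plain-Java sketch of the perceptron training loop for the line y = 2x + 1.
public class PerceptronTraining {
    float[] weights;
    final float c = 0.01f;       // learning constant (a chosen value)
    Random rng = new Random(42); // fixed seed so the run is repeatable

    PerceptronTraining(int n) {
        weights = new float[n];
        for (int i = 0; i < n; i++) weights[i] = rng.nextFloat() * 2 - 1;
    }

    int feedforward(float[] inputs) {
        float sum = 0;
        for (int i = 0; i < weights.length; i++) sum += inputs[i] * weights[i];
        return sum > 0 ? 1 : -1; // sign activation
    }

    void train(float[] inputs, int desired) {
        float error = desired - feedforward(inputs);
        for (int i = 0; i < weights.length; i++) weights[i] += c * error * inputs[i];
    }

    // Train on random points labeled against y = 2x + 1; return true once
    // a full pass over the training set produces no mistakes.
    static boolean run() {
        PerceptronTraining ptron = new PerceptronTraining(3);
        Random rng = new Random(7);
        float[][] points = new float[500][];
        int[] answers = new int[500];
        for (int i = 0; i < points.length; i++) {
            float x, y;
            do {
                x = rng.nextFloat() * 2 - 1;
                y = rng.nextFloat() * 6 - 3;
            } while (Math.abs(y - (2 * x + 1)) < 0.1f); // keep a margin around the line
            points[i] = new float[]{x, y, 1}; // x, y, and the bias input
            answers[i] = y > 2 * x + 1 ? 1 : -1;
        }
        for (int epoch = 0; epoch < 2000; epoch++) {
            int mistakes = 0;
            for (int i = 0; i < points.length; i++) {
                if (ptron.feedforward(points[i]) != answers[i]) mistakes++;
                ptron.train(points[i], answers[i]);
            }
            if (mistakes == 0) return true; // converged
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(run() ? "converged" : "did not converge");
    }
}
```

Because the points are linearly separable (and kept away from the line itself), the perceptron convergence theorem guarantees the loop eventually stops making mistakes.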
Exercise 10.1 Instead of using the supervised learning model above, can you train the neural network to find the right weights by using a genetic algorithm?

Exercise 10.2 Visualize the perceptron itself. Draw the inputs, the processing node, and the output.

10.5 A Steering Perceptron

While classifying points according to their position above or below a line was a useful demonstration of the perceptron in action, it doesn’t have much practical relevance to the other examples throughout this book. In this section, we’ll take the concepts of a perceptron (array of inputs, single output), apply it to steering behaviors, and demonstrate reinforcement learning along the way.

We are now going to take significant creative license with the concept of a neural network. This will allow us to stick with the basics and avoid some of the highly complex algorithms associated with more sophisticated neural networks. Here we’re not so concerned with following rules outlined in artificial intelligence textbooks—we’re just hoping to make something interesting and brain-like.

Remember our good friend the Vehicle class? You know, that one for making objects with a location, velocity, and acceleration? That could obey Newton’s laws with an applyForce() function and move around the window according to a variety of steering rules?

What if we added one more variable to our Vehicle class?
class Vehicle {
  // Giving the vehicle a brain!
  Perceptron brain;

  PVector location;
  PVector velocity;
  PVector acceleration;
  // etc...

Here’s our scenario. Let’s say we have a Processing sketch with an ArrayList of targets and a single vehicle.

Figure 10.9

Let’s say that the vehicle seeks all of the targets. According to the principles of Chapter 6, we would next write a function that calculates a steering force towards each target, applying each force one at a time to the object’s acceleration. Assuming the targets are an ArrayList of PVector objects, it would look something like:
void seek(ArrayList<PVector> targets) {
  for (PVector target : targets) {
    // For every target, apply a steering force towards the target.
    PVector force = seek(target);
    applyForce(force);
  }
}
In Chapter 6, we also examined how we could create more dynamic simulations by weighting each steering force according to some rule. For example, we could say that the farther you are from a target, the stronger the force.

void seek(ArrayList<PVector> targets) {
  for (PVector target : targets) {
    PVector force = seek(target);
    float d = PVector.dist(target, location);
    float weight = map(d, 0, width, 0, 5);
    // Weighting each steering force individually
    force.mult(weight);
    applyForce(force);
  }
}
But what if instead we could ask our brain (i.e. perceptron) to take in all the forces as an input, process them according to weights of the perceptron inputs, and generate an output steering force? What if we could instead say:

void seek(ArrayList<PVector> targets) {
  // Make an array of inputs for our brain.
  PVector[] forces = new PVector[targets.size()];
  for (int i = 0; i < forces.length; i++) {
    forces[i] = seek(targets.get(i));
  }
  // The brain processes the forces and produces the output steering force.
  PVector output = brain.feedforward(forces);
  applyForce(output);
}
class Network {
  // The network stores an ArrayList of neurons...
  ArrayList<Neuron> neurons;
  // ...and a location.
  PVector location;

  Network(float x, float y) {
    location = new PVector(x, y);
    neurons = new ArrayList<Neuron>();
  }

  // We can add a Neuron to the network.
  void addNeuron(Neuron n) {
    neurons.add(n);
  }

  // We can draw the entire network.
  void display() {
    pushMatrix();
    translate(location.x, location.y);
    for (Neuron n : neurons) {
      n.display();
    }
    popMatrix();
  }
}
Now we can pretty easily make the diagram above.

Network network;

void setup() {
  size(640, 360);
  // Make a Network.
  network = new Network(width/2, height/2);
  // Make the Neurons.
  Neuron a = new Neuron(-200, 0);
  Neuron b = new Neuron(0, 100);
  Neuron c = new Neuron(0, -100);
  Neuron d = new Neuron(200, 0);
  // Add the Neurons to the network.
  network.addNeuron(a);
  network.addNeuron(b);
  network.addNeuron(c);
  network.addNeuron(d);
}

void draw() {
  background(255);
  // Show the network.
  network.display();
}

The above yields:

What’s missing, of course, is the connection. We can consider a Connection object to be made up of three elements: two neurons (from Neuron a to Neuron b) and a weight.
class Connection {
  // A connection is between two neurons.
  Neuron a;
  Neuron b;
  // A connection has a weight.
  float weight;

  Connection(Neuron from, Neuron to, float w) {
    weight = w;
    a = from;
    b = to;
  }

  // A connection is drawn as a line.
  void display() {
    stroke(0);
    strokeWeight(weight*4);
    line(a.location.x, a.location.y, b.location.x, b.location.y);
  }
}

Once we have the idea of a Connection object, we can write a function (let’s put it inside the Network class) that connects two neurons together—the goal being that in addition to making the neurons in setup(), we can also connect them.
void setup() {
  size(640, 360);
  network = new Network(width/2, height/2);

  Neuron a = new Neuron(-200, 0);
  Neuron b = new Neuron(0, 100);
  Neuron c = new Neuron(0, -100);
  Neuron d = new Neuron(200, 0);

  // Making connections between the neurons
  network.connect(a, b);
  network.connect(a, c);
  network.connect(b, d);
  network.connect(c, d);

  network.addNeuron(a);
  network.addNeuron(b);
  network.addNeuron(c);
  network.addNeuron(d);
}

The Network class therefore needs a new function called connect(), which makes a Connection object between the two specified neurons.
void connect(Neuron a, Neuron b) {
  // Connection has a random weight.
  Connection c = new Connection(a, b, random(1));
  // But what do we do with the Connection object?
}

Presumably, we might think that the Network should store an ArrayList of connections, just like it stores an ArrayList of neurons. While useful, in this case such an ArrayList is not necessary and is missing an important feature that we need. Ultimately we plan to “feed forward” the neurons through the network, so the Neuron objects themselves must know to which neurons they are connected in the “forward” direction. In other words, each neuron should have its own list of Connection objects. When a connects to b, we want a to store a reference of that connection so that it can pass its output to b when the time comes.
void connect(Neuron a, Neuron b) {
  Connection c = new Connection(a, b, random(1));
  a.addConnection(c);
}

In some cases, we also might want Neuron b to know about this connection, but in this particular example we are only going to pass information in one direction.

For this to work, we have to add an ArrayList of connections to the Neuron class. Then we implement the addConnection() function that stores the connection in that ArrayList.
class Neuron {
  PVector location;
  // The neuron stores its connections.
  ArrayList<Connection> connections;

  Neuron(float x, float y) {
    location = new PVector(x, y);
    connections = new ArrayList<Connection>();
  }

  // Adding a connection to this neuron
  void addConnection(Connection c) {
    connections.add(c);
  }
The neuron’s display() function can draw the connections as well. And finally, we have our network diagram.
Example 10.3: Neural network diagram
  void display() {
    stroke(0);
    strokeWeight(1);
    fill(0);
    ellipse(location.x, location.y, 16, 16);
    // Drawing all the connections
    for (Connection c : connections) {
      c.display();
    }
  }
}

10.8 Animating Feed Forward

An interesting problem to consider is how to visualize the flow of information as it travels throughout a neural network. Our network is built on the feed forward model, meaning that an input arrives at the first neuron (drawn on the left-hand side of the window) and the output of that neuron flows across the connections to the right until it exits as output from the network itself.

Our first step is to add a function to the network to receive this input, which we’ll make a random number between 0 and 1.
void setup() {
  // All our old network setup code, plus
  // a new function to send in an input:
  network.feedforward(random(1));
}

The network, which manages all the neurons, can choose to which neurons it should apply that input. In this case, we’ll do something simple and just feed a single input into the first neuron in the ArrayList, which happens to be the left-most one.
class Network {

  // A new function to feed an input into the neuron
  void feedforward(float input) {
    Neuron start = neurons.get(0);
    start.feedforward(input);
  }

What did we do? Well, we made it necessary to add a function called feedforward() in the Neuron class that will receive the input and process it.
class Neuron {

  void feedforward(float input) {
    // What do we do with the input?
  }

If you recall from working with our perceptron, the standard task that the processing unit performs is to sum up all of its inputs. So if our Neuron class adds a variable called sum, it can simply accumulate the inputs as they are received.
class Neuron {

  float sum = 0;

  void feedforward(float input) {
    // Accumulate the sums.
    sum += input;
  }
The neuron can then decide whether it should “fire,” or pass an output through any of its connections to the next layer in the network. Here we can create a really simple activation function: if the sum is greater than 1, fire!

void feedforward(float input) {
  sum += input;
  // Activate the neuron and fire the outputs?
  if (sum > 1) {
    fire();
    // If we’ve fired off our output, we can reset our sum to 0.
    sum = 0;
  }
}
Now, what do we do in the fire() function? If you recall, each neuron keeps track of its connections to other neurons. So all we need to do is loop through those connections and feedforward() the neuron’s output. For this simple example, we’ll just take the neuron’s sum variable and make it the output.

void fire() {
  for (Connection c : connections) {
    // The Neuron sends the sum out through all of its connections.
    c.feedforward(sum);
  }
}
Here’s where things get a little tricky. After all, our job here is not to actually make a functioning neural network, but to animate a simulation of one. If the neural network were just continuing its work, it would instantly pass those inputs (multiplied by the connection’s weight) along to the connected neurons. We’d say something like:

class Connection {

  void feedforward(float val) {
    b.feedforward(val*weight);
  }

But this is not what we want. What we want to do is draw something that we can see traveling along the connection from Neuron a to Neuron b.

Let’s first think about how we might do that. We know the location of Neuron a; it’s the PVector a.location. Neuron b is located at b.location. We need to start something moving from Neuron a by creating another PVector that will store the path of our traveling data.
PVector sender = a.location.get();

Once we have a copy of that location, we can use any of the motion algorithms that we’ve studied throughout this book to move along this path. Here—let’s pick something very simple and just interpolate from a to b.
sender.x = lerp(sender.x, b.location.x, 0.1);
sender.y = lerp(sender.y, b.location.y, 0.1);

Along with the connection’s line, we can then draw a circle at that location:
stroke(0);
line(a.location.x, a.location.y, b.location.x, b.location.y);
fill(0);
ellipse(sender.x, sender.y, 8, 8);

This resembles the following:

Figure 10.16

OK, so that’s how we might move something along the connection. But how do we know when to do so? We start this process the moment the Connection object receives the “feed forward” signal. We can keep track of this process by employing a simple boolean to know whether the connection is sending or not. Before, we had:
void feedforward(float val) {
  b.feedforward(val*weight);
}

Now, instead of sending the value on straightaway, we’ll trigger an animation:
class Connection {

  boolean sending = false;
  PVector sender;
  float output;

  void feedforward(float val) {
    // Sending is now true.
    sending = true;
    // Start the animation at the location of Neuron A.
    sender = a.location.get();
    // Store the output for when it is actually time to feed it forward.
    output = val*weight;
  }

Notice how our Connection class now needs three new variables. We need a boolean “sending” that starts as false and that will track whether or not the connection is actively sending (i.e. animating). We need a PVector “sender” for the location where we’ll draw the traveling dot. And since we aren’t passing the output along this instant, we’ll need to store it in a variable that will do the job later.

The feedforward() function is called the moment the connection becomes active. Once it’s active, we’ll need to call another function continuously (each time through draw()), one that will update the location of the traveling data.
void update() {
  if (sending) {
    // As long as we’re sending, interpolate our points.
    sender.x = lerp(sender.x, b.location.x, 0.1);
    sender.y = lerp(sender.y, b.location.y, 0.1);
  }
}

We’re missing a key element, however. We need to check if the sender has arrived at location b, and if it has, feed forward that output to the next neuron.
void update() {
  if (sending) {
    sender.x = lerp(sender.x, b.location.x, 0.1);
    sender.y = lerp(sender.y, b.location.y, 0.1);
    // How far are we from neuron b?
    float d = PVector.dist(sender, b.location);
    // If we’re close enough (within one pixel), pass on the output and turn off sending.
    if (d < 1) {
      b.feedforward(output);
      sending = false;
    }
  }
}

Let’s look at the Connection class all together, as well as our new draw() function.
Example 10.4: Animating a neural network diagram
void draw() {
  background(255);
  // The Network now has a new update() method that updates all of the Connection objects.
  network.update();
  network.display();
  if (frameCount % 30 == 0) {
    // We are choosing to send in an input every 30 frames.
    network.feedforward(random(1));
  }
}

class Connection {
  // The Connection’s data
  float weight;
  Neuron a;
  Neuron b;

  // Variables to track the animation
  boolean sending = false;
  PVector sender;
  float output = 0;

  Connection(Neuron from, Neuron to, float w) {
    weight = w;
    a = from;
    b = to;
  }

  // The Connection is active, with data traveling from a to b.
  void feedforward(float val) {
    output = val*weight;
    sender = a.location.get();
    sending = true;
  }

  // Update the animation if it is sending.
  void update() {
    if (sending) {
      sender.x = lerp(sender.x, b.location.x, 0.1);
      sender.y = lerp(sender.y, b.location.y, 0.1);
      float d = PVector.dist(sender, b.location);
      if (d < 1) {
        b.feedforward(output);
        sending = false;
      }
    }
  }

  // Draw the connection as a line and a traveling circle.
  void display() {
    stroke(0);
    strokeWeight(1 + weight*4);
    line(a.location.x, a.location.y, b.location.x, b.location.y);
    if (sending) {
      fill(0);
      strokeWeight(1);
      ellipse(sender.x, sender.y, 16, 16);
    }
  }
}

Exercise 10.5 The network in the above example was manually configured by setting the location of each neuron and its connections with hard-coded values. Rewrite this example to generate the network’s layout via an algorithm. Can you make a circular network diagram? A random one? An example of a multi-layered network is below.
Exercise 10.6 Rewrite the example so that each neuron keeps track of its forward and backward connections. Can you feed inputs through the network in any direction?

Exercise 10.7 Instead of lerp(), use moving bodies with steering forces to visualize the flow of information in the network.

The Ecosystem Project

Step 10 Exercise:

Try incorporating the concept of a “brain” into your creatures.

Use reinforcement learning in the creatures’ decision-making process.
Create a creature that features a visualization of its brain as part of its design (even if the brain itself is not functional).
Can the ecosystem as a whole emulate the brain? Can elements of the environment be neurons and the creatures act as inputs and outputs?

The end

If you’re still reading, thank you! You’ve reached the end of the book. But for as much material as this book contains, we’ve barely scratched the surface of the world we inhabit and of techniques for simulating it. It’s my intention for this book to live as an ongoing project, and I hope to continue adding new tutorials and examples to the book’s website as well as expand and update the printed material. Your feedback is truly appreciated, so please get in touch via email at ([email protected]) or by contributing to the GitHub repository, in keeping with the open-source spirit of the project. Share your work. Keep in touch. Let’s be two with nature.
Licenses

The book’s text and illustrations are licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.

All of the book’s source code is licensed under the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

Colophon

This book was generated with The Magic Book Project.

This book would not have been possible without the generous support of Kickstarter backers.

This book is typeset on the web in Georgia with headers in Proxima Nova.

Please report any mistakes in the book or bugs in the source with a GitHub issue or contact me at daniel at shiffman dot net.

Author

Daniel Shiffman is a professor of the Interactive Telecommunications Program at New York University.

He is the author of Learning Processing.