Neural Processor - WikiChip


A neural processor, a neural processing unit (NPU), or simply an AI accelerator is a specialized circuit that implements all the control and arithmetic logic necessary to execute machine learning algorithms, typically by operating on predictive models such as artificial neural networks (ANNs) or random forests (RFs).

NPUs sometimes go by similar names such as a tensor processing unit (TPU), neural network processor (NNP), and intelligence processing unit (IPU), as well as vision processing unit (VPU) and graph processing unit (GPU).
Contents
1 Motivation
2 Overview
2.1 Classification
2.2 Data types
3 List of machine learning processors
4 See also

Motivation
Executing deep neural networks such as convolutional neural networks means performing a very large number of multiply-accumulate (MAC) operations, typically in the billions to trillions of iterations. The large number of iterations comes from the fact that for each given input (e.g., an image), a single convolution comprises iterating over every channel, and then every pixel, and then performing a very large number of MAC operations. Many such convolutions are found in a single model, and the model itself must be executed on each new input (e.g., every camera frame capture).

Unlike traditional central processing units, which are great at processing highly serialized instruction streams, machine learning workloads tend to be highly parallelizable, much like a graphics processing unit. Moreover, unlike a GPU, NPUs can benefit from vastly simpler logic because their workloads tend to exhibit high regularity in the computational patterns of deep neural networks. For those reasons, many custom-designed dedicated neural processors have been developed.

Overview
A neural processing unit (NPU) is a well-partitioned circuit that comprises all the control and arithmetic logic components necessary to execute machine learning algorithms. NPUs are designed to accelerate the performance of common machine learning tasks such as image classification, machine translation, object detection, and various other predictive models. NPUs may be part of a large SoC, a plurality of NPUs may be instantiated on a single chip, or they may be part of a dedicated neural-network accelerator.

Classification
Generally speaking, NPUs are classified as either training or inference. For chips that are capable of performing both operations, the two phases are still generally performed independently.
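The Motivation section above puts the per-input MAC count in the billions; a minimal sketch (with hypothetical layer dimensions, not taken from any particular model) shows how quickly a single convolution layer reaches that scale:

```python
# Estimate the multiply-accumulate (MAC) count of one convolution layer.
# Each output pixel of each output channel performs one MAC per input
# channel per kernel element. All dimensions below are hypothetical.

def conv_macs(out_h: int, out_w: int, out_ch: int,
              in_ch: int, k_h: int, k_w: int) -> int:
    """MACs for one convolution layer with the given dimensions."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# A 3x3 convolution over a 56x56 feature map, 256 channels in and out:
macs = conv_macs(56, 56, 256, 256, 3, 3)
print(f"{macs:,} MACs")  # 1,849,688,064 MACs for this single layer
```

A model with dozens of such layers, re-executed on every new camera frame, multiplies this figure accordingly, which is the workload NPUs target.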
Training - NPUs designed to accelerate training are designed to accelerate the curating of new models. This is a highly compute-intensive operation that involves inputting an existing dataset (typically tagged) and iterating over the dataset, adjusting model weights and biases in order to produce an ever-more accurate model. Correcting a wrong prediction involves propagating back through the layers of the network and guessing a correction. The process involves guessing again and again until a correct answer is achieved at the desired accuracy.
Inference - NPUs designed to accelerate inference operate on complete models. Inference accelerators are designed to input a new piece of data (e.g., a new camera shot), process it through the already-trained model, and generate a result.

Data types
This section is empty; you can help add the missing info by editing this page.

List of machine learning processors
Alibaba: Ali-NPU
AlphaICs: Gluon
Amazon: AWS Inferentia
Apple: Neural Engine
Arm: ML Processor
Baidu: Kunlun
Bitmain: Sophon
Cambricon: MLU
Cerebras: CS-1
Flex Logix: InferX
Nepes: NM500 (General Vision tech)
GreenWaves: GAP8
Google: TPU
Gyrfalcon Technology: Lightspeeur
Graphcore: IPU
Groq:
Habana: HL Series
Hailo: Hailo-8
Huawei: Ascend
Intel: NNP, Myriad, EyeQ, GNA
Kendryte: K210
Mythic
NationalChip: Neural Processing Unit (NPU)
NEC: SX-Aurora (VPU)
Nvidia: NVDLA, Xavier
Samsung: Neural Processing Unit (NPU)
Rockchip: RK3399Pro (NPU)
Amlogic: Khadas VIM3 (NPU)
Synaptics: SyNAP (NPU)
Tesla: FSD Chip
Vathys
Wave Computing: DPU
Brainchip: Akida (NPU & NPEs)
This list is incomplete; you can help by expanding it.

See also
accelerators

Retrieved from "https://en.wikichip.org/w/index.php?title=neural_processor&oldid=99525"
Category: accelerators
This page was last modified on 4 November 2021, at 15:39.


