HIGHLIGHTS
- who: Sulaiman Sadiq et al., from the University of Southampton, United Kingdom, have published the research work "TinyOps: ImageNet Scale Deep Learning on Microcontrollers" in the Journal: (JOURNAL)
- what: The authors develop TinyOps, an inference engine that accelerates the inference latency of models stored in slow external memory. Using a partitioning and overlaying scheme driven by the available Direct Memory Access (DMA) peripheral, it combines the advantages of external memory (size) and internal memory (speed). The work shows that the TinyOps design space is more efficient than the internal-only or external-only memory design spaces and should be explored . . .