Dynamics and information import in recurrent neural networks

HIGHLIGHTS

SUMMARY

At present, the field of Machine Learning is strongly dominated by feed-forward neural networks, which can be optimized to approximate an arbitrary vectorial function y = f(x) between the input and output spaces (Funahashi, 1989; Hornik et al., 1989; Cybenko, 1992). Recurrent neural networks (RNNs), in contrast, contain feedback connections among their units. Due to this built-in feedback, RNNs can learn robust representations (Farrell et al., 2019) and are ideally suited to process sequences of data such as natural language (LeCun et al., 2015; Schilling et al., 2021a), or to perform sequential decision tasks such as spatial navigation (Banino et al., 2018; Gerum et al., 2020). Understanding and controlling . . .
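To make the contrast concrete, the following minimal sketch (not taken from the paper; the weight shapes, tanh nonlinearity, and random initialization are illustrative assumptions) shows the defining feature of an RNN: the hidden state is fed back into the network at every time step, so the final state depends on the whole input sequence rather than on a single input vector.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 3, 5
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights (assumed shapes)
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent (feedback) weights


def rnn_step(h, x):
    """One state update: the new state depends on the previous state via feedback."""
    return np.tanh(W_rec @ h + W_in @ x)


# Process a sequence of inputs, carrying the hidden state across time steps.
sequence = rng.normal(size=(10, n_in))  # 10 time steps of 3-dimensional input
h = np.zeros(n_hidden)
for x_t in sequence:
    h = rnn_step(h, x_t)

print(h)  # the final state reflects the entire input sequence, not just the last input
```

A feed-forward network would instead compute its output from a single input in one pass, with no state carried between presentations; the recurrence above is what allows sequential data to be accumulated over time.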

     
