As one of the most popular events in the neuroscience calendar, this year’s BNA Festive Symposium kick-started the BNA’s annual theme for 2022, Artificial Intelligence: what can AI tell us about biological intelligence, and how can it be used to interrogate neuroscience data and learn more about the nervous system?

The festive symposium was held online, greatly increasing accessibility and allowing people to join a day filled with neuroscience, AI, and festive fun from any part of the world.
At first glance, it seems almost impossible to tie Christmas, Artificial Intelligence, and Neuroscience together in a single themed talk!

However, the fantastic speakers not only managed to tie each talk to the festive season (especially Dr Dan Jamieson, who involved the audience in a journey to save Christmas through AI), but some took it a step further and dressed up for the occasion! The whole panel team opened the symposium and welcomed the audience in wonderful Santa hats and Christmas jumpers, which set the mood for the whole day of festive science talks!

The day was split into five sessions of talks and announcements of the BNA award and prize winners. The first session was chaired by the BNA president, Prof Rik Henson, and saw talks by Prof Christopher Summerfield and Dr Dan Jamieson. Prof Summerfield discussed the trajectory of developments in AI and how these can bring major changes not only to our everyday lives but also to neuroscience research. He focused in particular on how artificial intelligence invites us to consider the limitations of current neuroscience research and how we could use this to develop new research opportunities. For example, one key point raised at the beginning of his talk was the tendency of neuroscience research to study parts of the brain in isolation.

Despite successful collaboration between labs, scientists tend to focus their investigations on one small part of the brain, ignoring the rest. The really hard problem, Prof Summerfield argues, is figuring out how different functions are integrated and how different brain regions communicate with each other. AI can offer solutions here: whether one likes it or not, an AI agent will not work unless its individual components are pieced together. Thus, instead of studying how memory, perception, or decision-making work in isolation, one has to integrate all of these to produce an AI agent. As AI technology advances, such problems could change how neuroscience research is done, helping scientists embrace the notion of structured computation by understanding how whole networks work.

Then Dr Jamieson, CEO and co-founder of Biorelate, discussed the power of AI and deep learning to process and understand scientific articles. With the help of such processing software, he argued, scientists can not only save time during literature searches (the software can auto-curate over 30 million articles in under six hours) but also make connections between relatively distant concepts and accelerate research intelligence. Dr Jamieson’s creativity did not go unnoticed, for his talk was framed as a Christmas fable: could Biorelate save Christmas by using such powerful software to find a cure for his reindeer’s disease in time? Spoiler alert: Christmas was saved!
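For readers curious how text mining can surface connections between distant concepts, here is a minimal Python sketch of the classic “ABC” literature-based discovery idea (illustrative only: the toy abstracts and concept sets are made up, and this is not Biorelate’s actual pipeline):

```python
from itertools import combinations
from collections import defaultdict

# Toy "abstracts", each reduced to a set of extracted concepts (made-up examples,
# echoing Swanson's famous magnesium-migraine discovery).
abstracts = [
    {"magnesium", "stress", "vascular spasm"},
    {"vascular spasm", "migraine"},
    {"migraine", "serotonin"},
]

# Build a co-occurrence graph: two concepts are linked if they share an abstract.
graph = defaultdict(set)
for concepts in abstracts:
    for a, b in combinations(concepts, 2):
        graph[a].add(b)
        graph[b].add(a)

def indirect_links(a, c):
    """ABC-style discovery: A and C never co-occur, but share a bridging concept B."""
    return sorted(graph[a] & graph[c]) if c not in graph[a] else []

print(indirect_links("magnesium", "migraine"))  # ['vascular spasm']
```

At real scale the same idea runs over millions of curated articles rather than three toy sets, which is where the time savings come from.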

The second session, again chaired by Prof Henson, saw Prof Mihaela van der Schaar discuss Quantitative Epistemology. This is a new area of research, pioneered by her and members of her lab in Cambridge, as a strand of machine learning aimed at understanding, supporting, and improving human decision-making. Their work includes studying and identifying suboptimalities in beliefs and decision-making processes, and constructing support systems to empower better decision-making.

Prof Aldo Faisal then followed with a talk on harnessing the power of AI to change how we do science. His talk highlighted ways in which humans and machines can interact and focused on different methods of machine learning.

After a short break, in a session chaired by Prof Tara Spires-Jones, Dr Sadhana Sharma discussed upcoming funding opportunities at the interface of AI and neuroscience, and Prof Thomas Nowotny spoke about applications of algorithms inspired by insect anatomy. Prof Nowotny made a very interesting case for using less sophisticated, insect-inspired algorithms as the basis of more robust and efficient AI.

In the last talks of the day, Prof Eleni Vasilaki discussed sparse reservoir computing: an approach that introduces sparsity into a reservoir computing network, letting neurons with low thresholds contribute to decision making while suppressing information from neurons with high thresholds. This approach, which her team terms “SpaRCe”, optimises the sparsity level of the reservoir without affecting the reservoir dynamics, and in doing so alleviates the problem of catastrophic forgetting.
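To give a flavour of the mechanism, here is a minimal Python sketch of a random reservoir whose states pass through per-neuron thresholds before any readout. Everything here (network size, fixed thresholds, input statistics) is an illustrative assumption rather than Prof Vasilaki’s implementation; in SpaRCe the thresholds are learned alongside the readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small echo state reservoir: fixed random weights, leaky tanh update.
n_in, n_res = 3, 200
W_in = rng.normal(0.0, 0.5, (n_res, n_in))
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep the dynamics stable

def run_reservoir(inputs, leak=0.3):
    """Drive the reservoir with an input sequence and return the final state."""
    x = np.zeros(n_res)
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
    return x

# SpaRCe-style sparsification: each neuron has a threshold, and activity below it
# is silenced before reaching the linear readout, leaving the dynamics untouched.
theta = np.full(n_res, 0.1)  # fixed here; learned per neuron in the actual method

def sparsify(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

state = run_reservoir(rng.normal(size=(50, n_in)))
sparse_state = sparsify(state, theta)
print(f"active readout inputs: {np.count_nonzero(sparse_state)} / {n_res}")
```

Because only the readout side changes, the reservoir keeps its dynamics while the readout sees a much sparser representation.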

Dr George Cevora of Arca Blanca then spoke about instability in AI, portrayed through adversarial examples: a tiny but carefully designed change to a picture, imperceptible to humans, causes a machine vision system to dramatically change its classification of the image. This may pose a significant danger when deployed AI systems are misled; for example, a self-driving car could mis-recognise a STOP sign on a road, with potentially catastrophic consequences. In his talk, George argued that instability may be unavoidable in light of how we currently frame machine vision tasks, but that solutions do exist to make AI systems safe. He also postulated that humans are not immune to adversarial examples, but that their occurrence is extremely improbable.
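A tiny worked example makes this fragility concrete. The Python sketch below uses a toy linear classifier on a 28×28 “image” (an illustrative assumption, not the model George discussed) and computes how small a uniform per-pixel nudge, pointed against the classifier’s weights, is needed to flip its decision; it typically comes out at a few percent of the pixel range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "image classifier": score = w . x + b, predicted class = sign(score).
d = 28 * 28
w = rng.normal(0.0, 1.0 / np.sqrt(d), d)
b = 0.0

x = rng.uniform(0.0, 1.0, d)  # a stand-in for an input image with pixels in [0, 1]
score = w @ x + b
label = np.sign(score)

# Adversarial direction: push every pixel slightly against the current class.
eps_needed = abs(score) / np.abs(w).sum()  # per-pixel change that reaches the boundary
x_adv = x - 1.05 * eps_needed * label * np.sign(w)

print(f"per-pixel change needed: {eps_needed:.4f} (pixels lie in [0, 1])")
print("class before:", label, "class after:", np.sign(w @ x_adv + b))
```

Deep networks are far more expressive than this toy model, but the same geometry, in which many tiny coordinated changes add up to a large change in the output, is what adversarial attacks exploit.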

Finally, Dr Henry Shevlin discussed advances in the language-processing capabilities of AI and gave us a few examples of AI ‘friends’ such as Replika and Woebot. Many users seem to attribute sincere thoughts, desires, and even emotions to the systems they interact with, sometimes forming deep relationships. Yet what is the value of human-robot friendship? Cognitive scientists largely do not take the attribution of mental states to these systems seriously. This creates a dilemma for cognitive scientists in the upcoming decades: should they play the role of ‘killjoys’ and attempt to debunk the idea that these systems have mental states, or, in light of changing norms of ascription among the general public, instead attempt to revise their scientific concepts to accommodate these ‘uncanny communicators’? To end with a quote from his talk: “Deciding where to draw the mental line between machines and beings with minds is going to prove a contentious question for all of us to tackle together.”

In between these talks, the BNA 2021 awards for undergraduate and postgraduate students, as well as the prestigious awards for Outstanding Contribution to Neuroscience and Public Engagement of Neuroscience, were announced. You can follow this link to find out more about the winners.