The AI: More than Human exhibition at the Barbican Centre in London shines a spotlight on ground-breaking artificial intelligence projects by the likes of Sony CSL and Makr Shakr
AI is a buzzword we hear constantly – and now an exhibition is giving the public a taste of its future, from a simulator that uses Lego to build sustainable cities to a robot cocktail maker.
The AI: More than Human exhibition at the Barbican Centre in London launches today (16 May) and runs until 26 August.
It tells the story of AI, from its ancient roots in Japanese Shintoism and the early computing experiments of Ada Lovelace and Charles Babbage in the 19th century, through the major development leaps that began in the 1940s, to the present-day quest to create artificial life.
The gallery also explores some of the most cutting-edge projects being carried out by organisations including DeepMind, Massachusetts Institute of Technology, Sony Computer Science Laboratories (CSL), Google and Jigsaw.
Compelo was given a sneak peek at the exhibition ahead of its launch – here are some of the most exciting projects still in development that could have a profound impact on the future of AI.
Kreyon City
Lego might represent a pleasant dose of nostalgia for many adults but a team of Paris-based Sony CSL researchers are using the famous toy bricks to help build sustainable cities.
Their Kreyon City concept is an interactive infrastructure game aimed at groups of urban planners and policymakers.
Participants are tasked with recreating a whole city – including houses, industries, schools, services and infrastructure – by adding new elements or restructuring those built by others.
This emerging city is monitored by cameras, while a machine learning algorithm uses historic socio-economic data to give real-time feedback on the consequences of certain actions.
It means planners can visualise important city features – such as the number of inhabitants, employment level, waste produced and green space – and see how the same decisions in a real-life city might affect factors such as average salary, quality of life and private vehicle ownership.
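To give a rough idea of the kind of feedback loop described above, here is a minimal sketch in Python – with entirely made-up element names, weights and numbers, not Sony CSL's actual model – of how counts of Lego elements detected on the table could be translated into estimated city indicators:

```python
# Minimal sketch of a real-time feedback loop: a model standing in for one trained on
# historical socio-economic data estimates city indicators from the mix of elements
# detected by the cameras. All names and coefficients here are hypothetical.
from dataclasses import dataclass

@dataclass
class CitySnapshot:
    houses: int
    factories: int
    parks: int
    schools: int
    roads: int

# Toy "learned" weights standing in for a model fitted to historical data.
WEIGHTS = {
    "inhabitants": {"houses": 40, "factories": 5, "parks": 0, "schools": 0, "roads": 0},
    "employment":  {"houses": 0, "factories": 30, "parks": 1, "schools": 5, "roads": 2},
    "waste":       {"houses": 8, "factories": 25, "parks": -3, "schools": 2, "roads": 1},
    "green_area":  {"houses": -1, "factories": -4, "parks": 20, "schools": 1, "roads": -2},
}

def estimate_indicators(city: CitySnapshot) -> dict:
    """Return real-time estimates of city-level indicators for the current layout."""
    counts = vars(city)
    return {
        indicator: sum(weight * counts[element] for element, weight in weights.items())
        for indicator, weights in WEIGHTS.items()
    }

# Each time the cameras detect a change to the Lego city, the estimates are refreshed
# and shown back to the participants as feedback.
print(estimate_indicators(CitySnapshot(houses=12, factories=3, parks=5, schools=2, roads=20)))
```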
Sony CSL assistant researcher Bernardo Monechi, who developed the platform alongside colleagues Vittorio Loreto and Enrico Uboldi, says: “It enables people to see how the city works so they can find solutions to problems that arise.
“We can see that sometimes having many parks is good for quality of life but there could also be drawbacks in economic output.
“Using trial and error, and complex reasoning, is a great way of making the public aware of how difficult it is to solve these problems.
“Cities are the central point of unity, where innovation and great changes occur, but populations are increasing at an unprecedented level – so if we just keep building cities without thinking, there will be problems in the future.”
Kreyon City has been in development for the past two years and, prior to its installation at the Barbican Centre, was showcased in Rome with a five-metre-long city model.
The platform sets participants particular complex challenges to solve, such as reducing the number of privately owned cars or adding more public services.
Mr Monechi adds: “We’re using it as a scientific experiment to see how people interact with the data and what they understand about the algorithm behind it.
“Of course there’s many more variables to take into account but they can get a feel for how complex it is to manage a city.
“In the future, we’d like to make this into something like a ‘super-decision’ tool where urban planners come to make decisions around a table. It would be less playful and more digitalised.”
Alter 3 robot
One of the key drivers in enhancing future intelligence involves creating robots that behave like humans.
This is the ultimate goal of the Japanese team behind the Alter project, which has built an android that learns and matures by interacting with its surroundings to move autonomously.
It was created by University of Tokyo artificial life researcher Takashi Ikegami and Osaka University roboticist Hiroshi Ishiguro, in collaboration with social networking website Mixi and entertainment company Warner Music Japan.
The first Alter, unveiled at the Japan Science Museum in 2016, made movements that were governed by a neural network – a learning computer system that mimics the way a brain works – without any human input.
It also featured a series of sensors that could detect proximity, temperature, humidity and noise to influence its movements.
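To illustrate the general idea of sensor-driven autonomous movement – not the Alter team's actual software – here is a minimal Python sketch in which readings from proximity, temperature, humidity and noise sensors pass through a small neural network whose outputs set hypothetical joint angles:

```python
# A minimal sketch of sensor-driven movement: four sensor readings feed a small neural
# network whose outputs are interpreted as target joint angles. The network size,
# sensor ordering and joint count are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 4   # proximity, temperature, humidity, noise
N_HIDDEN = 16
N_JOINTS = 8    # hypothetical number of actuated joints

# Randomly initialised weights; in Alter the network's state evolves as the android
# interacts with its surroundings, rather than being trained offline.
W1 = rng.normal(scale=0.5, size=(N_HIDDEN, N_SENSORS))
W2 = rng.normal(scale=0.5, size=(N_JOINTS, N_HIDDEN))

def joint_targets(sensor_readings: np.ndarray) -> np.ndarray:
    """Map normalised sensor readings to target joint angles in radians."""
    hidden = np.tanh(W1 @ sensor_readings)
    return np.tanh(W2 @ hidden) * np.pi / 4   # keep angles within a safe range

# One step of the control loop: read sensors, compute targets, send to the actuators.
readings = np.array([0.2, 0.6, 0.5, 0.1])     # proximity, temp, humidity, noise
print(joint_targets(readings))
```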
A year later, Alter 2 arrived and learned how to conduct an orchestra for composer Keiichiro Shibuya’s opera Scary Beauty.
Alter 3, the latest incarnation, now mimics human movements as part of the learning process in becoming more humanlike than its predecessors.
It carries the same hallmarks as its cousins – a bare body that exposes the internal machinery and a face without age or gender.
Mr Ikegami, who has developed the software underpinning Alter, says: “The purpose of the robot is to see how its personality emerges in order to make a robot that has its own mind, which it has uploaded from humans.
“Copying a mind from humans to a robot is my mission. The behaviour and personality it shows is memorised in the neural network so it can eventually create its own degree of behaviours.
“Alter 2 was used as a music conductor but it can’t be the conductor by itself.
“There’s a complex metronome so it has to have its own certainty and humanlike capabilities, which is why I wanted to build a robot that could mimic people.”
Mr Ikegami, who plans to make Alter robots increase their intelligence by interacting with each other in future, says his ultimate goal involves two aspects.
“We have to think about life rather than intelligence,” he adds. “Artificial life comes first and AI is a by-product of that.
“So you have to make life-like behaviours with the robot. We don’t have to visualise the big data because AI can understand it.
“The android has its own way of thinking about the world so creating this way of thinking is based on its own embodiment.
“Secondly, we have to create a language or way of thinking for itself. Human intelligence is limited but, by collaborating with the android and making the computer do something we can’t with our brain, we can bridge the gap.
“This means the robot will help us to open up new directions in science that we can’t understand by ourselves.”
Robotic bartender
Watching cocktails being made can be a dazzling experience, but it also creates a huge queue at the bar, such is the level of attention that goes into making the best piña coladas and espresso martinis.
So perhaps having a robot whip up a tasty alcoholic concoction is the solution to achieving both quality and speed.
That’s the aim of Makr Shakr, which has created a series of robotic bartenders that can pour up to 120 drinks an hour across more than 10,000 combinations that involve 60 different spirits.
Customers can use the Makr Shakr app to select pre-made recipes or design their own by choosing their preferred spirits, juices, sodas and garnishes, while getting real-time visual feedback on each step of the cocktail-making process.
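As a rough illustration of how such an order might be represented – the ingredient names, units and step types below are hypothetical, not Makr Shakr's actual app or API – here is a minimal Python sketch of a custom recipe the robot could execute step by step:

```python
# A minimal sketch of a custom cocktail order: a list of ingredient steps the robotic
# bartender would execute in sequence. All names and quantities are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    action: str            # e.g. "pour", "shake", "stir", "muddle", "garnish"
    ingredient: str = ""
    millilitres: float = 0.0

@dataclass
class CocktailOrder:
    name: str
    steps: List[Step]

order = CocktailOrder(
    name="Custom espresso martini",
    steps=[
        Step("pour", "vodka", 40),
        Step("pour", "coffee liqueur", 20),
        Step("pour", "espresso", 30),
        Step("shake"),
        Step("garnish", "coffee beans"),
    ],
)

# The app would send each step to the robot arms, showing visual feedback as it goes.
for step in order.steps:
    print(f"{step.action:>8} {step.ingredient} {step.millilitres or ''}".rstrip())
```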
To coincide with the AI: More than Human exhibition, the Italian company’s Toni auto-bartender has taken up residence at the Barbican Centre to serve cocktails and mocktails to visitors.
Makr Shakr claims Toni – whose gestures are modelled on those of Italian dancer and choreographer Marco Pelle, of the New York Theatre Ballet – is the most advanced drink-mixing robot on the market.
It was launched in April this year and features two mechanical arms that can precisely prepare and serve any drink in seconds – shaking, stirring and muddling with co-ordinated, dance-like movements.