Yann LeCun, the chief AI scientist at Meta, recently said that machine learning models could be trained without human-labeled examples. His remarks have given new life to the debate over whether machines as intelligent as humans are possible or even a worthy goal.

“Human-level intelligence in AI is something that will remain to be seen in the near future,” EY’s global chief technology officer Nicola Morini Bianzino told Lifewire in an email interview. “We must first focus on building AI that complements human intelligence and is compatible with the functions we want it to serve.”

Smarter Machines

At a recent Meta AI event, LeCun discussed possible paths toward human-level AI. One avenue he is exploring is modeling AI training on human development; for example, researchers are looking for ways to get machines to learn about the world through observation, the same way babies do. A minimal sketch of that label-free training setup appears at the end of this section.

LeCun pointed to the way humans and animals learn. “What kind of learning do humans and animals use that we are not able to reproduce in machines? That’s the big question I’m asking myself,” he said.

But tremendous obstacles remain before AI develops anything like human-level intelligence. While enterprise AI adoption is widespread, AI is still limited in its ability to achieve human-level common sense and creativity, Bianzino said.

“Creativity is a uniquely human trait, and it is difficult to replicate this using technology,” he added. “As we think about how AI can act as software to emulate human cognition, we must carefully consider what data should be powering the software.”

The potential of AI has been studied for centuries, AI expert Meltem Ballan said in an email interview. Researchers often argue over how to mimic human perception, attention, and motivation, and open-source work is bringing AI closer to human-level perception, Ballan added.

“However, human-level intelligence has many more elements than developing algorithms and pipelining (labeling and data augmentation),” Ballan said. “We first need to understand the synergy between brain and behavior well enough to build neuron-level algorithms that follow neural firing rates and implement them over the entire process.”
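LeCun’s idea of learning from raw observation, without human labels, is generally known as self-supervised learning: the model hides part of its own input and learns to predict it back, so the data supplies its own supervision. Below is a minimal, hypothetical sketch of that setup, assuming PyTorch; the toy token sequences, sizes, and model are placeholders for illustration, not Meta’s actual approach.

```python
# Minimal sketch of self-supervised learning: the model predicts a
# masked-out token from its context, so no human labels are needed --
# the "label" comes from the data itself.
# Assumes PyTorch; the toy task, sizes, and model are illustrative only.
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 100, 32, 0   # hypothetical vocabulary size, embedding size, mask token

model = nn.Sequential(
    nn.Embedding(VOCAB, DIM),
    nn.Flatten(),
    nn.Linear(DIM * 8, VOCAB),     # predict the hidden token from 8 context tokens
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Unlabeled "data": random token sequences standing in for raw text or video frames.
    seq = torch.randint(1, VOCAB, (16, 8))
    target = seq[:, 3].clone()      # the token we will hide
    masked = seq.clone()
    masked[:, 3] = MASK_ID          # mask it out

    logits = model(masked)
    loss = loss_fn(logits, target)  # supervision comes from the data itself
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaled up to text, images, or video, the same predict-the-missing-piece recipe is what allows large models to learn from unlabeled data.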

Risks and Rewards

One area where human-level AI could be helpful is cybersecurity, which is facing a major staffing shortage, Kumar Saurabh, the CEO of LogicHub, a cybersecurity company specializing in the use of AI, said in an email.

“We urgently need to accelerate the use of AI-driven automation just to keep up,” he added. “Humans are not good at analyzing thousands of security alerts or picking out a threat from millions of data points, but machines excel at this. This is not about replacing human intelligence, but rather augmenting human capabilities and turning human experience into automation that can scale to meet demands.” A minimal sketch of that kind of machine-scale alert triage appears at the end of this section.

Maria Vircikova, the CEO of Matsuko, a real-time hologram app that leverages AI, said the real value of artificial intelligence lies in augmenting human abilities rather than creating a machine that can act by itself.

“Adding another virtual assistant—but for specific and simple tasks—is as simple as cloning a piece of software—instant, frictionless, and relatively inexpensive,” Vircikova said. “The economic impact is profound, but still, we cannot call it ‘human-level AI.’”

But if human-level AI is ever reached, the impact on society could be profound, said EY’s Bianzino. “The value of human-level AI is that the AI would become truly symbiotic with human intelligence, helping us work on complex tasks, understand the world in new ways, and drive decisions based on predictive analytics,” he added.

However, most experts agree that bias will remain a risk in the development of human-level AI. “Technologists must carefully analyze the data they are using to train these models and ensure controls are in place to prevent their own personal biases from creeping in,” Bianzino said.
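Saurabh’s point about machines sifting millions of data points is the kind of task typically handled with unsupervised anomaly detection. The following is a minimal sketch, assuming scikit-learn and synthetic alert features; the feature set and contamination rate are hypothetical and do not represent LogicHub’s pipeline.

```python
# Minimal sketch of machine triage of security alerts with unsupervised
# anomaly detection. Assumes scikit-learn; the synthetic features and the
# contamination rate are illustrative, not any vendor's actual method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic alert features: e.g. requests per minute, bytes out, failed logins.
normal = rng.normal(loc=[50, 1_000, 1], scale=[10, 200, 1], size=(10_000, 3))
threats = rng.normal(loc=[500, 50_000, 30], scale=[50, 5_000, 5], size=(10, 3))
alerts = np.vstack([normal, threats])

# Flag the small fraction of alerts that look least like the rest.
detector = IsolationForest(contamination=0.002, random_state=0)
flags = detector.fit_predict(alerts)          # -1 = anomalous, 1 = normal

print(f"{(flags == -1).sum()} alerts flagged for human review out of {len(alerts)}")
```

A pipeline like this only narrows the queue; as the quotes above frame it, the goal is augmentation, with human analysts reviewing whatever the model flags.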