AI Scientist Running Experiments On Its Own

By Brian Myers | Published


Researchers at the University of Oxford routinely collaborate with scientists from other public and private institutions. But a recent effort between the university and the University of British Columbia has raised some eyebrows since the release of a batch of research papers generated entirely by what is being heralded as an “AI Scientist,” a move that some joke brings humanity one step closer to a science fiction dystopia.

A Solid Foundation For The Future Of AI

The AI scientist, which produced methods and ideas for improving algorithms, was created jointly by teams at the University of British Columbia, the University of Oxford, and a tech startup named Sakana AI. The results are promising in that they mark a significant step toward artificial intelligence that can absorb ideas and knowledge and then apply them to what it perceives as real-world scenarios.

The AI scientist isn’t producing anything perfect, and aside from its sheer processing speed and computing power, it isn’t doing anything that exceeds what the human mind can do. But it’s certainly a solid foundation for what AI is being built to do in the future.

In Time, Human Input Won’t Be A Requirement

The hope for this particular AI scientist, as well as the others that will inevitably follow, is that it will overcome the limitations that have restricted it to consuming and processing only data generated by human input. If AI can have its “mind” opened, for lack of a better term, it will be able to work outside the constraints of the data it has been fed, truly forming its own thoughts and unlocking ideas beyond anything a human mind could conceive.

Trial And Error Will Lead To Progress

Jeff Clune, the lead professor at the University of British Columbia, maintains that the AI scientist will be able to process far more data if “the computer power feeding them” is elevated. And while Clune admits that the current AI scientist he helped oversee hasn’t produced any especially imaginative results, he’s hopeful that faster processing speeds, more data, and additional trial and error will yield the results he and the other human researchers are looking for.

Computers Teaching Computers


Clune’s efforts are also focused on designing an AI scientist capable of producing creations of its own. The researcher and professor revealed that his lab wants to develop an AI technology that, in turn, develops and controls additional AI systems. The AI that exists today can already go head-to-head with human minds and beat them at mathematical formulas and reading comprehension (as anyone who uses ChatGPT has seen), but Clune sees a need to add other attributes to the technology’s tool belt.

Potential Dangers To Consider


Clune maintains that should any AI scientist become capable of creating others in its likeness, certain safeguards will have to be developed alongside it. An AI designing and creating AI that can “misbehave” is a scenario Clune admits is “potentially dangerous,” and he emphasizes the importance of more research and proper safeguards from the start.

Source: arXiv