
Developer creates “self-healing” programs that fix themselves

An AI-generated and human-composited image of “Wolverine programming on a computer.”

Benj Edwards / Midjourney

Debugging a faulty program can be frustrating, so why not let AI do it for you? That’s what a developer who goes by “BioBootloader” did by creating Wolverine, a program that can give Python programs “regenerative healing abilities,” reports Hackaday. (Yep, just like the Marvel superhero.)

“Run your scripts with it and when they crash, GPT-4 edits them and explains what went wrong,” wrote BioBootloader in a tweet that accompanied a demonstration video. “Even if you have many bugs it’ll repeatedly rerun until everything is fixed.”

GPT-4 is a multimodal AI language model created by OpenAI and released in March, available to ChatGPT Plus subscribers and in API form to beta testers. It uses its “knowledge” of billions of documents, books, and websites scraped from the web to perform text processing tasks such as composition, language translation, and programming.

In the demo video for Wolverine, BioBootloader shows a side-by-side window display, with Python code on the left and Wolverine results on the right in a terminal. He loads a custom calculator script into which he has deliberately introduced a few bugs, then executes it.

“It runs it, it sees the crash, but then it goes and talks to GPT-4 to try to figure out how to fix it,” he says. GPT-4 returns an explanation for the program’s errors, shows the changes that it tries to make, then re-runs the program. Upon seeing new errors, GPT-4 fixes the code again, and then it runs correctly. In the end, the original Python file contains the changes added by GPT-4.

The code is available on GitHub, and the developer says the technique could be applied to other programming languages. Using Wolverine requires having an OpenAI API key for GPT-3.5 or GPT-4, and charges apply for usage. Right now, the GPT-3.5 API is open to anyone with an OpenAI account, but GPT-4 access is still restricted by a waitlist.

Recently, several experiments involving GPT-4 in recursive loops, such as Auto-GPT and BabyAGI, have attempted to give GPT-4 more “agentic” abilities that let it spin up more GPT-4 instances (agents) to perform several tasks simultaneously or act autonomously.

While it’s currently a primitive proof of concept, techniques like Wolverine illustrate a potential future where apps may be able to fix their own bugs—even unexpected ones that emerge after deployment. Of course, the implications, safety, and wisdom of allowing that to happen have not yet been fully explored.

