
Metal Toad Prompt Engineering Hackathon

Twice a year, Metal Toad celebrates innovation and collaboration with a Hackathon, aimed at embracing creativity, pushing boundaries, transforming ideas into reality and learning new technologies.

In this latest event, held in the last week of September, we embarked on a journey to learn how to work with prompt engineering, unleashing the power of Artificial Intelligence to assist in software development. This process involved writing precise instructions to get the desired responses from AI models, some tailored to writing actual programming code and others to working with graphics and design.

Teams were recruited and allowed to work on any project they found interesting, as long as they used prompt engineering for as much of the project as possible. Each team had a minimum of three people. Projects ranged from an avatar generator, to a recommendation system for books, songs, or movies, to generating tickets from natural language. Teams attempted to generate all of the hackathon code with ChatGPT.

One of the first things we noticed is that when generating programming code, ChatGPT responds very quickly, but it rarely produces the exact answer you are looking for on the first try. You need to refine your questions and add further constraints or explanations about what you are looking for.

For example, “write me a function in Python that accesses an API” produced code that was incompatible with AWS Lambda. Adding “wrap the previous code in a Lambda function” solved that, but the result used the Python “requests” library, which isn’t available in Lambda without creating a custom layer. “Write me a Lambda function that accesses an API using urllib3” finally produced code that could be used without resorting to layers. And so on.
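The end result of that back-and-forth looked roughly like the sketch below. This is an illustrative example rather than the exact code ChatGPT produced, and the endpoint URL is a placeholder; the key point is that urllib3 ships with the Lambda Python runtime, so no custom layer is required.

```python
import json
import urllib3

# urllib3 is bundled with the AWS Lambda Python runtime, so no layer is needed
http = urllib3.PoolManager()

def lambda_handler(event, context):
    # Placeholder endpoint; substitute the API you actually need to call
    url = "https://api.example.com/data"
    response = http.request("GET", url)

    return {
        "statusCode": response.status,
        "body": response.data.decode("utf-8"),
    }
```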

This showcases that you need to know how to ask for what you want in order to get helpful responses or example code from ChatGPT, and also to be able to tell whether the response will actually do what you want it to. Overall, around 40-60% of the code was generated by ChatGPT, and the remainder was hand-tweaked by humans.

Hopefully, in the future, prompt engineering will generate better responses and allow even greater gains in developer performance.
