Your AI Needs an Assistant

Today’s era of “AI”, better referred to as internet-enhanced autocomplete, is wonderful at producing lots of code - but terrible at producing correct code. And while AI is getting better over time, we still need tooling to analyze generated code for defects. Do you know what parts of your toolchain run quickly, in editors, to provide configurable reports on likely bugs and best practice violations? Linting, unit tests, and type checking!

This balanced, nuanced talk will walk through the best and worst cases for AI-generated code. That includes how dynamic analysis (tests!) and static analysis (linting! type checking!) can help resolve many of those issues:

- Knowing when to prefer first-party docs over risky AI regurgitation
- Pitfalls for common code prompts - and the common issues linting and type checking can automatically spot in AI responses
- Systems such as TypeChat for re-prompting for better results by reporting detected defects back to AIs
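That last bullet - reporting detected defects back to the AI - can be sketched as a small feedback loop. This is a hypothetical illustration, not TypeChat's actual API: `askModel` is a stand-in for a real LLM call, and `lintCode` is a toy linter that only flags `var`:

```typescript
// Hypothetical sketch of a defect-feedback loop: lint the model's
// output and, if issues are found, re-prompt with the lint report.
type LintIssue = { line: number; message: string };

// Stand-in linter: flags any use of `var` (mimicking the no-var rule).
function lintCode(code: string): LintIssue[] {
  return code
    .split("\n")
    .flatMap((line, i) =>
      /\bvar\b/.test(line)
        ? [{ line: i + 1, message: "Unexpected var, use let or const." }]
        : []
    );
}

// Stand-in model: returns a flawed draft first, then a fix once the
// prompt includes lint feedback. A real system would call an LLM here.
function askModel(prompt: string): string {
  return prompt.includes("Unexpected var")
    ? "const total = 0;"
    : "var total = 0;";
}

// Generate code, then re-prompt with the lint report until it's clean
// or we run out of attempts.
function generateWithRetries(prompt: string, maxAttempts = 3): string {
  let code = askModel(prompt);
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    const issues = lintCode(code);
    if (issues.length === 0) break;
    const report = issues
      .map((issue) => `Line ${issue.line}: ${issue.message}`)
      .join("\n");
    code = askModel(`${prompt}\n\nFix these lint issues:\n${report}`);
  }
  return code;
}

console.log(generateWithRetries("Declare a total variable."));
// → "const total = 0;"
```

The same shape works with any analyzer that produces structured diagnostics - swap the toy linter for ESLint or the TypeScript compiler API.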

To conclude, we'll pull that all together with a walkthrough of a new system for AIs that strings together tailored prompts and traditional analysis for a much better code prompt experience. You don't need to run a full v0 to get great code completions locally!

By analyzing both the inputs and the outputs of AI prompting, we can use this amazing new technology in the ways it’s made for - and avoid its all-too-common pitfalls.

I'd like to start by contrasting the benefits to developer education and productivity with the drawbacks of lower-quality code, misconceptions, and outright “hallucinations” (also known as lies). I'd especially like to show off real-world cases where exuberant AI use led developers down bad paths and had a net negative impact on their ability to function.

The points I'd like to go through specifically are:

- Automatically re-prompting for better results by reporting detected defects back to AIs
- Configuring your linter to find issues tailored to your project’s needs - in and out of AI code
- Forcing prompts to act in “untrusted” modes for AIs, so user-provided misconceptions don’t taint results
- Increasing type safety for AI communications with projects like TypeChat
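As one concrete illustration of the linter-configuration point, a flat ESLint config might enable type-checked typescript-eslint rules that catch patterns AI output frequently gets wrong. The rule selection here is an example, not a prescription:

```javascript
// eslint.config.js - illustrative flat config; the rules chosen here
// are examples of checks that often catch defects in generated code.
import tseslint from "typescript-eslint";

export default tseslint.config(
  ...tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      // Type-checked rules need type information from your tsconfig.
      parserOptions: { projectService: true },
    },
    rules: {
      // AI output often awaits non-Promises or drops awaits entirely.
      "@typescript-eslint/await-thenable": "error",
      "@typescript-eslint/no-floating-promises": "error",
      // Flag deprecated APIs a model may have learned from old docs.
      "@typescript-eslint/no-deprecated": "error",
    },
  }
);
```

These run the same way on human-written and AI-generated code, which is exactly the point: the tooling doesn't care where the code came from.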

I want to make this a balanced talk. AI has a lot of hype and a lot of detractors. I think it's useful to put forward a fair, nuanced opinion that respects where we are in the Gartner Hype Cycle for AI.

Josh Goldberg

Open Source Developer

Wakefield, Massachusetts, United States
