AI Needs Linting

Today’s era of “AI”, better referred to as internet-enhanced autocomplete, is wonderful at producing lots of code - but terrible at producing correct code. And while AIs are getting better over time, we still need tooling to analyze generated code for defects. Do you know what parts of your toolchain run quickly, in editors, to provide configurable reports on likely bugs and best practice violations? Static analysis! Your linter and type checker were born for this.

This balanced, nuanced talk will walk through the best and worst cases for AI-generated code. We’ll start by contrasting the benefits to developer education and productivity with the drawbacks of lower-quality code, misconceptions, and outright “hallucinations” (also known as lies). We’ll see real-world cases where exuberant AI use led developers down bad paths and had a net negative impact on their ability to function.

We’ll then see how static analysis can help resolve many of those issues:

- Automatically re-prompting for better results by reporting detected defects back to AIs (see the first sketch after this list)
- Configuring your linter to find issues tailored to your project’s needs - in and out of AI code (see the second sketch)
- Forcing prompts to act in “untrusted” modes for AIs, so user-provided misconceptions don’t taint results
- Increasing type safety for AI communications with projects like TypeChat (see the third sketch)
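
As one possible shape for the first bullet, here is a minimal sketch of a lint-then-re-prompt loop. It uses ESLint’s real Node.js API (`new ESLint()` and `lintText()`); the `generate` function standing in for the model call is a hypothetical placeholder, not an API named in the talk.

```ts
import { ESLint } from "eslint";

// Hypothetical stand-in: any function that returns code for a prompt.
type Generate = (prompt: string) => Promise<string>;

// Generate code, lint it, and feed any reported defects back to the model
// until the output comes back clean or we run out of attempts.
async function generateLintedCode(
  generate: Generate,
  prompt: string,
  maxAttempts = 3,
): Promise<string> {
  const eslint = new ESLint();
  let code = await generate(prompt);

  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    const [result] = await eslint.lintText(code);
    if (result.messages.length === 0) {
      return code; // No reported defects: accept this output.
    }

    // Summarize the detected defects and re-prompt with them.
    const feedback = result.messages
      .map(({ line, message, ruleId }) =>
        `Line ${line}: ${message}${ruleId ? ` (${ruleId})` : ""}`)
      .join("\n");
    code = await generate(
      `${prompt}\n\nYour previous code had these lint defects:\n${feedback}\n\n` +
        `Previous code:\n${code}\n\nFix the defects and reply with only the corrected code.`,
    );
  }

  throw new Error(`Output still had lint defects after ${maxAttempts} attempts.`);
}
```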
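
For the second bullet, tailoring the linter is ordinary ESLint configuration. A minimal sketch, assuming an ESLint v9-style flat `eslint.config.js` and the `@eslint/js` package; the specific rules are illustrative picks, not recommendations from the talk.

```ts
// eslint.config.js - a minimal flat config sketch.
import js from "@eslint/js";

export default [
  // Start from ESLint's recommended rule set.
  js.configs.recommended,
  {
    rules: {
      // Examples of rules that often catch machine-generated slip-ups:
      // loose equality and variables declared but never used.
      eqeqeq: "error",
      "no-unused-vars": "error",
    },
  },
];
```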
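
For the TypeChat bullet, a minimal sketch following the early published examples of `createLanguageModel` and `createJsonTranslator`; exact signatures have changed across TypeChat releases, and the `LintRuleSuggestion` schema and file name here are invented for illustration.

```ts
import fs from "node:fs";
import path from "node:path";
import { createJsonTranslator, createLanguageModel } from "typechat";
// Hypothetical schema file exporting:
//   export interface LintRuleSuggestion {
//     ruleId: string;
//     severity: "warn" | "error";
//     reason: string;
//   }
import { LintRuleSuggestion } from "./lintRuleSuggestionSchema";

async function main() {
  // Reads model settings (e.g. OPENAI_API_KEY) from the environment.
  const model = createLanguageModel(process.env);
  // TypeChat validates model responses against the schema's TypeScript source.
  const schema = fs.readFileSync(
    path.join(__dirname, "lintRuleSuggestionSchema.ts"),
    "utf8",
  );
  const translator = createJsonTranslator<LintRuleSuggestion>(
    model,
    schema,
    "LintRuleSuggestion",
  );

  // translate() prompts the model, parses the JSON reply, type checks it,
  // and re-prompts with the type errors if the reply doesn't match.
  const response = await translator.translate(
    "Suggest a lint rule for catching unused variables.",
  );
  if (response.success) {
    console.log(`${response.data.ruleId}: ${response.data.reason}`);
  } else {
    console.error(response.message);
  }
}

main().catch(console.error);
```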

By linting both the inputs and the outputs of AI prompting, we can use this amazing new technology in the ways it’s made for - and avoid its all-too-common pitfalls.

Josh Goldberg

Open Source Developer

Philadelphia, New York, United States
