The Hidden Cost of Bad Prompts: A Practical Guide to Prompt Engineering for Data Professionals
Berry Wand has a problem. Six months into her role as Chief Data Officer for a leading police force, the data culture work is paying off: leadership is bought in, governance is tightening, and her team is finally being asked the right questions.
The new problem is that her team cannot answer them fast enough.
Microsoft Fabric is in. Copilot licences have been procured. Power BI is the standard. The tools are in place. And yet, when Berry watches her senior analysts work, she notices the same thing every time. They write a one-line prompt. They get back something that is almost right. They spend the next twenty minutes editing the output into something usable. By the time they are done, they could have written it from scratch faster.
Berry's team has not been trained in prompt engineering. They have been trained in DAX, in T-SQL, in Power BI semantic modelling, in Fabric architecture.
The prompting, they have been left to figure out for themselves. And the gap between what good prompting produces and what one-line prompting produces, multiplied by every analyst on her team, every working day, is the largest unrecognised productivity cost in her department.
Then she heard about a session at SQLDay focused on prompt engineering for data professionals. Not generic "10 ChatGPT hacks" content. Not a vendor demo.
A working set of frameworks, tested in real engagements, designed for the work data professionals actually do.
She arrived ready to take notes for her team.
In this session, Berry learned:
Why most "bad AI output" is actually a prompting problem, and how to spot it in five seconds
The five core prompt frameworks every data professional should know (RTF, BAB, CARE, CRIT, RISE) and when to reach for each
Worked examples for the work that fills the week: writing DAX, drafting technical documentation, reviewing PRs, building Power BI report descriptions, generating test data, summarising stakeholder meetings
Grounding: how to point Copilot at the right SharePoint folder, Fabric workspace, or document so you stop getting averaged, generic output
The prompt library pattern that turns one-person productivity gains into team-level capability
What to teach a data team first if you have one hour, one day, or one quarter to lift them out of ad-hoc prompting
Berry left with three things: a framework reference her analysts could open on a second monitor on Monday morning, the language to explain to her CIO why the Copilot licence cost is the smaller part of the AI investment, and a working prompt library template she could share with her team that afternoon.
This session is for SQL Server, Fabric, and Power BI professionals who have access to Copilot or another enterprise AI tool, are getting middling results from it, and suspect there is a better way. It is not for AI specialists or prompt engineering consultants. It is for the people doing the actual data work, who need AI to make that work measurably faster without the output needing to be rewritten before it can be used.
Gethyn Ellis
Where data strategy meets real-world execution.
Wolverhampton, United Kingdom