Working with JSON files in Synapse Analytics
More and more of the source systems we work with are becoming web services, and many of these return a JSON document when called. This brings both advantages and disadvantages. JSON is a fairly readable format, and many systems know how to interpret it, but it also brings challenges such as schema drift and deeply nested nodes.
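To make the nesting challenge concrete, here is a minimal sketch in plain Python. The document structure (`workspaces`, `reports`) is purely illustrative, not taken from any particular API; it just shows why nested arrays must be flattened before the data fits a tabular model.

```python
import json

# Hypothetical web-service response: nested arrays and optional
# fields like these are what make JSON hard to load into tables.
doc = json.loads("""
{
  "workspaces": [
    {"name": "Sales",   "reports": [{"id": 1}, {"id": 2}]},
    {"name": "Finance", "reports": []}
  ]
}
""")

# Flatten the nested "reports" arrays into one row per report,
# carrying the parent workspace name down onto each row.
rows = [
    {"workspace": ws["name"], "report_id": rpt["id"]}
    for ws in doc["workspaces"]
    for rpt in ws["reports"]
]
# → [{'workspace': 'Sales', 'report_id': 1},
#    {'workspace': 'Sales', 'report_id': 2}]
```

Each of the Synapse tools discussed in this session offers its own way of expressing exactly this kind of flattening.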
So how do you reliably ingest and transform JSON data without learning a lot of new skills or breaking the bank? Ingestion is usually the easy part, as Synapse Pipelines are good at receiving JSON documents from web services. The bigger challenge comes when you need to work with the files after ingestion.
When you work with Synapse Analytics you have, in principle, a lot of tools at your disposal: Synapse Pipelines, Mapping Data Flows, Spark and SQL. Each of these tools offers a way to work with JSON documents after ingestion, some simple, others more involved.
In this session we will ingest a JSON document from a web service using Synapse Pipelines. Then we will use each of the tools in the Synapse toolbox to see what they offer and how they cope with both simple and complex JSON structures. We will demo these capabilities by calling the Power BI scanner API and working with the resulting JSON.
At the end of the session the audience will have a good understanding of which tool in the Synapse Analytics toolbox to use, and when to use each.
Data Platform MVP
Hafnarfjörður, Iceland