
Speaker

Kim Berg Hansen

Lead Software Developer, Cegal Danmark A/S | #Oracle #SQL #PLSQL #SYM42

Middelfart, Denmark

Kim Berg Hansen is a database developer from Middelfart in Denmark. Originally wanting to work with electronics, he almost coincidentally tried computer programming and discovered where his talent lay, as the programs he wrote worked well – unlike the electronics projects he soldered, which often failed. After that experience he progressed from Commodore Basic on the VIC-20 via Modula-2 and C at Odense University to Oracle SQL and PL/SQL, the last two of which he has worked with extensively since the year 2000. His professional passion is to work with data inside the database, utilizing the SQL language to the fullest to achieve the best application experience for the users.

Kim shares his experience and knowledge by writing the Practical Oracle SQL book, blogging, presenting at various Oracle User Group conferences, and being the SQL quizmaster at the Oracle Dev Gym. His motivation is when peers go "now I understand" after his explanations, or when end users "can't live without" his application coding. He is an Oracle Certified Expert (OCE) in SQL and has been awarded Oracle ACE Director. Outside the coding world Kim is married, loves to cook, and is a card-carrying member of the Danish Beer Enthusiasts association.


Area of Expertise

  • Information & Communications Technology

Topics

  • SQL
  • Oracle
  • Oracle Database
  • PL/SQL
  • Oracle APEX

Invitation to the Wedding of JSON and Relational Data

- Elevator pitch -

“The happy Data couple cordially invites you to the Wedding of Mr. JSON and Ms. Relational.”

Mr. JSON began his courtship in Oracle version 12.1.0.2, to the delight of Ms. Relational. In every Oracle version since then, the two have grown closer, right up to the JSON Relational Duality marriage in 23c.

- Description -

Native support of JSON within Oracle Database became available in version 12.1.0.2 and has greatly expanded and improved in every version since. This session will give you an overview and examples of the various functionalities to marry JSON and relational data across the versions from 12.1.0.2 through 23c, including:

• Different datatypes for storing JSON
• Discovering the content of schemaless JSON with Dataguide
• Validating JSON with a JSON Schema
• Querying JSON data in different ways
• Creating JSON data in different ways
• Modifying JSON data
• Using a JSON Relational Duality view to work with the same data both as JSON and as relational
• JSON in PL/SQL
• GeoJSON and Spatial
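To give a flavor of the query side, here is a minimal sketch of JSON_VALUE, JSON_EXISTS and JSON_TABLE. The `orders` table and its JSON column `doc` are hypothetical stand-ins, not from the session:

```sql
-- scalar extraction and path-predicate filtering
select o.id,
       json_value(o.doc, '$.customer.name') as cust_name
from   orders o
where  json_exists(o.doc, '$.lines[*]?(@.qty > 10)');

-- projecting a JSON array into relational rows
select o.id, j.item, j.qty
from   orders o,
       json_table(o.doc, '$.lines[*]'
         columns (item varchar2(20) path '$.item',
                  qty  number       path '$.qty')
       ) j;
```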

- Notes -

JSON functionality in the database has grown a lot since the simple functions introduced in version 12.1.0.2. This session covers the JSON functionality from 12.1.0.2 to 23c, including:

• Query JSON: JSON_VALUE, JSON_QUERY, JSON_TABLE, JSON_EXISTS, JSON_EQUAL
• dot-notation and simplified JSON_TABLE
• JSON path expressions, including item methods
• Predicates in JSON path expressions
• JSON search index improving search performance using a variant of Oracle Text
• Dataguide detecting metadata of the JSON content
• Dataguide autocreation of JSON schema
• Dataguide autocreation of relational views on the JSON content
• Dataguide autocreation of virtual columns per key in key:value pairs
• JSON creation: JSON_ARRAY, JSON_OBJECT, JSON_ARRAYAGG, JSON_OBJECTAGG
• Updating JSON: JSON_MERGEPATCH and JSON_TRANSFORM
• PL/SQL API: JSON_ELEMENT_T, JSON_OBJECT_T, JSON_ARRAY_T
• GeoJSON to SDO_GEOMETRY and GeoJSON in Dataguide
• JSON_SERIALIZE
• JSON datatype and JSON Schema
• JSON Relational Duality View
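The duality view topic can be sketched roughly as below, using the 23c SQL-style syntax. The classic `dept`/`emp` tables are assumed here purely for illustration:

```sql
-- a sketch: same data accessible as JSON documents and as relational rows
create or replace json relational duality view dept_dv as
select json {
         '_id'       : d.deptno,
         'name'      : d.dname,
         'employees' :
           [ select json { 'empno' : e.empno,
                           'ename' : e.ename }
             from emp e with insert update
             where e.deptno = d.deptno ]
       }
from dept d with insert update delete;
```

Updates made through the view (as JSON) are written back to the underlying tables, and vice versa.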

Presented at:
- POUG 2023
- DOUG 2023

Analytic Functions 101

- Elevator Pitch -

Plain SQL gives you either detailed single rows or aggregated data. When you need both, or need one row to access data from other rows, you tend to resort to self-joins or slow procedural processing. Instead you should learn to use analytic functions. Here you learn the basics of this indispensable part of SQL.

- Description -

SQL gives you rows of detail; aggregate functions give you summarized data. To solve your requirements you often need both at the same time. Or you need, in one row, to access data from or compare with one or more other rows. That can be messy with plain SQL, often requiring you to access the same data multiple times.

Analytic functions can help your SQL do many such things efficiently that you otherwise might have been tempted to do in slow procedural loops. Properly applied, analytic functions can speed up the tasks for which they are appropriate manyfold.

In this session I’ll cover the syntax (including 21c enhancements) as well as little quirks to remember, all demonstrated with short and clear examples.

- Notes -

This presentation will focus on how analytic functions work. Analytics will be explained from the basics requiring no particular audience knowledge besides basic SQL.

• Explanation of syntax.
• When and how to use PARTITION BY.
• When and how to use ORDER BY.
• When and how to use ROWS BETWEEN.
• When and how to use RANGE BETWEEN.
• 21c enhancements - WINDOW, GROUPS, EXCLUDE
• How analytic functions differ from aggregate functions.
• When analytic functions are appropriate to use rather than aggregates.
• Using analytic functions on aggregate results.
• “Visualize” your data and requirements to help thinking analytically.
• When analytics can outperform aggregates.
• When analytic functions are not the best choice.
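The PARTITION BY / ORDER BY / ROWS BETWEEN elements above combine roughly like this minimal sketch (the `emp_salaries` table is a hypothetical stand-in):

```sql
-- detail row plus aggregates over the same data, in one pass
select dept, emp, salary,
       -- whole-partition total: PARTITION BY only
       sum(salary) over (partition by dept) as dept_total,
       -- running total: ORDER BY plus an explicit window
       sum(salary) over (
         partition by dept
         order by salary desc
         rows between unbounded preceding and current row
       ) as running_total
from emp_salaries;
```

Each row keeps its detail columns while the analytic functions add aggregated context, with no self-join needed.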

Note: This presentation and presentation “Analytic Functions in Real Life” are well suited for a 2-hour double session on analytic functions.

Presented at:
- DOAG 2015
- EMEA User Group online tour 2020
- OUGN 2023
- ODTUG Kscope 2023

Analytic Functions in Real Life

- Elevator Pitch -

When you learn the basics of analytic functions, most often you will see nice short examples. You may then have problems translating those basics into useful code for your real work. In this session you will see more complex examples of using analytic functions, all from applications in real life.

- Description -

Have you learned the syntax of analytic functions? Seen many presenters demonstrate with nice short examples how smart analytic functions are? And then went home and thought: “Well, they may be smart, but how can I use them in my real work?”

I’ve used analytic functions in thousands of SQL statements over more than 15 years developing our application. I can’t imagine coding SQL without analytic functions, as they allow me to put much functionality in SQL that otherwise might have needed procedural looping. The boss is happy as well, as the application performs much faster.

In this session I’ll showcase several of the real-life use cases of analytic functions I’ve built over the years, hopefully serving as inspiration for your own work.

- Notes -

This presentation presumes some audience knowledge about analytic functions. It will focus on showing real-life usage examples to demonstrate the power of analytics, giving attendees some idea of the breadth of problems that can be efficiently solved in single SQL statements.

• Top selling items: RANK, DENSE_RANK, ROW_NUMBER and RATIO_TO_REPORT.
• Picking by FIFO: Rolling sums using ROWS BETWEEN clause.
• Picking route: DENSE_RANK for numbering warehouse aisles.
• Picking efficiency: LAG on event log to determine time spent by employee waiting and working.
• Forecast sales: REGR_SLOPE to forecast next year's sales.
• Forecast zero stock: Rolling sums on hourly budget data for fireworks sale.
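The picking-efficiency case, for instance, follows the classic LAG pattern sketched below. The `pick_log` table and its columns are hypothetical stand-ins for the real application tables:

```sql
-- time between consecutive events per employee, via LAG
select emp_id,
       pick_time,
       pick_time - lag(pick_time) over (
         partition by emp_id
         order by pick_time
       ) as time_since_last_pick
from pick_log;
```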

Every one of these cases comes from the retail application developed over the 16 years I worked at my previous employer.

Note: This presentation and presentation “Analytic Functions 101” are well suited for a 2-hour double session on analytic functions.

Presented at:
- ODTUG Kscope 2014
- ilOUG 2019
- DOAG 2019
- OUGN 2022

Analytic Views in Oracle Database: The Virtual Cube

- Elevator Pitch -

Dimensional analysis of data traditionally means copying data to a cube, as the real-time alternative is lots of advanced SQL. In Oracle 12.2 that became a lot easier with analytic views, where the advanced code is written in small pieces declaratively – then the database writes the heavy SQL itself.

- Description -

Traditionally, to model and analyze your data with dimensions and hierarchies, you either copy your data to a data warehouse and from there to a dimensional cube, or you have to write quite a lot of rather advanced code and SQL to emulate the same directly on your relational data.

With analytic views (introduced in Oracle Database version 12.2), you declaratively define the dimensions and hierarchies similar to creating a cube, but it is only stored as a metadata layer on top of your tables; it is a multidimensional view of your relational data. The SQL language has been expanded to allow a simpler query syntax on the analytic views to utilize these metadata and automatically use advanced analytic functions to give you dimensional analysis on real-time data. As SQL is used below the covers, performance can be tuned using query rewrite with materialized views or In-Memory options or any other option you would normally use for SQL tuning.

This session will show code examples covering a lot of the syntax you have for creating and using analytic views.

- Notes -

Analytic views in version 12.2 provide a syntax for modelling data much like most dimensional cube databases, with dimensions, hierarchies, attributes and measures. It is just not stored as a cube, but is only a metadata layer on top of the relational layer. Topics included in this session:

• CREATE ATTRIBUTE DIMENSION
• CREATE HIERARCHY
• Ordering hierarchies
• NAME, CAPTION and DESCRIPTION
• Relations between levels of hierarchies
• CREATE ANALYTIC VIEW
• Measures from multiple fact tables
• Calculated measures
• Hierarchical syntax in analytic functions
• Special analytic functions like LAG_DIFF, LAG_DIFF_PERCENT, SHARE_OF
• Querying analytic views at various levels of the hierarchies - SQL enhancements
• Formatting metadata
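The query-side SQL enhancements look roughly like this sketch, assuming a hypothetical analytic view `sales_av` with a hierarchy `time_hier` has already been created:

```sql
-- 12.2 analytic view query syntax with the HIERARCHIES clause;
-- member_name, level_name and hier_order are standard hierarchy attributes
select time_hier.member_name as period,
       sales
from   sales_av hierarchies (time_hier)
where  time_hier.level_name in ('YEAR', 'QUARTER')
order  by time_hier.hier_order;
```

The database translates this simple query into the advanced analytic SQL needed to aggregate at the requested hierarchy levels.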

Presented at:
- OUGN 2017
- Trivadis TechEvent 2017
- APEX Connect 2017
- ODTUG Kscope 2017
- DOUG 2017
- COUG online 2020
- NoCOUG online 2020
- DOAG 2023

Data Twisting

- Elevator Pitch -

Quite often you have data in rows and need it in columns, or vice versa. You also often have delimited data in strings and need it separated out, or conversely need to aggregate rows of data into delimited strings. You can do the Data Twist in SQL: pivoting, unpivoting, splitting and aggregating.

- Description -

Humans often get a better overview of data when it is presented in columns, while working with data in sets in SQL is easier when the data is in rows. Being able to twist the data from columns to rows and vice versa is very useful. Learn different ways to do the Data Twist in this session:

The SELECT statement has built-in capabilities for doing this for a fixed number of columns – UNPIVOT to turn columns into rows, PIVOT to turn rows into columns. For older database versions these can be simulated using row generators for unpivoting and grouping for pivoting.

When the number of columns is not known, you’re typically dealing with data in a string separated by some delimiter like comma, semicolon or tab. Other techniques can be used for turning delimited data into rows and turning rows into delimited data.

- Notes -

Different techniques will be discussed and demonstrated:

• UNPIVOT with single or multi-column dimensions
• Unpivoting with row generators
• PIVOT with single or multi-column dimensions, with or without grouping
• Pivoting with GROUP BY and CASE
• Turning comma-separated data into rows
• ODCI dynamic table function parser
• LISTAGG to turn rows into comma-separated data
• Alternative methods for string aggregation
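Two of these twists can be sketched briefly. The `sales` and `emp` tables are hypothetical stand-ins:

```sql
-- PIVOT: turn quarter rows into columns
select *
from   (select product, quarter, amount from sales)
pivot  (sum(amount)
        for quarter in ('Q1' as q1, 'Q2' as q2, 'Q3' as q3, 'Q4' as q4));

-- LISTAGG: turn rows into a comma-separated string
select dept,
       listagg(ename, ',') within group (order by ename) as emps
from   emp
group  by dept;
```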

Presented at:
- OUGN 2016
- OGH 2016
- ilOUG 2019
- ODTUG Kscope 2019
- OGB Nordic Tour 2019 (4 countries)
- NoCOUG online 2020
- DOAG 2022

External Tables - Not *Just* Loading a CSV File

- Elevator Pitch -

With external tables you can SELECT from files – great for loading plain CSV files, but they can do so much more than that. Learn about loading from text files, DataPump dump files, HDFS and HIVE, as well as how to add constraints and partitioning so the optimizer can optimize queries on external tables.

- Description -

External tables are a way to map data from files in the OS to columns in a rowsource – in effect allowing you to SELECT directly from a file. This can be useful in many ways, allowing you, for example, to join the external data to lookup tables in the database, as well as use the complete range of SQL syntax and functions to transform the external data if needed.

From version to version, Oracle Database has added functionality to external tables, so they are now capable of so much more than simply loading a CSV file. You can load data from flat text files, from binary DataPump dump files, from HDFS (Hadoop Distributed File System) or from Apache HIVE. You can even do a limited unloading to a DataPump dump file. You can have the loading process perform an OS preprocessing call, for example unzip, to allow you to load from a compressed file.

Oracle 12.2 also adds the possibility of attaching certain metadata to the external table that is useful for the optimizer. You can define constraints (primary key, referential, unique, not null) for the optimizer to trust and rely on, and you can define partitions to allow partition pruning to read only the necessary files of a multi-file external table.

This presentation discusses the various possibilities and shows examples of syntax and use cases for the various access drivers - ORACLE_LOADER, ORACLE_DATAPUMP, ORACLE_HDFS and ORACLE_HIVE.

- Notes -

Topics covered include:

• Access drivers and parameters: ORACLE_LOADER, ORACLE_DATAPUMP, ORACLE_HDFS and ORACLE_HIVE.
• Location: directories and files, HDFS URIs.
• Datatype conversion, characterset handling, delimited and fixed-length data.
• Error handling, logging, bad file.
• SQL*Loader comparison and use as code generator.
• XMLTAG not parsing but text tag search.
• PREPROCESSOR to manipulate or generate datafile.
• Multiple files, parallelism and partition pruning.
• Trusted RELY constraints on external tables.
• CSV and ALL FIELDS OVERRIDE.
• ORACLE_DATAPUMP reading dump files.
• ORACLE_DATAPUMP writing dump files with CTAS.
• HDFS/HIVE parameters for column mappings, overflow handling and more.
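A minimal ORACLE_LOADER example looks like this sketch. The table name, directory object `data_dir` and file `sales.csv` are hypothetical:

```sql
-- SELECT directly from a comma-separated OS file
create table ext_sales (
  product varchar2(20),
  amount  number
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ',' optionally enclosed by '"'
  )
  location ('sales.csv')
)
reject limit unlimited;

select product, sum(amount) from ext_sales group by product;
```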

Presented at:
- ODTUG Kscope 2018
- Trivadis TechEvent 2018
- DOAG 2019
- NoCOUG online 2020
- RoOUG online 2021
- nlOUG 2022
- OUGN 2022

Forecasting in Oracle using the Power of SQL

- Elevator Pitch -

You have many options for analyzing your data, for example in R. But Oracle Database and SQL can do a lot of it directly. So are specialized tools like R preferable, or good old SQL? We use forecasting as an example of an analytic task, showing our experiences with R, ORE and SQL – guess which is faster!

- Description -

When it comes to performing analytics all we seem to hear about is that we need to use R. But what if we could do it in SQL? Can SQL perform many of the analytical tasks our organisation needs? To borrow a phrase from Barack Obama, “Yes we can”. Yes, we can use SQL to perform many of the analytical techniques our business users and analysts want to perform.

But the biggest challenge is how to do it. Oracle comes with more than 280 statistical and analytic functions, but many of us are only aware of a few of these.

In this presentation we will present a case study from a real-life scenario where R was used to perform forecasting of product sales. When using this approach we encountered many challenges! To overcome these we looked to using the power of SQL to perform the same task. We did so with some surprising results.

Our presentation will bring you on a journey of what we did, what we discovered and why we wanted to share our discoveries with you.

Additionally we will also discuss how you can expand the, already extensive, analytic capabilities of the Oracle Database by running R as part of the Oracle Database using Oracle R Enterprise.

- Notes -

The session will talk about and demo:
• Using R with the ROracle driver to extract data for time series analysis, doing the forecasting and visualization very simply for a single time series, returning results to the database, and the performance problems of doing this in loops for many time series.
• Doing time series analysis in SQL using analytic functions, scaling this method with a single SQL statement using analytic PARTITION BY to perform many time series forecasts in bulk (parallel if need be.)
• Visualizing the SQL forecast via various tools like APEX, Forms, OBIEE, etc.
• How to expand the analytic capabilities of the Oracle Database using Oracle R Enterprise.
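A linear-trend forecast in SQL can be sketched with the regression functions like this (the `monthly_sales` table and its columns are hypothetical stand-ins):

```sql
-- fit qty = intercept + slope * mth_no per product,
-- then project one month past the last observed month
select product_id,
       regr_slope(qty, mth_no)     as slope,
       regr_intercept(qty, mth_no) as intercept,
       regr_intercept(qty, mth_no)
         + regr_slope(qty, mth_no) * (max(mth_no) + 1) as forecast_next
from   monthly_sales
group  by product_id;
```

One GROUP BY handles all products in a single statement, which is the bulk-scaling point made above.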

Joint presentation by Kim Berg Hansen and Brendan Tierney.

Presented jointly with Brendan Tierney at:
- ODTUG Kscope 2016

Presented as sole speaker at:
- Trivadis TechEvent 2018

OK Oracle, Tell Me *Approximately* How Many Unique Visitors We Had Last Week

- Elevator Pitch -

Statistical overviews often use distinct counts, medians and percentiles. On larger datasets they might take more time than the users like. But often approximate answers are “good enough” if they come fast – and fast is what you now can do in SQL. Learn how to do FAST=TRUE with approximations.

- Description -

Imagine doing a SELECT and the answer coming back is “approximately 42, give or take a little.” Normally you wouldn’t want that to happen, you expect a SELECT to give you an accurate answer every time. But building a sales dashboard for your boss, for instance, could be a choice between showing him “we had approximately 4200 unique visitors from France last week” in a quarter of a second, versus showing him “we had exactly 4189 unique visitors from France last week” in 5 seconds. Most likely your boss would be happier with a fast but “good enough” answer.

Oracle supports several Approximate Query Processing Functions, starting with APPROX_COUNT_DISTINCT in version 12.1, adding APPROX_MEDIAN and APPROX_PERCENTILE as well as the capability of doing approximate aggregations on multiple levels in version 12.2, followed by APPROX_RANK, APPROX_COUNT and APPROX_SUM in version 18.1. These functions allow you to write SQL that gets approximate, but fast, answers.

But what if you already have a dashboard application built using good old trusted SQL giving accurate answers, but it is slightly unsatisfactory as each click takes several seconds to complete? Then you can do a FAST=TRUE – if you are prepared to accept approximate answers. You can do an ALTER SESSION that enables the optimizer to rewrite, for example, your COUNT(DISTINCT column) to an APPROX_COUNT_DISTINCT(column) automatically. Then your application will give approximate, but fast, answers without needing changes.

This presentation discusses the different approximate functions, the approximate aggregations and the ALTER SESSION method of speeding the queries - showing examples of syntax and use cases.

- Notes -

Topics covered include:

• Approximate query processing functions APPROX_*.
• Precision and performance trade-offs.
• Approximate multilevel aggregation.
• Materialized views with approximate functions.
• Initialization parameters for executing exact queries with approximations instead.
• Query rewrite to use approximate materialized views.
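Both the explicit function and the session-level rewrite can be sketched briefly. The `page_hits` table is a hypothetical stand-in:

```sql
-- explicit approximate distinct count
select country,
       approx_count_distinct(visitor_id) as approx_visitors
from   page_hits
group  by country;

-- let the optimizer rewrite exact distinct counts to approximations
alter session set approx_for_count_distinct = true;

-- unchanged application SQL, now computed approximately
select country,
       count(distinct visitor_id) as visitors
from   page_hits
group  by country;
```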

Presented at:
- OUGN 2018
- POUG 2020
- ODTUG Kscope 2022

PowerHauling Data with PowerShell

- Elevator Pitch -

There are many ways of migrating data between databases, even between different database vendors. PowerShell scripts can be a useful method, and if the target database supports bulk copy classes (for example Oracle and Microsoft SQL Server) it can be fast as well.

- Description -

There are countless ways and tools to migrate data – choosing often depends on the complexity of the situation, know-how, and how much code must be developed. For many less- to medium-complex cases, using PowerShell with .NET classes and libraries can be a good compromise: a relatively simple solution without too much code, with the added benefit of being quite fast for some target databases (like Oracle and Microsoft SQL Server.)

The .NET managed data access classes for Oracle and SQL Server support bulk copy inserts of data, improving migration speed a lot. The presentation shows how to script this in PowerShell, accomplishing a simpler solution than having to build for example a C# program. Differences between PowerShell Core (PowerShell 7) and the classic Windows PowerShell (PowerShell 5) will be covered.

What makes these .NET classes flexible is the capability of accepting many types of source data – as long as you can create a DataReader or DataTable object, you’re good to go. It can be almost any OLEDB or ODBC connection, or dedicated database connection classes, or even flat files if you have a .NET class to parse it into a DataTable.

You can script the details on your own or you can work with existing PowerShell libraries like SimplySql that provide ready-to-use functionality. SimplySql also has the advantage that you have a choice of target databases besides Oracle and SQL Server – it can migrate to PostgreSQL, MySQL and SQLite as well, just using conventional inserts instead of bulk copy.

I’ll present and demo both hand-built code and SimplySql, migrating data both to and from Oracle. I’ll highlight the small differences and caveats I’ve found in the various situations where I used this migration method, including differences depending on whether the target tables are pre-created or have to be created as part of the process.
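The bulk copy approach can be sketched roughly as below (this one targets SQL Server with the well-known SqlBulkCopy class; the DSN, connection strings and table names are hypothetical stand-ins):

```powershell
# read from an ODBC source via a DataReader
$src = New-Object System.Data.Odbc.OdbcConnection 'DSN=LegacySystem'
$src.Open()
$cmd = $src.CreateCommand()
$cmd.CommandText = 'SELECT id, name, amount FROM legacy_table'
$reader = $cmd.ExecuteReader()

# bulk copy the reader's rows into the target table
$bulk = New-Object System.Data.SqlClient.SqlBulkCopy `
          'Server=tgt;Database=DW;Integrated Security=True'
$bulk.DestinationTableName = 'dbo.target_table'
$bulk.BatchSize = 10000
$bulk.WriteToServer($reader)

$bulk.Close(); $reader.Close(); $src.Close()
```

The Oracle managed driver offers an analogous OracleBulkCopy class, so the same shape of script works with Oracle as the target.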

- Notes -

These .NET classes and PowerShell libraries are fairly powerful and relatively easy to use. They are not for every case of data migration, but for many of the not overly complex use cases, they can be a good choice.

I have used these successfully in several cases, migrating from legacy systems to Oracle with an ODBC source, migrating from Oracle to MS-SQL with an OLEDB source, and migrating from Oracle to PostgreSQL with the Oracle ManagedDataAccess driver as source. I’ve found it reasonably easy to script, but I’ll share the small tips & tricks I discovered along the way while developing these scripts, such as dealing with datatype differences.

Presented at:
- DOUG 2023
- ODTUG Kscope 2023

Shape-Changing SQL

- Elevator Pitch -

There’s static SQL and dynamic SQL, but in between there’s also SQL that’s “dynamic at parse-time” (but not at runtime) – also known as polymorphic (meaning it can change “shape”.) This talk looks at different ways to achieve this, from ODCI through polymorphic table functions to SQL macros.

- Description -

In PL/SQL, on the one side we have static SQL that is plain fixed text in the source code, compiled together with the PL/SQL. On the other end of the spectrum we have dynamic SQL, where a string containing SQL is built at runtime and parsed and executed with EXECUTE IMMEDIATE or DBMS_SQL.

In between those two ends, successive versions have added more and more ways of doing SQL that is somewhere in between: neither completely static in the source code, nor completely runtime-dynamic. We can make SQL that is more or less dynamically built, but built during the hard parsing phase, giving us many benefits of static SQL along with some benefits of dynamic SQL. For various use cases this can lead to better reusability of code in static SQL without necessitating dynamic SQL with its possible drawbacks. It can also sometimes be a way to expand the capabilities of SQL more performantly than PL/SQL functions.

Different methods exist in different versions for achieving such compile-time-dynamic SQL - ranging from Oracle Data Cartridge Interface (ODCI) over polymorphic table functions (PTFs) to SQL macros. This talk shows an overview of these methods, pros and cons, and when some methods are more applicable than others (and vice versa.)

- Notes -

This talk will take a look at:

• ODCI aggregate functions
• ODCI pipelined table functions
• Polymorphic table functions
• SQL table macros

It will discuss use cases of each, potential benefits, drawbacks and pitfalls, and how to examine the actual SQL produced.
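A SQL table macro can be sketched like this minimal example (the function name is hypothetical; the pattern follows the documented SQL_MACRO syntax):

```sql
-- the macro returns SQL text that is expanded at hard-parse time
create or replace function first_n_rows (
  n   number,
  tab dbms_tf.table_t
) return varchar2 sql_macro
is
begin
  return 'select * from tab fetch first first_n_rows.n rows only';
end;
/

-- used like a table, but the SQL is inlined into the calling query
select * from first_n_rows(5, emp);
```

Because the expansion happens during parsing, the resulting statement is optimized as ordinary static SQL.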

Presented at:
- OUGN 2022
- ODTUG Kscope 2022
- DOAG 2022
- UKOUG 2022
- hrOUG 2023

Uses of Row Pattern Matching

- Elevator Pitch -

Row pattern matching is useful for recognizing patterns in rows of data, as the name implies. But the uses of MATCH_RECOGNIZE extend beyond the obvious – it can be applied to grouping consecutive data, merging overlapping periods, bin fitting, child number calculations in hierarchies, and more.

- Description -

With 12c came the ability to search efficiently for patterns within your rows of data, using a syntax similar to searching for character patterns in strings with regular expressions – but instead of searching text, it searches for data patterns.

This is useful for pattern recognition like finding ups and downs in values over time (i.e. stock tickers) or transaction patterns in accounts that may look like fraudulent behaviour. But it can also be used for grouping consecutive data, finding gaps, finding or merging overlapping periods, calculating number of children in hierarchical data, and more.

Join me for a ramble through different use cases of row pattern matching.

- Notes -

The presentation will briefly cover the syntax of row pattern matching in general, and then go on to show more details by demonstrating various use cases, where each use case highlights different ways of using different parts of the MATCH_RECOGNIZE clause.

Topics to be covered:

• DEFINE - what to define, PREV, NEXT, FIRST, LAST, referring other variables, order of evaluation
• PATTERN - regular expression style syntax
• MEASURES - RUNNING, FINAL, CLASSIFIER, MATCH_NUMBER
• ONE / ALL ROWS PER MATCH
• AFTER MATCH - where to go to next?
• ORDER BY
• PARTITION BY

Examples given include ticker search, tabibitosan-style grouping of consecutive data, gap and overlap finding, and hierarchical child counts.
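The ticker case ties the clauses above together; a sketch, assuming a hypothetical `ticker` table with columns (symbol, tstamp, price):

```sql
-- find V-shapes: a strict fall followed by a strict rise
select *
from ticker
match_recognize (
  partition by symbol
  order by tstamp
  measures strt.tstamp       as start_tstamp,
           last(down.tstamp) as bottom_tstamp,
           last(up.tstamp)   as end_tstamp
  one row per match
  after match skip to last up
  pattern (strt down+ up+)
  define
    down as down.price < prev(down.price),
    up   as up.price   > prev(up.price)
);
```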

NOTE: This talk won Best Speaker Award for Database track of ODTUG Kscope19.

Presented at:
- OUGN 2016
- ProHuddle online 2016
- ODTUG Kscope 2019
- Trivadis TechEvent 2019
- OGB Nordic Tour 2019 (4 countries)
- DOAG 2019
- NoCOUG online 2020
- ODTUG Learn From Home online 2020
- nlOUG 2022
- UKOUG 2022
- hrOUG 2023

JSON in Oracle Database version 12.2

- Elevator Pitch -

Oracle Database 12.2 works well with JSON, both querying and creating it. You can search the JSON and index it, integrate GeoJSON with Spatial, and generate JSON from your relational data. The database can even autogenerate views and virtual columns from the content of the JSON data if you wish.

- Description -

Native support of JSON within Oracle Database became available in version 12.1.0.2 and was greatly expanded and improved in version 12.2. This session will give you an overview and examples of the various JSON functionalities, such as:

• Functions for querying and extracting JSON data
• JSON path expressions including predicates
• JSON search indexes, specialized Text indexes
• JSON Dataguide giving you JSON Schema and automatic views and virtual columns
• Generate JSON with JSON SQL functions
• PL/SQL object API for manipulating JSON objects
• Native spatial integration of GeoJSON objects

- Notes -

JSON functionality in version 12.2 is much more than the simple functions introduced in version 12.1.0.2. This session will cover the complete JSON functionality of 12.2 (including what was already introduced in 12.1.0.2.) Topics covered will include:

• Query JSON: JSON_VALUE, JSON_QUERY, JSON_TABLE, JSON_EXISTS
• dot-notation
• JSON path expressions
• Predicates in JSON path expressions
• JSON search index improving search performance using a variant of Oracle Text
• Dataguide detecting metadata of the JSON content
• Dataguide autocreation of JSON schema
• Dataguide autocreation of relational views on the JSON content
• Dataguide autocreation of virtual columns per key in key:value pairs
• JSON creation: JSON_ARRAY, JSON_OBJECT, JSON_ARRAYAGG, JSON_OBJECTAGG
• PL/SQL API: JSON_ELEMENT_T, JSON_OBJECT_T, JSON_ARRAY_T
• GeoJSON to SDO_GEOMETRY

Presented at:
- OUGN 2017

Ways to the Bacon

- Elevator Pitch -

Is Bacon something you eat or calculate? Both. The Bacon Number of an actor is the lowest degree of separation from Kevin Bacon, defined by linking actors appearing in the same movie cast. How to calculate Bacon in Oracle? We’ll look at different ways to do this with SQL, PL/SQL and/or graphs.

- Description -

Six Degrees of Kevin Bacon is about linking actors by movies in which they have worked together. The Bacon Number of an actor is the number of links in the shortest path from the actor to Kevin Bacon.

Calculating the Bacon Number is an interesting exercise (it is given as a lab at Oberlin College, for instance) and of course we can do it in Oracle. But simple implementations might easily become excessively resource-consuming for the database – we might need to do things like early pruning to get acceptable performance on larger datasets.

This talk investigates different Bacon calculation methods with SQL and PL/SQL as well as graph methods (remember that the Graph option is now no-cost in supported database versions). We will look at how to map tabular data to a graph in a way that the two models can coexist without duplicating the data, and then perform path queries on the graph. These methods can be useful for other real sets of data besides movie data, such as bill-of-material data in the manufacturing industry.

For the live demo, we will use Graph Studio in Autonomous Database, which supports both SQL and PGQL (graph) queries, and includes visualization of the results as a graph. To wrap up, we will look at how the new GRAPH_TABLE() function in Oracle 23c will make these kinds of queries even more widely applicable.
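The SQL-side approach can be sketched with recursive subquery factoring. The `actors` and `castings` tables are hypothetical stand-ins, and the depth limit is the kind of early pruning discussed above:

```sql
with costar (actor_id, bacon_number) as (
  -- anchor: Kevin Bacon himself at distance 0
  select a.actor_id, 0
  from   actors a
  where  a.name = 'Kevin Bacon'
  union all
  -- recurse: everyone in a movie cast with an already-reached actor
  select c2.actor_id, costar.bacon_number + 1
  from   costar
  join   castings c1 on c1.actor_id = costar.actor_id
  join   castings c2 on c2.movie_id = c1.movie_id
  where  costar.bacon_number < 6   -- prune: stop at six degrees
)
cycle actor_id set is_cycle to '1' default '0'
select actor_id, min(bacon_number) as bacon_number
from   costar
where  is_cycle = '0'
group  by actor_id;
```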

- Notes -

The inspiration for this talk came from Bryn Llewellyn, who wrote about calculating Bacon Number using recursive subquery factoring.

In his blog post he uses recursion in YugabyteDB/PostgreSQL with some early pruning. This talk investigates similar approaches in Oracle and compares SQL and PL/SQL methods to using Graph, which is a no-cost option that many developers aren’t yet thinking of when deciding how to solve tasks similar to Bacon Number calculation.

Joint presentation by Kim Berg Hansen and Hans Viehmann.

Presented jointly with Hans Viehmann at:
- OUGN 2023
- DOAG 2023

Presented as sole speaker at:
- POUG 2021

When 7-bit ASCII ain't enough - about NLS, Collation, Charsets, Unicode and such

- Elevator Pitch -

A-Z in upper and lower case isn’t enough in a global world; you need to support all sorts of textual data. Oracle Database can handle most of it, but there are pitfalls to be aware of – even using Unicode, which should handle everything. Learn about NLS, collation and converting charactersets.

- Description -

Life as a DB developer would be much easier if the entire world were happy with just A-Z in upper and lower case. But the globalized world is a very diverse place with myriads of special characters and modifiers, different alphabets, writing directions, sorting and comparison rules, and many ways to encode these characters. So your database and application absolutely need to know about these things and handle them correctly, or you’ll find your app displaying garbage – or worse, storing garbage.

Oracle Database offers many NLS (National Language Support) settings to handle such globalization at database, instance and session/client level. NLS functions are available in the SQL language to handle specific cases in code when the session settings aren’t granular enough. Oracle 12.2 added collation support, where you specify right down at column level how that column should sort and compare values. You have a choice whether to store everything with database characterset UTF8 or stick to a single-byte characterset – in both cases be aware of how conversion takes place if the client is not using the database characterset.

This presentation discusses many of the issues involved and ways to use NLS and Collation to avoid them. Pitfalls will be shown, where you can get mangled (or even invalid) data stored if you are not careful.
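As a small illustration of data-bound collation, here is a minimal sketch (Oracle 12.2 or later; hypothetical CUSTOMERS table, and note that data-bound collation may require MAX_STRING_SIZE = EXTENDED):

```sql
-- Column-level collation: the column itself decides how values
-- sort and compare, independent of session NLS settings.
create table customers (
   name varchar2(100 char)   -- CHAR length semantics, not BYTE
        collate binary_ci    -- case-insensitive, accent-sensitive
);

insert into customers (name) values ('Müller');
insert into customers (name) values ('MÜLLER');

-- With BINARY_CI on the column, both rows should match,
-- since the two names differ only in case:
select count(*) from customers where name = 'müller';

-- Session-level alternative when column collation isn't an option:
-- alter session set nls_comp = linguistic;
-- alter session set nls_sort = binary_ci;
```

The session-level alternative shown in the trailing comments affects every comparison in the session, which is exactly the granularity problem that column-level collation solves.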

- Notes -

Topics covered include:

• Character sets and encodings. ASCII, ISO, UTF and more.
• Database character set - what can be used for identifiers.
• National character set - alternative for using Unicode in DB with single-byte DB charset.
• NLS settings of the database versus instance versus session (and client.)
• Plug-compatible charsets in multi-tenant environment.
• Circumstances leading to invalid characters in the database.
• BYTE versus CHAR length semantics.
• Linguistic sorting and matching - NLS_SORT, NLS_COMP.
• Data-bound collation - defaults, column level and PL/SQL units.
• Database Migration Assistant for Unicode (DMU) for changing DB charset.

Presented at:
- OUGN 2018
- ODTUG Kscope 2018
- POUG 2018
- Trivadis TechEvent 2018
- AIOUG online 2020
- DOAG online 2020
- hrOUG 2021

Who's Afraid of the Big Bad SQL?

- Elevator Pitch -

Developers often treat a database as a bit bucket, processing data procedurally or using an ORM. Doing the same processing in SQL can feel unfamiliar. I'll compare the two and show that SQL is not scary. And in Oracle 23c you can hide the SQL within JSON views without the need of an ORM.

- Description -

I’ll hazard the guess that you (like me) started your developer career by learning a procedural language like Basic, Pascal, C, C++, C#, Java or something else depending on your generation. There’s something about a procedural language that makes it relatively easy to understand what is happening - you can “single step debug” in your head and walk through your code piece by piece.

In SQL that is not quite as easy. You need to think about dealing with sets of data instead of a single row at a time, and the language has a declarative nature where you specify what you want done and let the database figure out the details of how exactly to get the individual rows pieced together to form the final result. You can rarely follow along with what is happening; you execute the statement, witness the result, and then have to figure out from that result what actually happened, so you can make changes and try again. You can't "single step" your SQL code - it is often more of an iterative trial-and-error development process.

So SQL has acquired a reputation - in my opinion unjustified - of being a hard programming language to learn. There are a gazillion frameworks and ORMs out there that promise you can write your applications without SQL. The problem is that you cannot really work with your data without SQL - the frameworks simply generate SQL for you. This can be OK in maybe 80% of the cases, but for the last 20% a human is very likely able to write SQL that by far outperforms what the framework generates. And for a human who knows SQL, I believe it is even easier and faster to write the SQL than to use the framework.

With Oracle 23c an alternative to the ORMs can be using JSON Relational Duality views, hiding the SQL inside views that can be accessed easily as JSON, giving the best of both worlds.

I will take a single use case - picking items by FIFO (first-in-first-out) in a warehouse - and demonstrate coding this procedurally with an ORM versus doing it in SQL, along the way showing differences and similarities in the thought processes of the developer of each method. Ultimately I will show that it is not really harder to write a single 100-line SELECT statement than a procedural application doing the same thing.

At the end, I will wrap the SQL in a JSON Relational Duality view to demonstrate how this can be an alternative or a supplement to the ORM.
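To hint at the set-based side of the comparison, here is a minimal sketch (hypothetical INVENTORY and ORDERLINES tables - not the full 100-line statement from the session) of FIFO picking in a single SELECT, where an analytic running total decides how much to take from each batch, oldest first:

```sql
-- inventory (item_id, batch_id, purchase_date, qty)
-- orderlines (item_id, qty_ordered)
select i.item_id
     , i.batch_id
     , i.purchase_date
     , least(
          i.qty
        , o.qty_ordered - (i.qty_sum - i.qty)  -- quantity still needed
       ) as qty_to_pick
  from (
         select item_id, batch_id, purchase_date, qty
              , sum(qty) over (                -- running total per item,
                   partition by item_id        -- accumulated oldest first
                   order by purchase_date, batch_id
                ) as qty_sum
           from inventory
       ) i
  join orderlines o
    on o.item_id = i.item_id
 where i.qty_sum - i.qty < o.qty_ordered       -- skip batches past the need
 order by i.item_id, i.purchase_date, i.batch_id;
```

The procedural version loops over batches keeping a running counter in a variable; here the analytic SUM ... OVER plays that role, so the whole pick list falls out of one statement.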

- Notes -

The goal of this presentation is to line up a practical use case and walk through in detail developing a solution in a procedural language and compare it to developing the same in SQL.

The idea is to show basic and intermediate code and show that SQL is not a language to be afraid of using properly. This will not be a deep-dive performance comparison of Smart-DB versus Java like Toon Koppelaars does - the intended audience of this presentation is beginner and intermediate, trying to convey a feeling of “oh, this SQL isn’t as hard as I thought - it can actually make my coding simpler and easier.”

Presented at:
- DOUG online 2021
- ODTUG Kscope online 2021
- hrOUG 2021
- DOAG online 2021
- UKOUG 2021

DOAG 2023

My presentations:
- Ways to the Bacon
- Analytic Views - The Virtual Cube

November 2023 Nürnberg, Germany

POUG 2023

My presentations:
- Invitation to the Wedding of JSON and Relational Data

November 2023 Warsaw, Poland

hrOUG 2023

My presentations:
- Shape-Changing SQL
- Find Your Match - MATCH_RECOGNIZE to the Rescue

October 2023 Rovinj, Croatia

DOUG Day 2023

My presentations:
- Invitation to the Wedding of JSON and Relational Data

October 2023 Copenhagen, Denmark

ODTUG Kscope23

My presentations:
- Analytic Functions 101
- PowerHauling Data with PowerShell

June 2023 Aurora, Colorado, United States

DOUG Coffee Meeting

My presentations:
- PowerHauling Data with PowerShell

June 2023 Copenhagen, Denmark

OUGN 2023

My presentations:
- Analytic Functions 101
- Ways to the Bacon

April 2023 Oslo, Norway

UKOUG 2022

My presentations:
- Shape-Changing SQL
- Uses of Row Pattern Matching

December 2022 Birmingham, United Kingdom

OUGN EMEA Tour 2022

My presentations:
- Analytic Functions in Real Life
- External Tables - Not *Just* Loading a CSV File
- Shape-Changing SQL

October 2022 Oslo, Norway

DOAG 2022

My presentations:
- Data Twisting
- Shape-Changing SQL

September 2022 Nürnberg, Germany

ODTUG Kscope22

My presentations:
- OK Oracle - Tell Me Approximately How Many Unique Visitors We Had Last Week
- The Things You Can Do Between Static And Dynamic SQL

June 2022 Grapevine, Texas, United States

nlOUG DBA Dag 2022

My presentations:
- External Tables - Not *Just* Loading a CSV File
- Uses of Row Pattern Matching

April 2022 Rotterdam, The Netherlands

UKOUG 2021

My presentations:
- Who's Afraid of the Big Bad SQL

November 2021 London, United Kingdom

hrOUG 2021

My presentations:
- When 7-bit ASCII ain't enough - about NLS, Collation, Charsets, Unicode and such
- Who's Afraid of the Big Bad SQL

October 2021 Rovinj, Croatia

POUG 2021

My presentations:
- Ways to the Bacon

September 2021 Warsaw, Poland

POUG 2020

My presentations:
- OK Oracle - Tell Me Approximately How Many Unique Visitors We Had Last Week

September 2020 Warsaw, Poland

DOAG 2019

My presentations:
- Uses of Row Pattern Matching
- Analytic Functions in Real Life
- External Tables - Not *Just* Loading a CSV File

November 2019 Nürnberg, Germany

Nordic Groundbreakers Tour 2019 SWEOUG

My presentations:
- Uses of Row Pattern Matching
- Data Twisting

October 2019 Stockholm, Sweden

Nordic Groundbreakers Tour 2019 OUGF

My presentations:
- Uses of Row Pattern Matching
- Data Twisting

October 2019 Helsinki, Finland

Nordic Groundbreakers Tour 2019 OUGN

My presentations:
- Uses of Row Pattern Matching
- Data Twisting

October 2019 Oslo, Norway

Nordic Groundbreaker Tour 2019 DOUG

My presentations:
- Uses of Row Pattern Matching
- Data Twisting

October 2019 Copenhagen, Denmark

ODTUG Kscope19

My presentations:
- Data Twisting
- Uses of Row Pattern Matching

June 2019 Seattle, Washington, United States

ilOUG Tech Days 2019

My presentations:
- Data Twisting
- Analytic Functions in Real Life

January 2019 Petaẖ Tiqva, Israel

POUG 2018

My presentations:
- When 7-bit ASCII ain't enough - about NLS, Collation, Charsets, Unicode and such

September 2018 Sopot, Poland

ODTUG Kscope18

My presentations:
- When 7-bit ASCII ain't enough - about NLS, Collation, Charsets, Unicode and such
- External Tables - Not *Just* Loading a CSV File

June 2018 Orlando, Florida, United States

OUGN 2018

My presentations:
- Approximate but Fast Answers versus Accurate but Slow
- When 7-bit ASCII ain't enough - about NLS, Collation, Charsets, Unicode and such

March 2018 Oslo, Norway

DOUG 2017

My presentations:
- Analytic Views In Oracle 12.2 - The Virtual Cube

October 2017 Copenhagen, Denmark

ODTUG Kscope17

My presentations:
- Analytic Views In Oracle 12.2 - The Virtual Cube
- Analytic Functions Hands-on Lab
- Row Pattern Matching Hands-on Lab

June 2017 San Antonio, Texas, United States

APEX Connect 2017

My presentations:
- Analytic Views In Oracle 12.2 - The Virtual Cube

May 2017 Berlin, Germany

OUGN 2017

My presentations:
- Analytic Views In Oracle 12.2 - The Virtual Cube
- JSON in Oracle 12.2

March 2017 Oslo, Norway

Oracle OpenWorld 2016

My presentations:
- Read, Store and Create XML and JSON

September 2016 San Francisco, California, United States

ODTUG Kscope16

My presentations:
- Read, Store and Create XML and JSON
- Forecasting in Oracle using the Power of SQL

June 2016 Chicago, Illinois, United States

OGh DBA / SQL Celebration Day 2016

My presentations:
- Data Twisting

June 2016 Zeist, The Netherlands

OUGN 2016

My presentations:
- Data Twisting
- Read, Store and Create XML and JSON
- Uses of Row Pattern Matching

March 2016 Oslo, Norway

DOAG 2015

My presentations:
- Analytic Functions 101

November 2015 Nürnberg, Germany

DOUG Dag 2014

My presentations:
- External Data via HTTP, FTP and Web Services

October 2014 Copenhagen, Denmark

ODTUG Kscope14

My presentations:
- Analytic Functions - Advanced Cases
- External Data via HTTP, FTP and Web Services

June 2014 Seattle, Washington, United States

UKOUG 2012

My presentations:
- Really Using Analytic Functions

December 2012 Birmingham, United Kingdom

ODTUG Kscope12

My presentations:
- Real Cases of Indispensability of Analytic Functions

June 2012 San Antonio, Texas, United States
