Andreas Erben

CTO for Applied AI and Metaverse at daenet

Ponte Vedra Beach, Florida, United States

Andreas has spent most of his professional life, a career of over 25 years, integrating "backend" applications: first directly within and between enterprises and startups, then "in the cloud".
Before that, Andreas dabbled with Virtual Reality in the 90s. In 2013, he started to work with Kinect for Windows V2 and something wonderful happened: Andreas had new fun with new toys, and he was all set to work on new paradigms for interacting with computers.
Then Microsoft HoloLens was announced, and he expanded his focus to Mixed Reality applications and concepts.
He cannot stop talking about technology, which apparently some people like, so Microsoft gave him the MVP award. Because he has shown leadership and can advise C-levels and business stakeholders, he was also accepted into the ranks of the Microsoft Regional Directors.
Andreas helps customers as a trusted advisor, external CTO, and innovation consultant; he also produces exciting applications and solutions with a great team of skilled individuals.
He acts as CTO for Applied AI at daenet.

Awards

  • Most Active Speaker 2023
  • Most Active Speaker 2022

Area of Expertise

  • Health & Medical
  • Information & Communications Technology

Topics

  • Mixed Reality
  • Windows Mixed Reality
  • Azure Cognitive Services
  • Microsoft Azure Cognitive Services
  • Azure Custom Vision
  • Democratized Artificial Intelligence
  • Healthcare Technology
  • Virtual Reality
  • Augmented Reality
  • Extended Reality
  • Azure
  • Metaverse
  • Artificial Intelligence
  • AI
  • Technology Leadership

Fantastic Machine Learning models and how to find and bind them

Machine Learning can be leveraged effectively by anyone. "Just use it!"
Runtimes like Windows Machine Learning (WinML) enable easy evaluation of ONNX-based models for use in applications. Designing and training a machine learning model is often a long task. What if many pre-trained models were already available, ready to be easily integrated and used in your own applications?
In this session attendees will see examples of where to find and retrieve available models and how to use them in their own applications. Steps to integrate the models and bind them to your own code will be demonstrated.
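For a flavor of what that looks like, here is a minimal, hedged C# sketch of loading, binding, and evaluating an ONNX model with WinML; the model path is a placeholder, and the tensor names and shapes are assumptions that depend on the specific model (the names below follow the classic SqueezeNet sample).

```csharp
using System;
using System.Threading.Tasks;
using Windows.AI.MachineLearning;   // WinML runtime
using Windows.Storage;

class ModelRunner
{
    public static async Task RunAsync()
    {
        // Load a pre-trained ONNX model from disk (path is a placeholder).
        StorageFile file = await StorageFile.GetFileFromPathAsync(@"C:\models\squeezenet.onnx");
        LearningModel model = await LearningModel.LoadFromStorageFileAsync(file);

        // Create a session (selects an evaluation device) and a binding.
        var session = new LearningModelSession(model);
        var binding = new LearningModelBinding(session);

        // Bind the input by the name declared inside the ONNX model.
        // "data_0"/"softmaxout_1" are SqueezeNet's names; adjust per model.
        var input = TensorFloat.CreateFromArray(
            new long[] { 1, 3, 224, 224 },
            new float[1 * 3 * 224 * 224]);   // stand-in for real image data
        binding.Bind("data_0", input);

        // Evaluate and read the output tensor.
        LearningModelEvaluationResult result = session.Evaluate(binding, "run1");
        var output = result.Outputs["softmaxout_1"] as TensorFloat;
        Console.WriteLine($"First score: {output.GetAsVectorView()[0]}");
    }
}
```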

Getting started with the MetaVerse - AR,VR, MR, and XR

The "MetaVerse" is the hyped up term that has been increasingly adopted by technology companies around the world since about late March of 2021.
This session looks at the relevant concepts of the MetaVerse and how they relate to Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and Extended Reality (XR).
From there on, attendees, specifically software developers, will get an overview of ways to get started building applications for the MetaVerse with a focus on the AR, VR, and XR aspects. The software development environment Unity3D will be introduced as a specific example.
Decision makers will also learn about the value the MetaVerse can provide.

ChatGPT, GPT-4 and friends, large AI models that will change our life

ChatGPT, GPT-4, and similar large models are receiving a high amount of attention across industries. Almost weekly, new ways to use those models are unveiled and integrations into different products and services are announced.

This ranges from generating creative text, doing translations, correcting grammar, answering questions, and classifying content, to tools that transform natural language into code in a variety of development languages. It is not a question of if those models will be used, but how much they will change the way we work. They are already integrated into many Microsoft offerings, including M365 and Power Platform.

We will start by describing, for non-scientist audiences, how those models were built.

We then move on to discuss a few of the most relevant models, with a focus on showcasing what you can use them for.

Since this field innovates frequently, the precise models being featured are subject to change, but they will at least have the capabilities of OpenAI's ChatGPT and GPT-4, to the degree they are available.

OpenAI Codex and GitHub Copilot will also be demonstrated, with their ability to assist in writing code.

This session will feature a high number of demos.

Finally, the session will cover some of the risks, such as biases, that are present in those models.

After this session you will have a better understanding of Large Language Models and their hands-on application, beyond what major vendors will tell you.
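As a small, hedged taste of the kind of integration demonstrated, calling a chat model through OpenAI's REST endpoint from C# could look roughly like this; the model name and the OPENAI_API_KEY environment variable are assumptions, and error handling is omitted.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Authenticate with an API key read from the environment.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

// Request body for the chat completions endpoint; "gpt-4" is an assumption.
var body = """
{
  "model": "gpt-4",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in one sentence what a large language model is."}
  ]
}
""";

var response = await http.PostAsync(
    "https://api.openai.com/v1/chat/completions",
    new StringContent(body, Encoding.UTF8, "application/json"));

// The generated text is in choices[0].message.content of the JSON response.
Console.WriteLine(await response.Content.ReadAsStringAsync());
```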

Beyond ChatGPT - Imagine your mind blown

Almost everybody has heard about ChatGPT, but the story does not stop there.
Imagine asking your computer what it would look like to have your mind blown, rendered as abstract art, and it generates a selection of images, each of which blows your human mind.

This is a very hands-on session demonstrating advanced concepts of today's AI including ChatGPT/GPT-4, DALL-E, Midjourney, and similar models.

In this session you will also receive actionable information about how to run some of those models and understand the business value of using those models.

Applied Artificial Intelligence, combined with vast computing resources and enormous trained models, can accomplish amazing feats when it comes to working with data, including semantics, context, subtext, and all sorts of relationships you could imagine.

We will look at some of those services, solutions, and products, many of them in the cloud, that enable scenarios that many of us would have thought impossible only a few years ago.

Expect to start working on textual data with simple scenarios like classifying information such as company types; from there we will journey on to summarizing or rewriting text, or coming up with a representative title. We then explore generating creative text or completing a story.
This includes the capability to work with source code, such as code written in .NET favorite C# or in Python, through GitHub Copilot or OpenAI's Codex.

This is actionable information for anybody who has to deal with text today.

Then we go towards incorporating other modalities, specifically generating images and visual art with services such as DALL-E 2.

We discuss the upcoming world of "multi-modal" models and how they are already appearing in products and services.

We will finish by asking: are computers dreaming yet? Am I imagining your mind blown?

Intelligent IoT on Microsoft Azure - Architecture, Devices, and Data

Azure IoT Hub together with Azure IoT Edge are Microsoft's "Dream Team" to enable intelligent IoT applications combining the power of the cloud platform and the versatility of edge devices.

In this session attendees will see the basics of setting up an IoT Hub, incorporating IoT Edge devices, making changes on those devices, and what to do with data coming from the edge.

We will look at analyzing streaming data in the cloud, but also at how decisions can be made right on the edge, starting from simple decisions based on sensor readings and moving to advanced scenarios such as AI inferencing on the edge, or even orchestrating AI training.

One of the devices used will be Azure Percept, but most of this session is applicable to all types of IoT devices on Azure.
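As a hedged taste of the device side, sending a single telemetry message to an IoT Hub with the .NET device SDK might look like this; the connection-string environment variable and the payload are assumptions.

```csharp
using System;
using System.Text;
using Microsoft.Azure.Devices.Client;   // NuGet: Microsoft.Azure.Devices.Client

// Minimal sketch of device-to-cloud telemetry. The device connection string
// comes from the IoT Hub portal; reading it from an env var is an assumption.
var deviceClient = DeviceClient.CreateFromConnectionString(
    Environment.GetEnvironmentVariable("IOTHUB_DEVICE_CONNECTION_STRING"),
    TransportType.Mqtt);

// Send a simple sensor reading as a JSON device-to-cloud message.
using var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\": 21.5}"))
{
    ContentType = "application/json",
    ContentEncoding = "utf-8"
};
await deviceClient.SendEventAsync(message);
Console.WriteLine("Telemetry sent.");
```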

Metaverse and Microsoft Azure Cognitive Services

In this session, attendees will learn about the capabilities of Microsoft Azure Cognitive Services and how they can be integrated into Unity applications to enable a new level of AI capabilities.

This session will provide an overview of how to use Azure Cognitive Services to perform image recognition, computer vision, and natural language processing, and how to use them to enhance the immersive experiences of your Unity applications.

Session Takeaways:

- Learn how to use Azure Cognitive Services to enhance your Unity applications

- See how to integrate Azure Cognitive Services into Unity

- Understand how to use Azure Cognitive Services to perform image recognition, computer vision and natural language processing

Introduction to Azure Percept - accelerated AI on the IoT edge

Everybody interested in combining IoT and Artificial Intelligence will benefit from this session.
Azure Percept is a new platform to create hardware accelerated Artificial Intelligence solutions that run on the edge. It is effectively an IoT device. Integration with cloud-based AI such as Azure Cognitive Services or Azure Machine Learning enables quick development and deployment of pre-built intelligence.

In this session, attendees will learn about the hardware Azure Percept and about the concept of the device ecosystem for sensors that integrate with Azure Percept.
The session also demonstrates the Azure-based tools for administering systems that incorporate the device, and explains how those systems relate to the Azure IoT offerings.

The presenter will share personal insights including experiences with typical challenges when working with this type of device.

Developers will see the development experience for applications that leverage Azure Percept and receive information on how to get started.
IT Pros will understand how Azure Percept relates to the Microsoft Azure IoT ecosystem.
Decision makers will hear about the value proposition of Azure Percept and why it could be relevant for their organizations.

MRTK - How to get quick results with HoloLens 2

Mixed Reality Toolkit provides a plethora of functionalities and best practices that can be combined to wire up the application logic for Mixed Reality Applications with a rich interactive presentation layer.
Building rich HoloLens 2 applications is often a complex task, and leveraging the capabilities of the platform can be a frustrating exercise when you have to build things from the ground up.
In this session attendees will learn how to get quick results with the help of Mixed Reality Toolkit when developing Mixed Reality applications, specifically for HoloLens 2.
The different capabilities of Mixed Reality Toolkit will be introduced through demonstrations that show how to implement them quickly in an application without a lot of code.

StereoKit - C# alternative to game engines for XR

StereoKit is an open-source project that aims to make it easy for C# developers to build extended reality (XR) applications without needing commercial game engines like Unity3D or Unreal Engine.
This session gives a brief overview of StereoKit and how to get started.
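To give a feel for how lightweight that is, here is a minimal, hedged StereoKit program based on the project's quickstart pattern; exact API details can differ between StereoKit versions.

```csharp
using StereoKit;   // NuGet: StereoKit

class Program
{
    static void Main()
    {
        // Initialize the runtime; without a headset this falls back to a flatscreen window.
        if (!SK.Initialize(new SKSettings { appName = "HelloStereoKit" }))
            return;

        // A simple cube model to draw every frame.
        Model cube = Model.FromMesh(
            Mesh.GenerateRoundedCube(Vec3.One * 0.1f, 0.02f),
            Default.MaterialUI);

        // Per-frame step loop: draw the cube half a meter in front of the user.
        SK.Run(() =>
        {
            cube.Draw(Matrix.T(0, 0, -0.5f));
        });
    }
}
```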

The flatlander's song about meeting in the MetaVerse

Join our journey to the MetaVerse.

I worked in Microsoft Teams
and whenever I saw your presence
noticed your shirt's seams
but didn't feel your essence

Need the human connection
do tools detection
explore there's more to the core
from the flatland shore
opened the door to the store-y
now sail hurr-y
start the journe-y
to MetaVerse countre-y
AltspaceVR-ey
a Microsoft Mesh bash
backslash fresh dash
brash Flash thresh that
Hulk says smash that
now trash that
hash-tag flatland
lend a human hand
join the human band

Now we meet in the MetaVerse
your body language there's
signals to our limbic system
connecting with the true self within

This session starts by looking at where we are coming from, spending our work life in a 2D flatland, and how we meet and interact today, with different levels of separation impacting our perception of presence and connection to our fellow human beings.

We will then introduce tools of the MetaVerse such as Microsoft Mesh and AltspaceVR, as well as third-party offerings, and discuss their respective capabilities, how to make use of them, and why interacting in 3D space in Virtual Reality in the MetaVerse provides fundamentally new value for connecting with our co-workers, friends, and family through presence and perception.

I hope to have piqued your interest, but be aware: the talk is technical, with the goal of reaching all types of attendees.

The Metaverse, Microsoft Mesh, Azure, and how those work together

The "Metaverse" is a term that first got wider attention in 2021 when Microsoft and Nvidia used it and later Facebook even renamed itself to Meta.
At it's core it is about persistent often immersive 3D worlds that users can interact with and it can have a relationship with the real world.

In this session we will look at what you can do on Microsoft's cloud, software, and services platform when it comes to Metaverse.

We will look at what "Microsoft Mesh" is and what functionalities or tools have been made available to date. Attendees will learn what cloud services in Azure are relevant, such as Azure Spatial anchors.

Developers will hear about the prevalent approaches to develop software and how tools such as the Microsoft-maintained MRTK play a role in designing cross-platform applications that run on Microsoft HoloLens as well as on Meta's Quest 2.

MRTK3 - Introduction to getting quick results for the Metaverse

Mixed Reality Toolkit 3 (in short, MRTK3) has matured into a flexible way to accelerate development for Mixed Reality (MR), that is, Virtual Reality (VR), Augmented Reality (AR), or Extended Reality (XR). This makes it a key enabler for developing applications for the much-hyped "Metaverse".

Both internally at Microsoft and in the overall ecosystem, MRTK is used extensively: not just for HoloLens development (which this talk will of course cover), but also to enable developing for Meta Quest in the same straightforward and easy way.

Join this talk to learn how MRTK accelerates application development for the Metaverse.
Developers will take away which building blocks they can easily leverage in their development workflow and how to deliver quick results.
Decision makers will see how MRTK can lower technical barriers to getting started with developing applications for the Metaverse.

The time is right to get started with building Metaverse applications!

Mixed Reality Workshop - Building AR/VR Applications

In this one-day workshop, attendees will learn about the various concepts and technologies that comprise Mixed Reality applications.

The typical tool chain to develop applications will be demonstrated and explained. This is a technical workshop and active participation of attendees using their own notebook computers is strongly suggested.

Discussing MetaVerse Tech in the MetaVerse

This session will present technical concepts of the MetaVerse, in the MetaVerse. We will talk about Microsoft's Mixed Reality, from HoloLens to Spatial Computing. Content will include some light software development topics, but the session will be inclusive for all technically inclined audiences.

AR, VR, MR, future of collaboration?

AR, VR, MR - what are they and why should you care?
Are they buzzwords or do they mean something for collaboration?

What is different, what is the same?

What are the interaction concepts you should care about for applications, and what matters more for applications that collaborate in an AR, VR, or MR context?

How might some of it apply to products you are already using?

AR and VR applications with Windows Mixed Reality for Developers

In this session participants will learn the capabilities of the Windows Mixed Reality platform for developers and how to develop applications for Mixed Reality, including Microsoft HoloLens or "Immersive" VR type devices.

The session covers the toolchain such as Visual Studio and Unity3D, the main concepts to consider as a developer, as well as how to implement those concepts in practice with step-by-step demonstrations.

Cloud Services for the Metaverse

During the COVID-19 pandemic, the use of Virtual Reality started to pick up for gaming, fitness, and professional uses such as meetings and presence in VR. In 2021, the term "Metaverse" started to get broad attention and also Microsoft made relevant announcements such as Microsoft Mesh.

In this session attendees will learn about what the Metaverse is and what it can be used for.
Then we will look at what services Microsoft has made available within Azure to enable Metaverse scenarios and what has been announced in this context.
This is a rapidly evolving field and the exact list of services is expected to grow every year.
Some of the currently relevant services discussed will be Azure Object Anchors, Azure Spatial Anchors, and Azure Remote Rendering.
Some hands-on demos will round out the presentation.

Finally the session will look at what IoT has to do with the Metaverse and how Microsoft's offerings relate to it.

Code that (almost) writes itself? GitHub Copilot and OpenAI Codex in action

Large scale neural language models trained on publicly available source-code in GitHub and other repositories are able to predict what a software developer may want to write based on a rough description or by using the context of the code they are currently writing.
In this session we will look at how that works and at some examples of what you can do with it in languages such as C#, JavaScript, or Python.
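As a hedged illustration of that comment-driven workflow, the snippet below shows a developer-written comment followed by the kind of completion such a tool typically suggests; the completion is a plausible example, not captured tool output.

```csharp
using System;

// Prompt written by the developer:
// Return true if the given year is a leap year.

// A completion of the kind Copilot/Codex typically suggests:
static bool IsLeapYear(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
}

Console.WriteLine(IsLeapYear(2024));   // True
```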

Computer Vision in Azure - an overview

Computer Vision can enable many scenarios in modern applications. Microsoft Azure has relevant capabilities in many places to empower all types of developers to build such scenarios.

This session will provide a quick overview of the different places you can find and use computer vision in Azure from Cognitive Services to Azure Machine Learning.

Join for a mixture of demos and information about the relevant capabilities.

Decision makers will understand how easy it is to tackle advanced problems with computer vision.
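As a hedged sketch of what "using computer vision in Azure" can look like in code, here is the Computer Vision client SDK for .NET analyzing an image by URL; the endpoint, the key environment variable, and the image URL are placeholders.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

// Endpoint and key come from your own Cognitive Services resource (placeholders).
var client = new ComputerVisionClient(
    new ApiKeyServiceClientCredentials(Environment.GetEnvironmentVariable("VISION_KEY")))
{
    Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
};

// Ask the service to describe and tag an arbitrary image by URL.
ImageAnalysis analysis = await client.AnalyzeImageAsync(
    "https://example.com/some-image.jpg",
    new List<VisualFeatureTypes?> { VisualFeatureTypes.Description, VisualFeatureTypes.Tags });

Console.WriteLine(analysis.Description.Captions[0].Text);
foreach (var tag in analysis.Tags)
    Console.WriteLine($"{tag.Name} ({tag.Confidence:P0})");
```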

Bursting with AI power in Azure with creative vision

As a developer you know how to build applications, and you have learned how to scale architectures, but you are told that for AI you need some mystical AI or deep learning experts.

Through offerings in Azure and related products, developers can do powerful things without needing to get a deep learning degree.

In this session attendees will learn about the breadth of rich AI capabilities in Azure and see computer vision capabilities accessible to all types of developers in action starting from getting information about arbitrary images, to training custom models for specific scenarios, or recognizing faces and much more.

We will also look, with concrete examples, at how the new generation of large AI models such as GPT-3 will transform many business domains and have a rich impact on what has traditionally been considered "creative" work.
In the end we will combine creatively what computer vision and advanced AI can do together.

There will be tech, code, and information on how to get started.

While understanding both the technology and the business value, we will try to have fun and point out the limitations and some of the pitfalls that exist when using AI in the cloud.

The Metaverse - Beyond the buzzword

The last few months of 2023 saw various large companies, including Microsoft, partially divesting from the Metaverse field. Hence it is more important than ever to look beyond the "buzz" and at what the foundations of Metaverse applications can be.
The buzz started in 2021 when Microsoft, then Nvidia, and finally Facebook, which even rebranded itself to "Meta", started using the term "Metaverse"; suddenly all types of businesses and products had to have something to do with it.
In this session we will look at common capabilities that are generally understood as contributing to the "Metaverse". But we will look beyond the buzzword and talk about what value those capabilities provide, specifically those that are already part of, or related to, Microsoft Azure.

Our goal is to look at what you can leverage today to build the Metaverse solutions of tomorrow, and why it makes sense to do so.

In particular we will look at aspects of the industrial metaverse and what it has to do with IoT, and we will talk about how to interact with and understand physical environments.

To be more specific, you may see appearances of technologies such as Azure IoT, Azure Kinect, Azure Spatial Anchors, Azure Remote Rendering, but also Azure Cognitive Services, and more.

We will talk about some of the challenges of developing Metaverse applications with the typically involved toolchains, which in many cases come from a game development background, such as Unity3D, and suggest some approaches to get started.

A visit to the model zoo: Using common or exotic models in your apps

Artificial Intelligence is here; there is no question about it. Yet not everybody is a data scientist or a neural network engineer.
Democratization of AI is a driving force for the next generation of innovation; cloud vendors are leading the field, making training available as a service or hosting the largest available models.
Yet there is another path to empower developers to use AI: using pre-trained models that other people created and made available.
Those models can be discovered in so-called "model zoos". Communities like "Hugging Face" are gaining the traction and attention to shape the direction of the industry.

This session will introduce various places to find useful models. A significant part of the session will provide hands-on demonstrations of various models.

Decision makers will learn how easy and cost-effective it is to start using AI.

Come and join us on our visit to the model zoo.

Applied AI: large language models for knowledge workers and writers

You are proud of your analytical skills, your writing abilities, and your creative spark.
Yet, some days you may need a little help?
Or cranking out text can be repetitive in some cases?

We are in the age of Applied AI, and large language models such as GPT-3 can change the nature of the game for tasks that previously could only be done, in a highly focused manner, by people with a very good command of language and writing ability.

In this session we will look at how typical scenarios, whenever you have to come up with anything that is "text", can be augmented by available AI models.
We will look at text summarization, rewriting text for a specific audience, coming up with headlines and product names, and analyzing the context or tone of a text, plus many more examples of how you can use large language models today.

Besides the value and the usefulness, the session will also talk about some of the risks that individuals and companies looking into those technologies will have to consider.

An AI generated lightning talk - and how I made it

This unusual session will have one or more short segments that were AI generated.
Will you be able to tell?
The mystery will be revealed and the presenter will share how this content was created.

Foundations of the Metaverse: Azure Spatial Anchors and Understanding 3D Space

The Metaverse is one of the big buzzwords adopted by large tech companies in 2021, to the extent that Facebook even changed its name to Meta.

A core component of the Metaverse is the ability to precisely locate oneself, real objects, and virtual objects in a real environment, and to be able to understand that environment.

This session looks at relevant Azure, software, and hardware technologies that help with this task such as Azure Spatial Anchors to enable the Metaverse.

Microsoft HoloLens 2 will serve as a demonstration device for those technologies.

After this session attendees will have a better understanding of what can enable a Metaverse that integrates with the real world.

Microsoft Mesh - Cloud services to enable collaboration in 3D space

Microsoft Mesh was first introduced as the anchor point of Microsoft's Ignite conference in March 2021, which was held in Virtual Reality. Some consider this a watershed moment for the broad enterprise readiness of AR/VR technologies.

This new platform enables building experiences that let people interact remotely as if they were in the same physical space. As a combination of various cloud-powered services, tools, and an SDK, it lets developers create new types of solutions that some consider to be the future of Microsoft Teams.

This session will discuss the most important aspects of Microsoft Mesh, from the capability to exchange and synchronize application state related to 3-dimensional coordinates, to having access to a so-called "Avatar System" that provides an illusion of physical presence and expressiveness for remote participants, to integrating spatially persistent objects and more.
Unlocking so-called "HoloPortation" is one of the more advanced goals of the platform.

Microsoft Mesh also makes it easier to create applications that interact across multiple headset vendors and AR/VR devices, such as Microsoft HoloLens 2, but also AR-capable iOS and Android devices.

Developers will learn about the hands-on developer experience and receive the necessary information to get started.
Decision makers can expect to get an understanding about the unique value Microsoft Mesh can provide for their organization.

Jupyter Notebooks - a versatile tool

Jupyter Notebooks are a popular tool for many professionals and are very prevalent with Python developers and Data Scientists.
But they are actually useful for many more scenarios.

In this session, attendees will see a variety of applications of Jupyter Notebooks to expand their toolbox.

Starting with vanilla Python use cases, the session will also introduce Microsoft Azure's flavor of Jupyter Notebooks.
From there demonstrations will branch out into different development languages and use cases, such as connecting to a SQL Server or writing C# code in a Jupyter Notebook.
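For example, a couple of cells in a .NET Interactive (C#) notebook might look like the following hedged sketch; the connection string and table name are placeholders.

```csharp
// Cell 1: pull in the SQL client package from NuGet (.NET Interactive syntax).
#r "nuget: Microsoft.Data.SqlClient"

using System;
using Microsoft.Data.SqlClient;

// Cell 2: open a connection and read a few rows (placeholder connection string/table).
var connectionString = "Server=localhost;Database=Demo;Integrated Security=true;TrustServerCertificate=true";
await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

await using var command = new SqlCommand("SELECT TOP 5 Name FROM Products", connection);
await using var reader = await command.ExecuteReaderAsync();
while (await reader.ReadAsync())
    Console.WriteLine(reader.GetString(0));
```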

Computer Vision based AI in Azure

Microsoft Azure has powerful Computer Vision capabilities in many places that can enable regular developers to be heroes by incorporating advanced image understanding through AI in their applications.

In this workshop, attendees will see several of these capabilities, spanning multiple product lines, in more detail to get a better understanding of them.
Those offerings live in various parts of Azure, such as Cognitive Services and Applied AI, and also play a role in Azure Machine Learning.

Join this workshop on computer vision tasks and you will learn how to get information out of arbitrary images, train models that classify images and detect objects within a larger image, use basic OCR capabilities, recognize content in forms, analyze videos, and much more.

For those wondering, some of those services are Computer Vision, Custom Vision, Face API, Video Analyzer, Azure Form Recognizer, and more.

Large Language Models: Experiments with language processing neural networks

General-purpose and specialty-purpose large-scale neural language models such as OpenAI's GPT-3, Codex, or the related GitHub Copilot can change the nature of some businesses such as copywriting, but also unlock new approaches to writing software, or provide translation capabilities without having been trained to do so.
This session will briefly look at some examples of what those models can do.

Using such an AI based service is not without risk and the session will mention some of those risks and provide some examples of things you definitely should avoid.

Oh, and we will have some fun with creative uses. AI generated poetry? A virtual rap-battle with Eminem? The possibilities are endless.

WebXR: AR and VR without the fuss

Extended Reality (XR), specifically AR and VR applications are usually developed in complex game engines like Unity3D or Unreal Engine.
Those engines are powerful but often have a significant learning curve, as well as overhead as a development environment and in the DevOps flow.
WebXR tries to make AR and VR applications more approachable and attainable for developers who are already familiar with web standards.
This session provides an introduction into this group of standards.
Attendees will learn what functionalities WebXR covers and, through demonstrations, become familiar with how to solve typical scenarios in WebXR projects. Relevant libraries such as Babylon.js or A-Frame will be introduced.
Developers and decision makers will hear why WebXR can make more sense than choosing a full game engine to implement Extended Reality applications.

WebXR, WebVR, WebAR - a quick introduction

This session provides a quick introduction into WebXR, WebAR, and WebVR, the technology to build Augmented Reality and Virtual Reality solutions through browser technologies.
The session will incorporate live demos.

You have not started with Azure Machine Learning yet?

Azure Machine Learning is the center-stage service Microsoft offers to handle Machine Learning workloads.
This session aims to give attendees the long-overdue chance to learn about how to get started using Azure Machine Learning.

In this demo-centric presentation, various capabilities of Azure Machine Learning across the machine learning lifecycle, all the way from training AI models to deploying and running them, will be shown.

Developers, architects, and decision makers will get a better understanding about what Azure Machine Learning can do and how to get started using it.

AltspaceVR and friends - meetings and more in the Metaverse

One of the killer use cases of the Metaverse is collaborating with other human beings. AltspaceVR is one of the pioneering applications that let human beings interact in a three-dimensional Virtual Reality environment, offering a different kind of experience through remote "presence" with customizable avatars.

In this session, attendees will learn about interactive, collaborative use cases of the Metaverse; in other words, what you can do in some of the key Microsoft Mesh use cases.
The capabilities of the AltspaceVR platform will be introduced, and the presenter will discuss how to leverage them as a participant or organizer of virtual events.
Ranging from simple participation in third-party events to setting up your own online event experience, users will better understand the options they have on that platform.

The presenter will talk about the relationship of AltspaceVR with Microsoft Teams, and if Mesh capabilities are available in Microsoft Teams at that time, the presentation will cover those.
Other alternative third party applications for collaboration in the Metaverse will be briefly discussed.

After this session, attendees can expect to have a better understanding of how to participate in and organize interactive events in Virtual Reality.

Python: A quickstart for non-Python developers

Python is one of the most popular development languages, and by some measures currently the most popular language for beginners. In Artificial Intelligence and Data Science it is practically impossible to not use Python in one way or the other.
Yet, many Enterprise Developers have not started yet.
This fast-focus session provides regular developers a quickstart for using Python.
This is for everybody who has not used Python yet and wants a quick overview on how to get started.

Foundations of the MetaVerse: Spatial Anchors and World Locking

Mixed Reality and Augmented Reality applications need a concept for representing fixed locations in the real world so they can enhance the environment.
The MetaVerse requires such an "anchoring" in the real world.
For this, concepts like Spatial Anchors were introduced, first with Microsoft HoloLens, but they are now available for different device families.
World Locking is a way to then reliably bind the coordinate systems of the virtual world and the physical world, allowing application developers to develop spatial applications more easily.
This fast-focus session will cover Spatial Anchors and World Locking Tools for Unity, and talk about how Spatial Anchors can be translated between device families with the Azure Spatial Anchors service.

Hackathons - unleashing hidden potential in organizations and empowering individuals

Hackathons are a hot topic for driving innovation. Organizations across the globe have discovered that hackathons help their digital transformation endeavors by unleashing hidden potential within the organization or across an industry.
For individuals, hackathons can help them dive into a new subject matter and empower them to work on subjects they always wanted to work on; in some cases hackathons have even changed career paths.
This session will discuss topics around hackathons:
What different types of hackathons exist? What value do hackathons have for organizations and for participants? How does one best prepare for a hackathon?

Most importantly, hackathons are about culture, and in some cases they help to change culture in a positive way. The best hackathons are inclusive and empower people to succeed.

Attendees will learn about the value of hackathons for organizations and individuals, and tips will be shared on how to run or participate in a successful hackathon. This session is for everyone who cares about innovation, digital transformation, and how to disrupt and reinvent themselves.

Get started: What, I need to use Python? Help!

Python has rapidly become the most popular software development language as measured by online searches. The daily responsibilities of many web developers and enterprise developers do not allow diving into this completely new ecosystem as part of their role, yet they are increasingly expected to know Python. This focused session provides the starting point for software developers to begin working with Python.

Jupyter Notebooks - a quick look for regular developers

Jupyter Notebooks are a popular tool for many professionals and are very prevalent with Python developers and Data Scientists.
But they are actually useful for many more scenarios.

In this session, attendees will see a variety of applications of Jupyter Notebooks to expand their toolbox through quick demos.

Starting with vanilla Python use cases, the session will also introduce Microsoft Azure's flavor of Jupyter Notebooks.
From there demonstrations will branch out into different development languages and use cases, such as connecting to a SQL Server or writing C# code in a Jupyter Notebook.

Intelligent Spatial Applications

The availability of cloud-based services such as Microsoft Azure Cognitive Services unlocks advanced computer vision capabilities for regular developers. At the same time, with AR/VR and Mixed Reality devices such as Microsoft HoloLens, a new paradigm has emerged: Spatial Computing.
Here, new concepts such as "Spatial Anchors" have been introduced, with on-device and cloud-based tools that let developers contextualize information to fixed locations in space.
In this session attendees will see computer vision capabilities in action; those tools will then be applied to gain a better understanding of a real 3-dimensional space by annotating what a camera has recognized in that space.
Sounds complicated? It is powerful but you will see that the tools are actually straightforward to use.

Vision in Microsoft Azure Cognitive Services and Applied AI

Computer Vision and AI are not easy to do from scratch. Microsoft Azure Cognitive Services offers various commoditized yet powerful and sophisticated services to work with visual content through Computer Vision.
With those services, regular developers can use their existing skills to deliver AI-enabled solutions without needing to find deep learning experts.

Currently those services can help with tasks such as:
- detecting and recognizing faces
- classifying images or finding objects in an image
- understanding rich information in an image
- processing whole videos and getting insights from them
- recognizing handwriting and ink in applications
- providing OCR (Optical Character Recognition) on documents
- automatically parsing a scan of a paper form

This fast-paced session will introduce the currently available set of services and explain, through sample code and demonstrations, how to use them; new services will be added as they become available.

Decision makers and architects will benefit from understanding those capabilities so they can deliver value in their organizations.

This session will be updated to match the currently available services.

Did you know Cognitive Services could do this? A Computer Vision quest

Microsoft Azure Cognitive Services offer a rich set of features to analyze arbitrary images, detect brand logos, train custom models, index videos, transcribe text in images, and much more.
In this session, attendees will see how to combine and use some capabilities of Microsoft Azure Cognitive Services in useful, and possibly unexpected ways.
With the power of the Intelligent Cloud, Computer Vision will extract information from a set of images; this information can then be used to automatically generate a specialized Custom Vision model that can finally be used on intelligent edge devices to annotate the real world in real time.
Hands-on demos provide actionable starting points for both developers and decision makers to incorporate democratized deep learning in their own applications. The shown code samples will enable developers to start building right away.

Azure Spatial Anchors - what, why, and how?

Azure Spatial Anchors is a new service that is part of Microsoft's Mixed Reality Services in Azure.
This new service lets applications share and persist location information in real space between different ecosystems: Android, iOS, and Microsoft Mixed Reality.
This session will cover what Azure Spatial Anchors is, why it is an important service, and how to use it.
Developers will learn how to use Azure Spatial Anchors through example code and a demonstration.
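As a hedged illustration of the kind of example code shown, the core calls on HoloLens might look like this; the account values are placeholders, newer SDK versions may also require an account domain, and other platforms use their own local anchor types.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.SpatialAnchors;   // Azure Spatial Anchors SDK
using Windows.Perception.Spatial;       // HoloLens flavor; Android/iOS differ

static class AnchorSketch
{
    // Publishes a locally created anchor to the cloud and returns its identifier.
    // On HoloLens the local anchor is a Windows.Perception.Spatial.SpatialAnchor;
    // obtaining it is platform-specific and omitted here.
    public static async Task<string> PublishAsync(SpatialAnchor localAnchor)
    {
        var session = new CloudSpatialAnchorSession();
        session.Configuration.AccountId  = "<account-id>";    // placeholder
        session.Configuration.AccountKey = "<account-key>";   // placeholder
        session.Start();

        // Wrap the local anchor in a cloud anchor and upload it.
        var cloudAnchor = new CloudSpatialAnchor { LocalAnchor = localAnchor };
        await session.CreateAnchorAsync(cloudAnchor);

        // Share this identifier so other devices can locate the same anchor.
        return cloudAnchor.Identifier;
    }
}
```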

Introducing Kinect for Azure for Developers

This session introduces Kinect for Azure and provides a quick overview of what developers can accomplish with this latest iteration of the popular depth camera.
Through demos and sample code, attendees will receive the necessary information on how to start developing applications that leverage Kinect for Azure.

Unity3D with IL2CPP - what is it and how does it work?

Unity3D is a popular multi-platform gaming engine increasingly often used to develop enterprise applications and Microsoft's first choice to develop applications for Mixed Reality.
Unity3D's default scripting language is C#, typically edited, debugged, and deployed in Visual Studio.
But Unity3D was built on Mono to support multiple platforms, which mapped to .NET when running on Windows. Today Unity uses IL2CPP (Intermediate Language To C++) for many target platforms, which then compiles to native code for the target device. In this session, the developer experience, from project creation and code editing to deployment and debugging, will be demonstrated and discussed.

Mixed Reality and HoloLens 2 - What's new and different for developers

Microsoft HoloLens shipped to developers in 2016. Since then Mixed Reality as a platform evolved with Cloud Services and HoloLens 2. In this session, attendees will learn about what's new and different in 2020, including how to utilize the new capabilities of HoloLens 2, like fully articulated hand- and eye-tracking, and relevant cloud-services like Azure Spatial Anchors.

Developing for HoloLens is typically done with the Unity3D engine, which deprecates .NET for UWP and requires a runtime called "IL2CPP" when targeting HoloLens. This results in changes to the development workflow and debugging experience. Come and learn the tools of the trade.

Cognition for applications - Cognitive Services and Mixed Reality in action

Cognition is defined as acquiring knowledge and understanding through thought, experiences, and the senses. This summarizes very well what can be accomplished when using Microsoft Azure Cognitive Services and Mixed Reality capabilities together.
In this session, the presenter uses those services together to enable advanced scenarios. Cognitive Services allow understanding the environment; combined with Mixed Reality services, this understanding maps to locations in the real world.
This allows creating completely new spatially aware experiences.
Example code and demos will show both decision makers and software developers of all types what is possible, and how to start implementing in their organizations.

Space is what matters - looking at the core value of Mixed Reality and HoloLens 2

In this session attendees will learn why it is different and valuable to pursue Mixed Reality solutions compared to 2-dimensional apps. Many MR applications display graphics in 3 dimensions in a visually appealing way but stop there. Often this results in a lost opportunity: Interacting in 3D-space enables completely new scenarios that would be impossible with 2D-applications.
The presenter will discuss what insights and value can be gained with spatially aware applications.
Developers will receive the necessary starting points to start building apps, including for HoloLens 2, and decision makers will receive guidance, enabling them to think about solutions differently.

Developing with Azure Kinect - Understanding the human body

Azure Kinect is the latest iteration of the widely successful Kinect depth sensor that enables understanding human beings. Kinect is all grown up with a modern powerful set of APIs allowing development on multiple operating system platforms.

This session will introduce the developer experience and how to develop for Azure Kinect, from working with the sensor streams of the device, to accessing skeletal tracking of human bodies.

Attendees will learn about scenarios they can implement with skeletal tracking. Through specific examples developers and decision makers will understand how to leverage the technology in real applications.

The foreseeable future of Azure Kinect and news in the related hardware ecosystem will also be discussed, including how it relates to cloud services, the MetaVerse, and technologies such as Microsoft Mesh.

Understanding the Human Body with Depth Cameras

Since the arrival of Microsoft Kinect for the XBOX 360, depth cameras created new possibilities for software developers and researchers to use democratized “skeletal tracking” capabilities to understand the human body. With the announcement of Kinect for Azure, the ecosystem providing intelligence and perception capabilities on the edge is getting richer and it is a good time to understand what developers can do with Skeletal Tracking.
This session will discuss skeletal tracking and introduce some simple math concepts that are helpful for reasoning about the provided tracking data. Tips and guidance on how to build applications that work with skeletal tracking systems will empower developers to get started immediately.
Attendees will learn about specific scenarios like human posture recognition, fall detection, or energy expenditure estimation for sports applications and how those scenarios can be implemented in their own applications.
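One of the simple math concepts alluded to is computing the angle at a joint from three tracked joint positions, a building block for posture recognition; here is a minimal sketch using plain vector math, independent of any particular tracking SDK (the joint names are illustrative).

```csharp
using System;
using System.Numerics;

static class JointMath
{
    // Angle at the 'middle' joint (e.g. the elbow) formed by three tracked
    // joint positions such as shoulder-elbow-wrist, in degrees.
    public static float JointAngleDegrees(Vector3 start, Vector3 middle, Vector3 end)
    {
        Vector3 a = Vector3.Normalize(start - middle);
        Vector3 b = Vector3.Normalize(end - middle);
        float cos = Math.Clamp(Vector3.Dot(a, b), -1f, 1f);   // guard against rounding
        return MathF.Acos(cos) * 180f / MathF.PI;
    }
}

// Example: a right angle at the elbow.
// JointMath.JointAngleDegrees(new Vector3(0, 1, 0), Vector3.Zero, new Vector3(1, 0, 0)) == 90
```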

Mixed Reality - moving from two-dimensional screens to spatial computing

This session teaches attendees how to implement effective Mixed Reality applications that consider three-dimensional space in an empowering and useful way. Through specific code examples and explanations of how to implement the presented scenarios, developers will be enabled to start building applications that participate in three-dimensional space.
Decision makers will learn how to think about Mixed Reality applications being more than displaying information in 3D, but to create value that only this new category of applications can provide.

A range of scenarios and how to implement those scenarios will be discussed. This will start with the user's location, understanding what a user is looking at, or how a user can navigate and freely roam around in the 3D application space.
The session will cover how to make smart use of the real three-dimensional space in the application and to gain "spatial understanding" of the environment the application runs in.
Looking beyond individual applications and devices, a demonstration will introduce how Microsoft Azure Spatial Anchors can enable persistent cross-device experiences.
Various ways to make use of spatial information in solutions will be presented.
Attendees will be empowered to comfortably leave their safe space and dive into Mixed Reality.

Microsoft Cognitive Services Vision capabilities for developers

Microsoft Azure Cognitive Services has impressive Computer Vision functionalities that are available for developers. This session will demonstrate to developers how to use those services in a useful and fun manner.
Hands-on examples will teach how the rich set of APIs that are available can be used and enable developers to get started immediately.
Through seeing Cognitive Services in action, decision makers can better evaluate what Cognitive Services can do for their organizations, envision a future for their solutions, and understand a path how to enrich their existing applications with new capabilities.

Next Gen Mixed Reality - HoloLens 2, Anchors, and Friends

Mixed Reality has advanced the field of Human Computer Interaction (HCI) with Microsoft HoloLens and its Spatial Computing capabilities.
Early this year, Microsoft announced several products that change and accelerate how applications can evolve towards understanding user intent and enabling new interactions.
In this session attendees will hear about Microsoft HoloLens 2, its capabilities, new interaction paradigms and how to develop software for those.
Then the talk will cover how the Azure Spatial Anchors service enables new cross-device and cross-ecosystem spatial applications. Finally the session will demonstrate how Kinect for Azure can add additional value to spatial applications.

Successful hackathons: value for organizations and hackers, experiences as organizer or participants

Hackathons are a hot topic in digital transformation to unleash hidden potential in your organization or across your industry. On an individual level, they can also help you to dive into new subject matters, empower you to work on something you always wanted to work on, or even change your career path. What different types of hackathons exist? What value do hackathons have for organizations and for me? How do I prepare for a hackathon?
After organizing, mentoring, and participating in various hackathons, I find it necessary to share my bigger-picture view of hackathons with a larger audience. Often the unanticipated turned out to be the most interesting part of participating. I will provide tips for both organizers and participants.

AR/VR/Mixed Reality and HoloLens – making it real and useful

Do Holograms dream of photonic sheep? No really: what are those magical things that developers should think about when considering building applications for immersive and holographic AR/VR/MR devices, beyond drawing pretty pixels in 3D space? And how can they help your organization and empower your employees?
Currently, some voices in the industry echo: "What do we want? AR! VR! Mixed Reality! When do we want it? Now!" Let's talk about how to make it useful for commercial and industrial customers and how that can be achieved.
We talk about use cases, such as industrial applications, education, healthcare, or IoT integrations, but also think beyond specific use cases and spend time covering the necessary concepts. How does it all fit together, and why do we sometimes say that we are dealing with a different category of computer when we talk about Microsoft's HoloLens device, for example?

Mixed Reality - It's all about that space, not pixels

Most coverage of Mixed Reality, especially of Microsoft HoloLens, is all about great-looking applications with pretty pixels.
I hate to break it to you: but it's all about that space, not pixels.
In this session we will talk about scenarios you can implement when utilizing spatial information.
It can be all about you, as the user: Where are you? What are you looking at? How do you roam around?
But it can also be about making smart use of the space around you.
We will discuss various ways to make use of spatial information in solutions. Comfortably leave your safe space and dive into Mixed Reality.

Leveraging democratized AI and Microsoft Azure Cognitive Services

In this two-day workshop, attendees will explore a rich set of capabilities that the Microsoft platform provides, both "on the edge" (on a device) and in Azure with Microsoft Cognitive Services.
Examples of services covered include Computer Vision, Custom Vision, Video Indexer, and Form Recognizer.
After this workshop, developers will have gained experience with democratized AI that they can leverage in their workplace.

Mixed Reality Workshop

In this two-day workshop, attendees will receive a comprehensive overview of how to develop applications for Mixed Reality.
After familiarizing themselves with the tool chain, attendees will explore the fundamental capabilities of Mixed Reality applications.
After basic rendering of holograms on HoloLens or in immersive VR applications, attendees will use building blocks such as position, gaze, gesture, user interface elements, persistent spatial coordinates, and more, to learn how to build basic Mixed Reality applications.

A lap around democratized AI any Microsoft developer can use

Artificial Intelligence is everywhere on the Microsoft platform. This session covers the spectrum of democratized AI for developers, which people can just use without needing to be deep learning experts or data scientists.
Through various demos, AI capabilities built into Windows, from Windows Machine Learning (WinML) to Windows Vision Skills, will be introduced; then Azure-based services that developers can leverage will be presented. After this session attendees will be comfortable integrating AI in their apps.

AI-driven Image Generation

In 2022, the field of AI-driven image generation reached a notable milestone with technologies like DALL-E, MidJourney, and Stable Diffusion leading the charge, producing image quality that won widespread approval. Now, as we traverse through 2024, the adoption of these technologies is on an upward trajectory, making it crucial for developers to delve deeper into this domain.

In this session, we'll spotlight some of the prominent image generation tools that have captured the industry's attention. While bypassing the nitty-gritty of the underlying math, we'll touch on the AI strategies propelling text-guided image generation, making it digestible for a broad spectrum of developers.

A significant portion of these AI models are accessed via cloud-based APIs, and we'll provide a hands-on demonstration on leveraging such an API for image generation tasks. On the flip side, we'll also explore self-hosted alternatives like "Stable Diffusion" that facilitate running models locally, a boon for businesses and individuals seeking greater control over their data and computational resources.
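In the spirit of that hands-on demonstration, here is a minimal, hedged sketch of calling a cloud image-generation API, in this case OpenAI's image generations REST endpoint; the prompt, the size, and the API-key environment variable are assumptions.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Ask the image generations endpoint for one 1024x1024 image.
var body = """
{"prompt": "a lighthouse at dawn, abstract art", "n": 1, "size": "1024x1024"}
""";

var response = await http.PostAsync(
    "https://api.openai.com/v1/images/generations",
    new StringContent(body, Encoding.UTF8, "application/json"));

// The response JSON contains a URL (or base64 data) for the generated image.
Console.WriteLine(await response.Content.ReadAsStringAsync());
```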

Pushing the envelope further, we'll delve into advanced use cases including optimizing image output to adhere to a specific style, or tweaking a model to generate images that bear your likeness.

While the crux of this session is rooted in practical application, we won't shy away from addressing the risks associated with this domain. We'll briefly discuss the evolving landscape of Responsible AI, shedding light on risk mitigation strategies in this context.

Join us in this engaging exploration as we dissect the current landscape of AI-driven image generation, offering you a practical understanding and hands-on experience, all while keeping a prudent eye on the associated risks and responsible AI practices.

Mastering the new world of ChatGPT, GPT-4, DALL-E 2, and Multi-modal models

Since ChatGPT, everybody has become aware of the power of large AI models, yet not many people have an overview of this field and how to get started.

This workshop provides a closer look at the state of the art of large AI models that you can use.

This is a rapidly evolving field, and the workshop will aim to cover the state of the art available at the time.

Some of the items covered will be:

Large-Language Models:
- What are Large Language Models and why are they new?
- Overview of the brief history of Large Language Models
- Differences between OpenAI's models: Base-model, Instruct Models, Chat Model
- Other relevant Large Language Models
- Large Language models you can run on your own infrastructure
- Use cases such as Text generation, Summarization, Entity extraction, Classification, Correction, Translation, Style-transfer (rewriting), Code generation, Semantic search, Clustering of information
- Prompt-engineering and Meta-prompts
- Working with program code, e.g. Codex and GitHub CoPilot
- Using OpenAI's APIs directly and in Azure
- Integrating with Large Language Models
- Build your own Chat Bot

Image-generation:
- What are the principles behind image generation models?
- What services exist
- OpenAI's DALL-E 2
- Image generation, In-Painting, Out-painting
- Prompt engineering for image generation
- Using APIs to control DALL-E 2
- A closer look at Stable Diffusion, the advantages of direct model access
- Fine-tuning and adapting Image Generation to specific requirements (e.g. generate images that look like you)
- Exploring advanced concepts such as Latent Space, and changing other advanced parameters
- Generating animations/movies

Multi-Modal models:
- Overview, e.g. Florence, GPT-4
- What are the known capabilities?
- How are Multi-Modal models used?

Azure:
- What pre-built large AI services exist?
- How do those Services relate to specialized Azure Services, e.g. Translation or Summarization, Computer Vision, Custom Vision, Form Recognition, Face Recognition, and more

Large AI Models Everywhere:
- in Bing
- in M365
- in Power Platform

More advanced integration:
- Integrate your data-sources with large AI models

Discussion - Impact and Responsible AI:
- Where are we headed?
- What risks exists?
- How can risks be mitigated?

This is not a complete list and it will be updated as new capabilities appear in the market.

This workshop will start at an introductory level but will cover more advanced topics.

No deep-learning experience required.

Note that for many topics, the cloud providers and vendors require "Responsible AI" approval processes; those parts will mostly be demonstrated by the presenter and probably cannot be tried out live by the audience.

Microsoft Mesh for developers

Explore the capabilities of Microsoft Mesh, the innovative technology fueling collaborative 3D experiences in applications like Microsoft Teams, intended as a solid platform for developing Metaverse applications.

In this session, we will unpack what Microsoft Mesh is and what's needed to develop Metaverse applications in its realm.

We will delve into hands-on concepts such as designing worlds in Unity3D, cloud-based scripting, and physics integration.

After this session, you’ll be equipped to begin building your own solutions with Microsoft Mesh.

Advanced GPT: Agent, do you have a plan? Help me decide!

ChatGPT has showcased its prowess in drafting summaries, translating text, and crafting catchy marketing tag-lines. Yet, is that the extent of its capabilities? Amidst a barrage of hype around writing prompts for various scenarios, it's easy to overlook the potential for more nuanced interactions with these models.

In this session, we aim to venture beyond the superficial and dive into advanced prompting concepts that unlock superior responses. We'll explore the realms of Zero-Shot, One-Shot, and Few-Shot Prompting, delve into the essence of Chain-of-Thought Prompting or Self-Consistency, and unveil other techniques that stand to enhance the efficacy of your queries.
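As a hedged illustration of the difference between zero-shot and few-shot prompting, the snippet below (with a made-up task and labels) shows how a few worked examples teach the model both the task and the expected output format before the actual query.

```csharp
// Zero-shot would send only the instruction and the final review.
// Few-shot prepends worked examples; the task and labels here are made up.
var fewShotPrompt = """
Classify the sentiment of each review as Positive or Negative.

Review: "The headset was comfortable and setup took two minutes."
Sentiment: Positive

Review: "The battery died after an hour and support never replied."
Sentiment: Negative

Review: "The hand tracking exceeded my expectations."
Sentiment:
""";
// Sent as the user message, the model is expected to continue with "Positive".
```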

But why stop there? As we cast our gaze into the not-so-distant future of Large Language Models, the horizon is intriguing. We'll discuss the prospects of harnessing these models as the 'brains' behind more complex tasks – think tool-building, plan-creation, decision-making. We'll also ponder on the steps towards endowing them with a higher degree of agency, a leap towards more autonomous AI.

Yet, no technology is without its Achilles heel. A segment of our discourse will be dedicated to unmasking some of the prevailing weaknesses inherent in current AI models, a necessary contemplation for any forward-thinking developer.

Join us as we navigate through the evolving narrative of Large Language Models, from mastering advanced prompting techniques to envisioning their role in a bolder future. This session is tailored to not just inform, but to provoke thought, stimulate discussion, and fuel your imagination on the boundless possibilities awaiting in the AI frontier.

Red-Teaming your own prompts

Generative AI can create wonderful but also horrible things. With it come new types of risks and attack vectors on systems.

In this session the presenter talks about some of his experiences trying to understand risks and limitations by red-teaming his own usage.

The presenter will share observations he has made about patterns that he believes can often circumvent prompt-based or content-filter-based protections.

With plenty of hands-on examples some of the unique properties of Large Language Models will be explored, and the cat-and-mouse game between attackers and defenders will be discussed.

You will also hear about various attack vectors that you may need to defend against when building AI systems based on Large Language Models or even building Large Language Models.

Furthermore, you will also get some insights into existing "off-the-shelf" solutions, such as various Copilots, whose weaknesses can be exposed through red-teaming, and gain an understanding of how your own solutions may be subject to similar attacks.
