Andreas Erben

Information & Communications Technology

Health & Medical

Mixed Reality Windows Mixed Reality Azure Cognitive Services Microsoft Azure Cognitive Services Azure Custom Vision Democratized Artificial Intelligence Healthcare Technology Virtual Reality Augmented Reality Extended Reality

Ponte Vedra Beach, Florida, United States

Andreas Erben

This entity is not a Hologram

Andreas has spent most of his professional life, a career of over 25 years, integrating "backend" applications: first directly inside or between enterprises and startups, then "in the cloud".
Before that, Andreas dabbled in Virtual Reality in the 90s. In 2013, he started to work with Kinect for Windows v2 and something wonderful happened: Andreas had new fun with new toys, and he was all set to work on new paradigms for interacting with computers.
Then Microsoft HoloLens was announced and he expanded his focus to Mixed Reality applications and concepts.
He cannot stop talking about technology, which apparently some people like, so Microsoft gave him the MVP award. Because he has shown leadership and can advise C-levels and business stakeholders, he was also accepted into the ranks of the Microsoft Regional Directors.
Andreas helps customers as a trusted advisor, external CTO, and innovation consultant; he also produces exciting applications and solutions with a great team of skilled individuals.
He acts as CTO for MR and Applied AI for daenet.

Current sessions

Foundations of the Metaverse: Azure Spatial Anchors and Understanding 3D Space

The Metaverse is one of the big buzzwords adopted by large tech companies in 2021, to the extent that Facebook's parent company even changed its name to Meta.

A core component of the Metaverse is the ability to precisely locate oneself, real objects, and virtual objects in a real environment, and to be able to understand that environment.

This session looks at relevant Azure, software, and hardware technologies, such as Azure Spatial Anchors, that help with this task and enable the Metaverse.

Microsoft HoloLens 2 will serve as a demonstration device for those technologies.

After this session, attendees will have a better understanding of what can enable a Metaverse that integrates with the real world.


Microsoft Mesh - Cloud services to enable collaboration in 3D space

Microsoft Mesh was first introduced as the anchor point of Microsoft's Ignite conference in March 2021, which was held in Virtual Reality. Some consider this a watershed moment for the broad enterprise readiness of AR/VR technologies.

This new platform enables building experiences that let people interact remotely as if they were in the same physical space. As a combination of various cloud-powered services, tools, and an SDK, it lets developers create new types of solutions that some consider to be the future of Microsoft Teams.

This session will discuss the most important aspects of Microsoft Mesh, from the capability to exchange and synchronize application state related to 3-dimensional coordinates, to having access to a so-called "Avatar System" that provides an illusion of physical presence and expressiveness for remote participants, to integrating spatially persistent objects and more.
Unlocking so-called "HoloPortation" is one of the more advanced goals of the platform.

Microsoft Mesh also makes it easier to create applications that interoperate across AR/VR devices from multiple headset vendors, such as Microsoft HoloLens 2, as well as AR-capable iOS and Android devices.

Developers will learn about the hands-on developer experience and receive the necessary information to get started.
Decision makers can expect to get an understanding of the unique value Microsoft Mesh can provide for their organization.


Jupyter Notebooks - a versatile tool

Jupyter Notebooks are a popular tool for many professionals and are very prevalent with Python developers and Data Scientists.
But they are actually useful for many more scenarios.

In this session, attendees will see a variety of applications of Jupyter Notebooks to expand their toolbox.

Starting with vanilla Python use cases, the session will also introduce Microsoft Azure's flavor of Jupyter Notebooks.
From there, demonstrations will branch out into different development languages and use cases, such as connecting to SQL Server or writing C# code in a Jupyter Notebook.
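Part of what makes notebooks so versatile is that, under the hood, an .ipynb file is plain JSON. As a minimal sketch (standard library only; the cell contents are illustrative), a notebook can even be generated programmatically:

```python
import json

# A minimal .ipynb document: a notebook is plain JSON with a list of cells.
# The structure follows the Jupyter notebook format (nbformat 4).
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# Hello from a generated notebook"],
        },
        {
            "cell_type": "code",
            "execution_count": None,
            "metadata": {},
            "outputs": [],
            "source": ["print('a notebook is just JSON')"],
        },
    ],
}

# Writing this file produces a notebook that Jupyter (or VS Code) can open.
with open("generated.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```

Because the format is this simple, notebooks lend themselves to tooling, templating, and automation well beyond interactive data science.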


Computer Vision based AI in Azure

Microsoft Azure offers powerful Computer Vision capabilities in many places, enabling regular developers to be heroes by incorporating advanced AI-based image understanding in their applications.

In this workshop, attendees will examine several such capabilities, spanning multiple product lines, in more detail to get a better understanding of them.
These offerings live in various parts of Azure, such as Cognitive Services and Applied AI, and also play a role in Azure Machine Learning.

Join this workshop on computer vision tasks and you will learn how to extract information from arbitrary images, train models that classify images and detect objects within a larger image, use basic OCR capabilities, recognize content in forms, analyze videos, and much more.

For those wondering, some of those services are Computer Vision, Custom Vision, Face API, Video Analyzer, Azure Form Recognizer, and more.


Experiments with language processing neural networks

General-purpose and special-purpose large-scale neural language models such as OpenAI's GPT-3, Codex, or the related GitHub Copilot can change the nature of some businesses such as copywriting, but also unlock new approaches to writing software, or provide translation capabilities without having been trained to do so.
This session will briefly look at examples of what some of these models can do.
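To give a flavor of what such an experiment looks like, here is a sketch of a completion request with the OpenAI Python library. The model name and parameters are illustrative, and the actual call is commented out because it needs an API key and network access:

```python
# Sketch of a GPT-3 style completion request (illustrative parameters).
# Requires `pip install openai`; the call itself is commented out because it
# needs an API key and network access, and model availability changes over time.
request = {
    "engine": "text-davinci-002",   # illustrative model name
    "prompt": "Write a two-line poem about holograms.",
    "max_tokens": 64,
    "temperature": 0.7,             # higher values give more creative output
}

# import openai
# openai.api_key = "..."          # never hard-code real keys in source
# response = openai.Completion.create(**request)
# print(response.choices[0].text)
```

Even this tiny sketch surfaces the risks the session discusses: prompts and keys are sensitive, and the output is probabilistic, so it must never be trusted blindly.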

Using such AI-based services is not without risk; the session will mention some of those risks and provide some examples of things you definitely should avoid.

Oh, and we will have some fun with creative uses. AI generated poetry? A virtual rap-battle with Eminem? The possibilities are endless.


WebXR: AR and VR without the fuss

Extended Reality (XR) applications, specifically AR and VR, are usually developed in complex game engines like Unity3D or Unreal Engine.
Those engines are powerful, but they often come with a significant learning curve as well as overhead as a development environment and in the DevOps flow.
WebXR tries to make AR and VR applications more approachable and attainable for developers who are already familiar with web standards.
This session provides an introduction into this group of standards.
Attendees will learn what functionalities WebXR covers and see through demonstrations how to solve typical scenarios in WebXR projects. Relevant libraries such as Babylon.js and A-Frame will be introduced.
Developers and decision makers will hear why WebXR can make more sense than choosing a full game engine to implement Extended Reality applications.


WebXR, WebVR, WebAR - a quick introduction

This session provides a quick introduction into WebXR, WebAR, and WebVR, the technology to build Augmented Reality and Virtual Reality solutions through browser technologies.
The session will incorporate live demos.


You have not started with Azure Machine Learning yet?

Azure Machine Learning is the center-stage service Microsoft offers to handle Machine Learning workloads.
This session aims to give attendees the long-overdue chance to learn how to get started with Azure Machine Learning.

In this demo-centric presentation, various capabilities of Azure Machine Learning across the machine learning lifecycle, all the way from training AI models to deploying and running them, will be shown.

Developers, architects, and decision makers will get a better understanding of what Azure Machine Learning can do and how to get started using it.


AltspaceVR and friends - meetings and more in the Metaverse

One of the killer use-cases of the Metaverse is collaborating with other human beings. AltspaceVR is one of the pioneering applications that allow human beings to interact in a three-dimensional Virtual Reality environment while offering a different experience through remote "presence" with customizable avatars.

In this session, attendees will learn about interactive collaborative use cases of the Metaverse, in other words, what you can do in some of the key Microsoft Mesh use cases.
The capabilities of the AltspaceVR platform will be introduced, and the presenter will discuss how to leverage them as a participant or organizer of virtual events.
Ranging from simple participation in third-party events to setting up your own online event experience, users will better understand the options they have on that platform.

The presenter will talk about the relationship of AltspaceVR with Microsoft Teams, and if Mesh capabilities are available in Microsoft Teams by then, the presentation will cover those as well.
Other alternative third party applications for collaboration in the Metaverse will be briefly discussed.

After this session, attendees can expect to have a better understanding of how to participate in and organize interactive events in Virtual Reality.


Python: A quickstart for non-Python developers

Python is one of the most popular development languages, and by some measures currently the most popular language for beginners. In Artificial Intelligence and Data Science it is practically impossible to not use Python in one way or the other.
Yet, many Enterprise Developers have not started yet.
This fast-focus session gives regular developers a quickstart for using Python.
This is for everybody who has not used Python yet and wants a quick overview on how to get started.
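To give a flavor of what the quickstart covers, a few lines of idiomatic Python already show off the features newcomers meet first (the sample data is of course illustrative):

```python
# A taste of idiomatic Python: lists, comprehensions, and f-strings.
languages = ["C#", "Java", "TypeScript", "Python"]

# List comprehension: build a new list in one readable line.
shouty = [name.upper() for name in languages]

# f-strings interpolate expressions directly into text.
for name in shouty:
    print(f"Hello, {name} developer!")

# Dictionaries and membership tests are first-class citizens.
popularity = {"Python": 1, "Java": 2, "C#": 5}
print("Python" in popularity)  # True
```

Coming from C# or Java, the main adjustments are indentation-based blocks and dynamic typing; most other concepts carry over directly.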


Foundations of the Metaverse: Spatial Anchors and World Locking

Mixed Reality and Augmented Reality applications need a way to represent fixed locations in the real world in order to enhance the environment around them.
The Metaverse requires such an "anchoring" in the real world.
For this, concepts like Spatial Anchors were introduced first on Microsoft HoloLens but are now available for different device families.
World Locking then reliably binds the coordinate systems of the virtual world and the physical world, allowing application developers to build spatial applications more easily.
This fast-focus session will cover Spatial Anchors and World Locking Tools for Unity, and talk about how Spatial Anchors can be translated between device families with the Azure Spatial Anchors service.
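To make the anchoring idea concrete, here is a minimal math sketch (not the actual Azure SDK) of how a point defined relative to an anchor is mapped into world coordinates. It is simplified to a yaw-only rotation; real anchor poses use full quaternion rotations:

```python
import math

def anchor_to_world(anchor_pos, anchor_yaw_deg, local_point):
    """Map a point expressed in an anchor's local frame into world space.

    anchor_pos:     (x, y, z) position of the anchor in world coordinates.
    anchor_yaw_deg: rotation of the anchor around the vertical (y) axis.
    local_point:    (x, y, z) offset relative to the anchor.
    Simplified sketch: yaw-only rotation instead of a full quaternion pose.
    """
    yaw = math.radians(anchor_yaw_deg)
    lx, ly, lz = local_point
    # Rotate the local offset around the y axis, then translate by the anchor.
    wx = lx * math.cos(yaw) + lz * math.sin(yaw)
    wz = -lx * math.sin(yaw) + lz * math.cos(yaw)
    ax, ay, az = anchor_pos
    return (ax + wx, ay + ly, az + wz)

# A hologram placed 1 m "in front of" an anchor that is rotated 90 degrees
# ends up offset sideways in world space, not along the world z axis.
print(anchor_to_world((2.0, 0.0, 3.0), 90.0, (0.0, 0.0, 1.0)))
```

This is exactly the bookkeeping that World Locking Tools and the anchor services hide from the developer: content is authored relative to anchors, and the platform keeps the anchor-to-world transform consistent as tracking updates arrive.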


Hackathons - unleashing hidden potential in organizations and empowering individuals

Hackathons are a hot topic for driving innovation. Organizations across the globe have discovered that hackathons help their digital transformation endeavors by unleashing hidden potential within the organization or across an industry.
For individuals, hackathons can help them dive into a new subject matter, empower them to work on subjects they have always wanted to work on, and in some cases have even changed career paths.
This session will discuss topics around hackathons:
What different types of hackathons exist? What value do hackathons have for organizations and for participants? How does one best prepare for a hackathon?

Most importantly, hackathons are about culture and in some cases helping to change culture in a positive way. The best hackathons are inclusive and empower people to succeed.

Attendees will learn about the value of hackathons for organizations and individuals, and tips will be shared on how to run or participate in a successful hackathon. This session is for everyone who cares about innovation, digital transformation, and how to disrupt and reinvent themselves.


Get started: What, I need to use Python? Help!

Python has rapidly become the most popular software development language measured by online searches. The daily responsibilities of many Web-Developers and Enterprise Developers do not allow diving into this completely new ecosystem as part of their role, yet they are increasingly often expected to know Python. This focused session provides the starting point for software developers to begin working with Python.


Introduction to Azure Percept - accelerated AI on the IoT edge

Everybody interested in combining IoT and Artificial Intelligence will benefit from this session.
Azure Percept is a new platform to create hardware accelerated Artificial Intelligence solutions that run on the edge. It is effectively an IoT device. Integration with cloud-based AI such as Azure Cognitive Services or Azure Machine Learning enables quick development and deployment of pre-built intelligence.

In this session, attendees will learn about the Azure Percept hardware and the ecosystem of sensor devices that integrate with it.
The session also demonstrates the Azure-based tools for administering systems that incorporate the device, and explains how it relates to the Azure IoT offerings.

The presenter will share personal insights including experiences with typical challenges when working with this type of device.

Developers will see the developer experience for building applications that leverage Azure Percept and receive information on how to get started.
IT Pros will understand how Azure Percept relates to the Microsoft Azure IoT ecosystem.
Decision makers will hear about the value proposition of Azure Percept and why it could be relevant for their organizations.


Jupyter Notebooks - a quick look for regular developers

Jupyter Notebooks are a popular tool for many professionals and are very prevalent with Python developers and Data Scientists.
But they are actually useful for many more scenarios.

In this session, attendees will see a variety of applications of Jupyter Notebooks to expand their toolbox through quick demos.

Starting with vanilla Python use cases, the session will also introduce Microsoft Azure's flavor of Jupyter Notebooks.
From there, demonstrations will branch out into different development languages and use cases, such as connecting to SQL Server or writing C# code in a Jupyter Notebook.


Intelligent Spatial Applications

The availability of cloud-based services such as Microsoft Azure Cognitive Services unlocks advanced computer vision capabilities for regular developers. At the same time, with AR/VR and Mixed Reality devices such as Microsoft HoloLens, a new paradigm has emerged: Spatial Computing.
Here, new concepts such as "Spatial Anchors" have been introduced, with on-device and cloud-based tools that let developers contextualize information to fixed locations in space.
In this session, attendees will see computer vision capabilities in action, and then those tools will be applied to gain a better understanding of a real three-dimensional space by annotating what a camera has recognized in that space.
Sounds complicated? It is powerful but you will see that the tools are actually straightforward to use.


Vision in Microsoft Azure Cognitive Services and Applied AI

Computer Vision and AI are not easy. Microsoft Azure Cognitive Services offers various commoditized yet powerful and sophisticated services for working with visual content through Computer Vision.
With those services, regular developers can leverage their existing skills to deliver AI-enabled solutions without needing to find deep learning experts.

Currently those services can:
- detect and recognize faces
- classify images or find objects in an image
- understand rich information in an image
- process whole videos and get insights from them
- recognize handwriting and ink in applications
- provide OCR (Optical Character Recognition) on documents
- automatically parse a scan of a paper form
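As a taste of how approachable these services are, the sketch below builds a Computer Vision v3.2 "analyze image" REST request using only the standard library. The endpoint and key are placeholders, and the actual network call is left commented out because it requires a real Cognitive Services resource:

```python
import json
import urllib.parse
import urllib.request

def build_analyze_request(endpoint, key, image_url,
                          features=("Description", "Tags", "Faces")):
    """Build (but do not send) a Computer Vision v3.2 'analyze' request."""
    query = urllib.parse.urlencode({"visualFeatures": ",".join(features)})
    url = f"{endpoint}/vision/v3.2/analyze?{query}"
    return urllib.request.Request(
        url,
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,  # placeholder, not a real key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_analyze_request(
    "https://my-resource.cognitiveservices.azure.com",  # placeholder endpoint
    "my-subscription-key",                              # placeholder key
    "https://example.com/photo.jpg",
)
# Sending the request requires a real Cognitive Services resource:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.full_url)
```

The official SDKs wrap exactly this kind of call; the point of the sketch is that a single authenticated POST is all that separates a regular application from rich image understanding.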

This fast-paced session will introduce the currently available set of services and explain through sample code and demonstrations how to use them. New services will be added as they become available.

Decision makers and architects will benefit from understanding these capabilities and how they can deliver value in their organizations.


Did you know Cognitive Services could do this? A Computer Vision quest

Microsoft Azure Cognitive Services offer a rich set of features to analyze arbitrary images, detect brand logos, train custom models, index videos, transcribe text in images, and much more.
In this session, attendees will see how to combine and use some capabilities of Microsoft Azure Cognitive Services in useful, and possibly unexpected ways.
With the power of the intelligent cloud, Computer Vision will extract information from a set of images; this information can then be used to automatically generate a specialized Custom Vision model that can finally be used on intelligent edge devices to annotate the real world in real time.
Hands-on demos provide actionable starting points for both developers and decision makers to incorporate democratized deep learning in their own applications. The shown code samples will enable developers to start building right away.


Azure Spatial Anchors - what, why, and how?

Azure Spatial Anchors is a new service that is part of Microsoft's Mixed Reality Services in Azure.
This new service lets applications share and persist location information in real space between different ecosystems: Android, iOS, and Microsoft Mixed Reality.
This session will cover what Azure Spatial Anchors is, why it is an important service, and how to use it.
Developers will learn how to use Azure Spatial Anchors through example code and a demonstration.


Introducing Kinect for Azure for Developers

This session introduces Kinect for Azure and provides a quick overview of what developers can accomplish with this latest iteration of the popular depth camera.
Through demos and sample code, attendees will receive the necessary information on how to start developing applications that leverage Kinect for Azure.


Unity3D with IL2CPP - what is it and how does it work?

Unity3D is a popular multi-platform gaming engine, increasingly often used to develop enterprise applications, and Microsoft's first choice for developing Mixed Reality applications.
Unity3D's default scripting language is C#, typically edited, debugged, and deployed in Visual Studio.
But Unity3D was built on Mono to support multiple platforms, which mapped to .NET when running on Windows. Today Unity uses IL2CPP (Intermediate Language To C++) for many target platforms, which then compiles to native code for the target device. In this session, the developer experience, from project creation and code editing to deployment and debugging, will be demonstrated and discussed.


Mixed Reality and HoloLens 2 - What's new and different for developers

Microsoft HoloLens shipped to developers in 2016. Since then Mixed Reality as a platform evolved with Cloud Services and HoloLens 2. In this session, attendees will learn about what's new and different in 2020, including how to utilize the new capabilities of HoloLens 2, like fully articulated hand- and eye-tracking, and relevant cloud-services like Azure Spatial Anchors.

Developing for HoloLens is typically done with the Unity3D engine, which has deprecated .NET for UWP and requires a runtime called "IL2CPP" when targeting HoloLens. This results in changes in the development workflow and debugging experience. Come and learn the tools of the trade.


Cognition for applications - Cognitive Services and Mixed Reality in action

Cognition is defined as acquiring knowledge and understanding through thought, experiences, and the senses. This summarizes very well what can be accomplished when using Microsoft Azure Cognitive Services and Mixed Reality capabilities together.
In this session, the presenter uses these services together to enable advanced scenarios. Cognitive Services allow an application to understand its environment; combined with Mixed Reality services, that understanding can be mapped to locations in the real world.
This makes it possible to create completely new spatially aware experiences.
Example code and demos will show both decision makers and software developers of all types what is possible, and how to start implementing in their organizations.


Space is what matters - looking at the core value of Mixed Reality and HoloLens 2

In this session attendees will learn why it is different and valuable to pursue Mixed Reality solutions compared to 2-dimensional apps. Many MR applications display graphics in 3 dimensions in a visually appealing way but stop there. Often this results in a lost opportunity: Interacting in 3D-space enables completely new scenarios that would be impossible with 2D-applications.
The presenter will discuss what insights and value can be gained with spatially aware applications.
Developers will receive the necessary starting points to start building apps, including for HoloLens 2, and decision makers will receive guidance, enabling them to think about solutions differently.


Developing with Azure Kinect - Understanding the human body

Azure Kinect is the latest iteration of the widely successful Kinect depth sensor, which enables understanding human beings. Kinect is all grown up, with a modern, powerful set of APIs allowing development on multiple operating system platforms.

This session will introduce the developer experience and how to develop for Azure Kinect, from working with the sensor streams of the device, to accessing skeletal tracking of human bodies.

Attendees will learn about scenarios they can implement with skeletal tracking. Through specific examples developers and decision makers will understand how to leverage the technology in real applications.

The foreseeable future of Azure Kinect and news in the related hardware ecosystem will also be discussed, including how it relates to cloud services, the Metaverse, and technologies such as Microsoft Mesh.


Understanding the Human Body with Depth Cameras

Since the arrival of Microsoft Kinect for the XBOX 360, depth cameras created new possibilities for software developers and researchers to use democratized “skeletal tracking” capabilities to understand the human body. With the announcement of Kinect for Azure, the ecosystem providing intelligence and perception capabilities on the edge is getting richer and it is a good time to understand what developers can do with Skeletal Tracking.
This session will discuss skeletal tracking and introduce some simple math concepts that are helpful for reasoning about the provided tracking data. Tips and guidance on how to build applications that work with skeletal tracking systems will empower developers to get started immediately.
Attendees will learn about specific scenarios like human posture recognition, fall detection, or energy expenditure estimation for sports applications and how those scenarios can be implemented in their own applications.
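One of the simple math concepts in question is computing the angle at a joint from three tracked points, which is the basis of posture checks such as detecting a bent knee. A minimal sketch with hypothetical joint positions (any real system would supply these from its body-tracking API):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c in 3D space.

    a, b, c are (x, y, z) joint positions, e.g. hip, knee, and ankle from a
    skeletal tracking system. Uses the dot-product formula:
        cos(theta) = (u . v) / (|u| * |v|)
    where u and v are the vectors from b to a and from b to c.
    """
    ux, uy, uz = (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    vx, vy, vz = (c[0] - b[0], c[1] - b[1], c[2] - b[2])
    dot = ux * vx + uy * vy + uz * vz
    norm = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(dot / norm))

# Hypothetical hip/knee/ankle positions in meters: a straight leg gives an
# angle near 180 degrees; a deep squat gives a much smaller knee angle.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.1), (0.0, 0.0, 0.0)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")
```

Fall detection and energy expenditure estimation build on the same primitives: joint positions over time, turned into angles, velocities, and heights.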


Mixed Reality - moving from two-dimensional screens to spatial computing

This session teaches attendees how to implement effective Mixed Reality applications that consider three-dimensional space in an empowering and useful way. Through specific code examples and explanations of how to implement the presented scenarios, developers will be enabled to start building applications that participate in three-dimensional space.
Decision makers will learn how to think about Mixed Reality applications being more than displaying information in 3D, but to create value that only this new category of applications can provide.

A range of scenarios, and how to implement them, will be discussed. This will start with the user's location, understanding what a user is looking at, and how a user can navigate and freely roam around in the 3D application space.
The session will cover how to make smart use of the real three-dimensional space in the application and to gain "spatial understanding" of the environment the application runs in.
Looking beyond individual applications and devices, a demonstration will introduce how Microsoft Azure Spatial Anchors can enable persistent cross-device experiences.
Various ways to make use of spatial information in solutions will be presented.
Attendees will be empowered to comfortably leave their safe space and dive into Mixed Reality.


Microsoft Cognitive Services Vision capabilities for developers

Microsoft Azure Cognitive Services has impressive Computer Vision functionalities available to developers. This session will demonstrate to developers how to use those services in a useful and fun manner.
Hands-on examples will teach how the rich set of available APIs can be used and enable developers to get started immediately.
Through seeing Cognitive Services in action, decision makers can better evaluate what Cognitive Services can do for their organizations, envision a future for their solutions, and understand a path to enriching their existing applications with new capabilities.


Next Gen Mixed Reality - HoloLens 2, Anchors, and Friends

Mixed Reality has advanced the field of Human Computer Interaction (HCI) with Microsoft HoloLens and its Spatial Computing capabilities.
Earlier this year, Microsoft announced several products that change and accelerate how applications can evolve toward understanding user intent and enabling new interactions.
In this session attendees will hear about Microsoft HoloLens 2, its capabilities, new interaction paradigms and how to develop software for those.
Then the talk will cover how the Azure Spatial Anchors service enables new cross-device and cross-ecosystem spatial applications. Finally, the session will demonstrate how Kinect for Azure can add additional value to spatial applications.


Successful hackathons: value for organizations and hackers, experiences as organizer and participant

Hackathons are a hot topic in digital transformation to unleash hidden potential in your organization or across your industry. On an individual level, they can also help you to dive into new subject matters, empower you to work on something you always wanted to work on, or even change your career path. What different types of hackathons exist? What value do hackathons have for organizations and for me? How do I prepare for a hackathon?
After organizing, mentoring at, and participating in various hackathons, I feel it is necessary to share my bigger-picture view of hackathons with a larger audience. Often the unanticipated turned out to be the most interesting part of participating. I will provide tips for both organizers and participants.


AR/VR/Mixed Reality and HoloLens – making it real and useful

Do Holograms dream of photonic sheep? No, really: what are those magical things developers should think about when considering building applications for immersive and holographic AR/VR/MR devices, beyond drawing pretty pixels in 3D space? And how can they help your organization and empower your employees?
Currently some voices in the industry echo: "What do we want? AR! VR! Mixed Reality! When do we want it? Now!" Let's talk about how to make it useful for commercial and industrial customers, and how that can be achieved.
We will talk about use cases such as industrial applications, education, healthcare, or IoT integrations, but also think beyond specific use cases and spend time covering necessary concepts. How does it all fit together, and why do we sometimes say that we are dealing with a different category of computer when we talk about Microsoft's HoloLens device, for example?


Mixed Reality - It's all about that space, not pixels

Most coverage of Mixed Reality, especially of Microsoft HoloLens, is all about great-looking applications with pretty pixels.
I hate to break it to you: But it's all about that space, not pixels.
In this session we will talk about scenarios you can implement when utilizing spatial information.
It can be all about you, as the user: Where are you? What are you looking at? How do you roam around?
But it can also be about making smart use of the space around you.
We will discuss various ways to make use of spatial information in solutions. Comfortably leave your safe space and dive into Mixed Reality.


Leveraging democratized AI and Microsoft Azure Cognitive Services

In this two day workshop, attendees will explore a rich set of capabilities that the Microsoft platform provides, both "on the edge" (on a device), and in Azure with Microsoft Cognitive Services.
Examples of services covered include Computer Vision, Custom Vision, Video Indexer, and Forms Analyzer.
After this workshop, developers will have gained experience with democratized AI that they can leverage in their workplace.


Mixed Reality Workshop

In this two day workshop, attendees will receive a comprehensive overview on how to develop applications for Mixed Reality.
After familiarizing themselves with the tool chain, attendees will explore the fundamental capabilities of Mixed Reality applications.
After basic rendering of holograms on HoloLens or in immersive VR applications, attendees will use building blocks such as position, gaze, gestures, user interface elements, persistent spatial coordinates, and more, to learn how to build basic Mixed Reality applications.


A lap around democratized AI any Microsoft developer can use

Artificial Intelligence is everywhere on the Microsoft platform. This session covers the spectrum of democratized AI that developers can just use without needing to be deep learning experts or data scientists.
Through various demos, AI capabilities built into Windows, from Windows Machine Learning (WinML) to Windows Vision Skills, will be introduced; then Azure-based services that developers can leverage will be presented. After this session, attendees will be comfortable integrating AI into their apps.