Most Active Speaker

Todd Sharp


Principal Developer Advocate - Amazon Interactive Video Service - Twitch

Blairsville, Georgia, United States

I’m a developer who advocates for and evangelizes interactive video streaming at Twitch. I’ve been writing code since 2004, and I’m passionate about technology, code, and helping developers learn new things. As a Developer Advocate, I write demos to show other developers how to use certain languages, frameworks, and technologies to solve the problems that they face every day. I have a background in writing and a love for theater, so I’m extra fortunate to have found a role that lets me use those skills and passions as part of my everyday routine. I’m married to my best friend, and we live in the North Georgia mountains with our 2 kids, 8 chickens, 2 dogs, and a cat. When I’m not working, I love to cook, hike, travel, play video games, and tinker with electronics and microcontrollers.


  • Most Active Speaker 2023

Area of Expertise

  • Information & Communications Technology
  • Media & Information


Topics

  • Cloud
  • Cloud & Infrastructure
  • Cloud Computing
  • Cloud Architecture
  • Microservices
  • Java
  • Groovy
  • Node.js
  • Raspberry Pi
  • Arduino
  • Live Streaming
  • Live Video
  • Video Streaming
  • Video
  • AWS
  • Amazon Web Services
  • video production
  • Gaming
  • Streaming
  • JavaScript
  • JavaScript & TypeScript
  • video processing
  • video analytics
  • Java & JVM
  • Enterprise Java
  • JavaScriptCore
  • Cloud Native
  • Artificial Intelligence
  • Machine Learning and AI
  • AI & Machine Learning
  • Machine Learning
  • WebRTC
  • Web APIs
  • Web Apps
  • Modern Web
  • IoT
  • Web
  • Game Engines
  • Game Development
  • Videogames Development
  • Video Games
  • Translation
  • Unity
  • Virtual Reality
  • Educational Technology

Building a Pan/Tilt/Zoom Live Streaming Camera Rig

If you've ever been in the market for a webcam that supports pan, tilt, and zoom (PTZ), you'll know that it's pretty difficult to find one for under $1000. That's just silly, when we have the technology and APIs to build our own for around 100 bucks!

In this session, you'll see how I used an Xbox controller, an Arduino, a few servo motors, and some HTML and JavaScript APIs to build the whole assembly. Even if you don't need your own PTZ rig, you'll still learn about some really cool JavaScript APIs, like Web Serial and the Gamepad API, that you may not be aware of.
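As a rough sketch of the controller-to-servo plumbing described above - the function name, dead zone, and serial protocol here are my own choices, not necessarily what the session builds:

```javascript
// Sketch: mapping a Gamepad API axis value to a servo angle.
// Gamepad axes report values in [-1, 1]; hobby servos typically
// accept angles in [0, 180].
function axisToServoAngle(axis, deadZone = 0.1) {
  // Ignore tiny stick drift around center.
  if (Math.abs(axis) < deadZone) axis = 0;
  // Linearly map [-1, 1] -> [0, 180], clamped.
  const clamped = Math.max(-1, Math.min(1, axis));
  return Math.round((clamped + 1) * 90);
}

// In a browser, you'd poll the controller each animation frame and
// write the angle to the Arduino over a Web Serial port, roughly:
//   const [pad] = navigator.getGamepads();
//   const pan = axisToServoAngle(pad.axes[0]);
//   writer.write(new TextEncoder().encode(`P${pan}\n`));
```

The browser-side calls (`navigator.getGamepads()`, the Web Serial writer) are kept in comments because they only run in a secure browser context; the mapping itself is plain JavaScript.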

Adding Live, Interactive Video to Your Application with Amazon IVS

In this session, we'll look at how you can add live, interactive video streaming to your application with the Amazon Interactive Video Service (IVS) JavaScript SDK.

In the first 5 months of 2022, users consumed over 543 billion minutes of live video created by almost 9 million creators on Twitch. There's no question that streaming is popular, but why is it in such demand? Part of the draw for users is that streaming is unscripted, dynamic, and unpredictable. It's not the same old formulaic "cookie-cutter" content available via traditional mediums. Another attraction is the high level of interactivity with the content producer. What other medium provides live chat and interaction to viewers while they consume the content? Streaming gives the consumer a level of active participation in their entertainment instead of just passively consuming it.

It's also highly accessible - anyone can set up a stream at little to no cost from the comfort of their own home or office, or anywhere, as long as they have a modern mobile device and internet connectivity. And it's not just for video games - any application that wants to reach an audience, locally or globally, can take advantage of the power of streaming to reach millions of users!

Automated Multi-Cam Live Streaming

Most user-generated content streams come from a content creator using a single camera - and that camera is usually in a fixed position. The reason for this is obvious: most streamers only require a single cam for “talking head” videos. Of course, there are streamers who use multi-cam setups and have hotkeys to swap between them, but this requires manually managing the transitions between the cameras, which can be tedious, time-consuming, and detrimental to the production quality of the stream. Professionally generated content like conferences and events is often multi-cam, which usually involves a producer determining the proper cam to broadcast throughout the stream.

What if we could truly automate multi-cam live streams and make them accessible to both UGC and PGC streamers? In this session, we’ll look at one solution for automated multi-cam streams by utilizing an Aqara Presence Sensor FP2 to determine human presence in a zone and automatically composite multiple cameras or switch a stream’s source camera when appropriate. Imagine being able to live stream from your workshop and have your viewers follow along with what you’re doing as you move around the shop. Think about a conference live stream that can switch cameras as the speaker moves around the stage. We’ll see the hardware and code required to automate multi-cam streams and explore the infinite possibilities.
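To make the idea concrete, here is a minimal sketch of the camera-selection logic, assuming the sensor's per-zone occupancy readings have already been normalized into a simple array - the zone names, camera IDs, and reading shape are all hypothetical, not the FP2's actual output format:

```javascript
// Hypothetical mapping from presence-sensor zones to camera sources.
const zoneToCamera = {
  workbench: 'cam-1',
  printer: 'cam-2',
  whiteboard: 'cam-3',
};

// Pick the stream's active source from zone occupancy readings:
//   zones: [{ name: 'workbench', occupied: true }, ...]
function pickCamera(zones, fallback = 'cam-wide') {
  const occupied = zones.filter((z) => z.occupied);
  if (occupied.length === 0) return fallback;        // nobody detected: wide shot
  if (occupied.length > 1) return 'composite';       // several zones: composite view
  return zoneToCamera[occupied[0].name] ?? fallback; // single zone: its camera
}
```

In practice a sensor like the FP2 exposes occupancy through an integration (e.g. HomeKit), which you would translate into this normalized form before deciding when to switch or composite.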

Demystifying the Digital Dystopia: Improving User Generated Content Platforms with AI and ML

In the past year, GPT-4 opened the world’s eyes to the power of Artificial Intelligence. This awakening has brought concerns about neural networks that blur the lines of reality and sentient systems that will take over the internet and push their own evil agenda on the masses. In reality, these developments in AI technology can be used to make the internet a better place by helping with mundane tasks like content detection and moderation.

In this session, we’ll look at various approaches to content detection and moderation on live streaming platforms, and how to improve them with the help of AI. We’ll see how to analyze live streams for offensive or inappropriate content and use that analysis as a prompt for moderator intervention. We’ll also see how we can use AI content detection to assist in stream discovery and monetization, and how to analyze chat messages for sentiment and PII detection.
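As a simplified illustration of where such a check slots into the pipeline - a real system would call an ML service for sentiment and PII detection, but a regex-based stand-in shows the shape; the patterns, heuristics, and function names here are purely illustrative:

```javascript
// Illustrative patterns for PII-like strings in chat messages.
const PII_PATTERNS = [
  /\b\d{3}-\d{2}-\d{4}\b/,            // SSN-like
  /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/,  // phone-number-like
  /[\w.+-]+@[\w-]+\.[\w.]+/,          // email-like
];

// Flag a message for moderator review before (or instead of) sending
// it to a heavier ML-based analysis step.
function screenMessage(text) {
  const flags = [];
  if (PII_PATTERNS.some((re) => re.test(text))) flags.push('possible-pii');
  if (text === text.toUpperCase() && text.length > 12) flags.push('shouting');
  return { text, flags, needsReview: flags.length > 0 };
}
```

A cheap pre-filter like this can also reduce cost: only messages that pass (or fail) it need to be forwarded to the paid AI service for deeper analysis.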

Creating an AI Driven "Mood Ring" For Visualizing Chat Sentiment

Do you remember mood rings? They were a gimmick in the '70s and '80s that supposedly changed color based on your mood. It's time to dig that old gimmick up once again, but this time we'll bring it into the 21st century and use AI, a microcontroller, and NeoPixel LEDs to make a legit mood ring that can be used to visualize sentiment in a live chat room.

In this session, we'll see how to perform sentiment analysis on a text string from a live chat room. We'll learn the basics of microcontrollers (Arduino) and use one to change the color of an LED light ring based on detected chat sentiment. We'll also discuss how this solution can be extended for other use cases.
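A minimal sketch of the sentiment-to-color half of that pipeline, assuming sentiment arrives as a score in [-1, 1] (for example, averaged over recent chat messages from a sentiment-analysis service) - the color stops and serial frame format are my own choices for illustration:

```javascript
// Map an aggregate sentiment score in [-1, 1] to an RGB color
// for the NeoPixel ring.
function sentimentToColor(score) {
  const s = Math.max(-1, Math.min(1, score));
  if (s < -0.33) return { r: 255, g: 0, b: 0 }; // negative: red
  if (s > 0.33) return { r: 0, g: 255, b: 0 };  // positive: green
  return { r: 0, g: 0, b: 255 };                // neutral: blue
}

// The microcontroller side would receive a frame like "0,255,0\n"
// over serial and write the color to the LED ring.
function toSerialFrame({ r, g, b }) {
  return `${r},${g},${b}\n`;
}
```

Smoothing the score over a sliding window of messages (rather than reacting to every single one) keeps the ring from flickering on noisy chat.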

Experimenting with Hardware and AI to Create Dynamic Live Streaming User Experiences

In this session, we'll take a look at various hardware and AI based experiments aimed at creating unique user experiences to enhance live streaming applications.

What if we could truly automate multi-cam live streams and make them accessible to both UGC and PGC streamers? In this session, we’ll look at one solution for automated multi-cam streams by utilizing millimeter wave human presence sensors to automatically composite multiple cameras or switch a stream’s source camera when appropriate.

What about using computer vision to analyze live stream chat or even video frames to determine a broadcaster's or viewer's sentiment? We'll look at a real-life "mood ring" that changes colors based on the sentiment as determined by AI/ML.

Have you ever looked into PTZ cameras for your live streams but realized just how expensive they are? We'll see how to build your own PTZ camera with some hardware and a video game controller.

These experiments might not be practical or cost-effective, but they'll inspire you to think beyond traditional approaches and add new levels of interactivity to your live streaming applications.

Next Level Interactivity: Live Streaming Directly From Your Game

In this session, we'll look at how to create a dynamic live streaming experience directly from a game built in Unity. Live streaming has traditionally required the use of third-party streaming software like OBS or Streamlabs. These programs come with a learning curve, and make it tricky for new streamers to get started. By integrating the live streaming experience directly into a game engine, you can lower the barrier to entry for new content creators. Additionally, you can add next level interactions like viewer-controllable cameras, multi-cam views, and even influence gameplay itself from user chat. We'll look at all of these use cases and more in this session.
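As one sketch of the chat-to-gameplay idea - in Unity the equivalent would be written in C#, but the parsing logic looks the same in any language, and the command set and action shapes below are hypothetical:

```javascript
// Parse viewer chat messages into in-game actions so chat can
// switch cameras or influence gameplay. Commands look like
// "!cam 2" or "!spawn slime"; anything else is ordinary chat.
function parseChatCommand(message) {
  const match = message.trim().match(/^!(\w+)(?:\s+(.*))?$/);
  if (!match) return null; // not a command
  const [, command, arg] = match;
  switch (command.toLowerCase()) {
    case 'cam': {
      const n = Number(arg);
      return Number.isInteger(n) && n >= 1
        ? { type: 'switch-camera', index: n }
        : null; // malformed camera index: ignore
    }
    case 'spawn':
      return arg ? { type: 'spawn', item: arg } : null;
    default:
      return null; // unknown commands are ignored
  }
}
```

In a real integration, the returned action objects would be queued and consumed by the game loop, with rate limiting so a busy chat cannot thrash the camera every frame.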

Real-Time Live Stream Transcription and Translation

Live streams are a powerful way to share content with remote viewers in near real-time. However, that near real-time nature means they're notoriously difficult to transcribe and translate, which makes your content less accessible to certain viewers, like the hearing impaired or those who don't natively speak the same language as the broadcaster. In this session, we'll look at a few options for real-time transcription and translation for both low-latency (HLS) and real-time (WebRTC) live streams and discuss the pros and cons of the various approaches.
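To illustrate one small piece of such a pipeline, here is a sketch that chunks transcribed words into timed caption cues that could then be attached to a low-latency HLS stream (e.g. as timed metadata) or sent over a WebRTC data channel - the cue shape, durations, and parameters are illustrative assumptions, not any service's actual format:

```javascript
// Chunk a transcribed word list into timed caption cues.
//   words: transcript tokens in order
//   startMs: stream timestamp where this transcript segment begins
function buildCues(words, startMs, { wordsPerCue = 6, msPerWord = 350 } = {}) {
  const cues = [];
  for (let i = 0; i < words.length; i += wordsPerCue) {
    const chunk = words.slice(i, i + wordsPerCue);
    const begin = startMs + i * msPerWord;
    cues.push({
      text: chunk.join(' '),
      startMs: begin,
      endMs: begin + chunk.length * msPerWord, // duration scales with chunk size
    });
  }
  return cues;
}
```

A translation step would typically run per cue (after chunking, before delivery), since whole-cue phrases translate better than individual words.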

Add Live Streaming to Your Application (Without Being a Video Expert!)

Live streaming enables impactful community engagement through unpredictable, dynamic, and interactive experiences. However, building reliable, low-latency streaming infrastructure with global scale poses complex challenges. Amazon Interactive Video Service (IVS) aims to simplify adding live video to your application by providing a fully-managed service built on the same network and infrastructure that powers Twitch.

In this session, I'll introduce you to Amazon IVS and show you how it can be used to create low-latency streaming channels for your users that can deliver video with around 2-5 seconds of latency. I'll also teach you how to create collaborative streaming experiences with Amazon IVS real-time streaming, where latency is as low as 250ms. There's no need to be a video expert to unleash the power of live streaming in your application. You won't have to think about transcoding, storage, edge delivery - or any of the many challenges that come with building a low-latency global network capable of delivering high-quality video around the world. As a developer, you should focus on user experience - not building and maintaining the infrastructure that enables it.
