
Teresa Wu

VP Engineer at J.P. Morgan, GDE Flutter/Dart

London, United Kingdom

Teresa is a public speaker, Google Developer Expert (GDE), mentor, and software engineer who is passionate about front-end development and Cloud technology. She has worked with many talented developers to craft apps and projects over the years, and she enjoys exploring the world of multi-platform development, the fun of continuous delivery, and seeing a product through from development to release.

Awards

  • Most Active Speaker 2023
  • Most Active Speaker 2022

Area of Expertise

  • Finance & Banking
  • Information & Communications Technology
  • Media & Information

Topics

  • Flutter
  • TDD & BDD
  • Google Cloud Platform
  • GenAI
  • Web Development
  • React Native
  • React
  • Generative AI
  • CI/CD
  • Automation & CI/CD
  • Gemini
  • Artificial Intelligence
  • Machine Learning and Artificial Intelligence
  • MLOps

Flutter WebApp with WebAssembly & Desktop

Single-page web apps (SPAs) have existed for nearly two decades, and they have traditionally been built with JavaScript, HTML, and CSS. Meanwhile, WebAssembly provides another way to build web apps: compiling other languages to run on the web.

This presentation walks you through:
- building Flutter web apps with WebAssembly, including a deep dive into how the Dart language and its garbage collection support Wasm in Flutter, with code examples and a performance comparison (see the sketch below)
- building and distributing desktop applications with Flutter
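
As a taste of what this looks like in practice, here is a minimal sketch, assuming a recent Flutter SDK with stable Wasm support (3.22 or later); the app code itself needs nothing Wasm-specific, since the compilation target is selected at build time:

```dart
// main.dart - a minimal Flutter app; nothing here is Wasm-specific,
// the compilation target is chosen when you build.
import 'package:flutter/material.dart';

void main() {
  runApp(
    const MaterialApp(
      home: Scaffold(body: Center(child: Text('Hello from Wasm!'))),
    ),
  );
}

// Build with the WebAssembly (dart2wasm) target:
//   flutter build web --wasm
// Compare against the default JavaScript (dart2js) build:
//   flutter build web
```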

DevOps for Frontend

DevOps is commonly associated with the backend, but it applies to both frontend and backend development: it is a collection of tools and practices that accelerate our daily tasks and move everything from manual work to automation.

However, DevOps for frontend projects differs slightly from the backend. In this talk, I will walk you through what DevOps means for frontend developers and how to create a CI/CD pipeline that improves productivity and the release cycle (a minimal sketch follows the takeaways below).

Key takeaways:

- The different types of CI/CD pipeline
- Project design, architecture, and modularisation
- Decouple services and reduce cyclic dependencies
- Tool configuration
- Types of pipelines for integration, release, and deployment
- Application versioning and infrastructure management
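
To make the pipeline idea concrete, here is a minimal sketch of a stage runner written as a Dart script; the stage names and commands are illustrative assumptions, and a real project would normally express them in its CI system's own configuration:

```dart
// tool/ci.dart - a sketch of a front-end CI stage runner.
// The stages (analysis, tests, release build) are illustrative assumptions.
import 'dart:io';

Future<void> runStage(String name, String cmd, List<String> args) async {
  stdout.writeln('--- $name ---');
  final result = await Process.run(cmd, args, runInShell: true);
  stdout.write(result.stdout);
  stderr.write(result.stderr);
  if (result.exitCode != 0) {
    exit(result.exitCode); // fail fast so the pipeline stops at this stage
  }
}

Future<void> main() async {
  await runStage('Static analysis', 'flutter', ['analyze']);
  await runStage('Unit tests', 'flutter', ['test']);
  await runStage('Release build', 'flutter', ['build', 'web', '--release']);
}
```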

Gemma: Empowering Front-End Development with Open-Source AI

In this talk, we'll delve into the exciting world of Gemma, a groundbreaking family of open-source AI models by Google. We'll explore how you can leverage Gemma's capabilities to build innovative front-end projects.
Key takeaways:
• Unveiling Gemma: Understand the core functionalities and architectures of these lightweight, state-of-the-art models
• Benefits and Use Cases: Discover the advantages of using Gemma, including its open-source nature, versatility across tasks, and efficiency
• Implementation in Front-End Projects: Explore practical methods for integrating Gemma into your front-end applications, unlocking new possibilities for user interaction and functionality
• Gemma vs. Gemini: Shed light on the relationship between Gemma and the larger Gemini models, and explore the distinct characteristics of each, including size, performance, and optimal use cases.

Join this session to:
• Gain a comprehensive understanding of Gemma open models
• Discover how to leverage Gemma's potential in your front-end projects
• Make informed decisions about choosing between Gemma and Gemini for your specific needs

This talk is ideal for front-end enthusiasts eager to explore the cutting edge of AI and its applications in user-facing experiences.

This talk dives into Google's open-source Gemma models, showing you how to integrate them into your front-end projects.
• Learn about Gemma's capabilities and benefits
• Discover how to use Gemma
• Understand the difference between Gemma and its big brother, Gemini
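
As one hedged illustration of the integration point above, the sketch below calls a locally hosted Gemma model from Dart through an Ollama-style REST endpoint. The endpoint URL, the model tag gemma:2b, and the use of package:http are assumptions made for this example, not part of the talk:

```dart
// gemma_client.dart - calls a locally hosted Gemma model through an
// Ollama-style REST endpoint; endpoint and model tag are assumptions.
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<String> askGemma(String prompt) async {
  final response = await http.post(
    Uri.parse('http://localhost:11434/api/generate'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'model': 'gemma:2b', 'prompt': prompt, 'stream': false}),
  );
  final decoded = jsonDecode(response.body) as Map<String, dynamic>;
  return decoded['response'] as String;
}

Future<void> main() async {
  print(await askGemma('Suggest an accessible colour palette for a dashboard.'));
}
```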

Google Gemini 101: Unleashing the Power of AI in Your Front-End Projects

Harness the Potential of Large Language Models (LLMs) with Google Gemini! This session provides a comprehensive introduction to Gemini, a powerful AI model from Google. We'll explore how you, as a front-end developer, can leverage its capabilities to build next-generation user interfaces.

Understand Gemini:
• LLMs & Artificial Intelligence (AI): Gain a foundational understanding of Large Language Models and their role within the broader field of Artificial Intelligence.
• Machine Learning (ML) Fundamentals: We'll break down the key concepts of Machine Learning, including training data, model architectures like transformers, and the training and tuning process.

Building with Gemini:
• The Power of Gemini: Discover the capabilities of Gemini, focusing on its ability to generate text, translate languages, and answer your questions in an informative way.
• Unlocking Gemini's Potential: Explore practical methods for integrating Gemini's functionalities into your front-end applications.

Join this session to:
• Gain a solid understanding of Large Language Models and Google Gemini.
• Discover how to use the model to build innovative front-end experiences.
• Explore practical implementation techniques for integrating Gemini into your projects.

This talk explores Google Gemini, a powerful AI model. Learn the basics of Large Language Models (LLMs) and Machine Learning (ML). Discover how Gemini uses transformers for tasks like text generation and translation. See how to integrate models like Gemini into your front-end projects for innovative features.
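
As a hedged example of what such an integration can look like, here is a minimal sketch using the google_generative_ai Dart package; the model name gemini-1.5-flash and the environment-variable handling are assumptions, so check the current SDK documentation before relying on them:

```dart
// gemini_example.dart - a minimal text-generation call to Gemini.
import 'dart:io';
import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  // The API key is read from an environment variable (an assumption for
  // this sketch); never hard-code keys in front-end code.
  final apiKey = Platform.environment['GEMINI_API_KEY'];
  if (apiKey == null) {
    stderr.writeln('Set GEMINI_API_KEY first.');
    exit(1);
  }
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
  final response = await model.generateContent([
    Content.text('Summarise what a transformer model does in two sentences.'),
  ]);
  print(response.text);
}
```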

GenAI & Gemini in Modern Application

This presentation explores the integration of Gemini, Google's powerful family of large language models, into modern applications. We'll delve into the foundational concepts of Artificial Intelligence (AI), Machine Learning (ML), and the underlying principles of transformers, the architecture that powers Gemini's capabilities.

Key Takeaways:
- Demonstrate how to integrate Gemini with Google Cloud's Vertex AI platform

- Showcase a practical example of integrating Gemini and Retrieval Augmented Generation (RAG) to build a chatbot application. We'll explain how developers can leverage RAG to retrieve relevant information from external databases and enrich the prompts sent to Gemini, enhancing the chatbot's knowledge base (a minimal sketch follows these takeaways)

- Showcase a practical example of using a Vertex AI agent to generate AI responses to text and image prompts, deploying the app with Cloud Run, and setting up a Firebase project connected to the Flutter app

- Explore the diverse applications of Gemini across various industries, including mobile app development, website modernization, data science, security engineering, and DevOps

- Emphasize the importance of responsible AI development and the role of MLOps in managing the lifecycle of AI models. We'll encourage developers to consider the ethical implications of their AI applications and to adopt best practices for building and deploying AI models responsibly
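
To illustrate the RAG takeaway, here is a minimal sketch in Dart: a toy in-memory knowledge base and a naive keyword retriever stand in for a real vector store (for example one backed by Vertex AI), and the retrieved snippets are folded into the prompt sent to Gemini. All names and data below are illustrative assumptions:

```dart
// rag_sketch.dart - Retrieval Augmented Generation in miniature:
// retrieve relevant snippets, enrich the prompt, then ask Gemini.
import 'dart:io';
import 'package:google_generative_ai/google_generative_ai.dart';

const knowledgeBase = [
  'Refunds are processed within 5 working days.',
  'Premium accounts include priority support.',
  'The mobile app supports offline mode since v2.3.',
];

// Naive keyword match standing in for a vector-similarity search.
List<String> retrieve(String query) => knowledgeBase
    .where((doc) => query
        .toLowerCase()
        .split(' ')
        .any((word) => doc.toLowerCase().contains(word)))
    .toList();

Future<void> main() async {
  final apiKey = Platform.environment['GEMINI_API_KEY']!;
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);

  const question = 'How long do refunds take?';
  final context = retrieve(question).join('\n');
  final prompt =
      'Answer using only this context:\n$context\n\nQuestion: $question';

  final response = await model.generateContent([Content.text(prompt)]);
  print(response.text);
}
```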
