Angel Ceballos
Founder and CEO @ SeraphicGuardian | Architect of Defensible Systems
Raleigh, North Carolina, United States
Angel Ceballos is the Founder and CEO of SeraphicGuardian and an architect of defensible systems.
Her work focuses on how complex data and decision systems are designed, governed, and held accountable in real-world environments. Rather than treating governance as documentation or policy layered on top of technology, she builds it directly into system architecture.
With a foundation in data engineering and enterprise architecture, Angel designs and implements accountable infrastructure that governs risk, fairness, and long-term resilience. She challenges organizations to move beyond performance metrics and build systems that withstand scrutiny, degrade safely under stress, and produce defensible evidence by design.
Her mission is to ensure that accountability and safety are not policy add-ons but engineered foundations that preserve the human voice, lived experience, and narrative behind the numbers.
Topics
When Technical Opinions Differ: What Matters Most?
A growing SaaS company runs a production backend written in Python with FastAPI. Traffic is increasing, features are shipping quickly, and operational pressure is rising. The frontend team is strong in JavaScript, while the backend team primarily supports Python.
A proposal is raised: rewrite the backend in Node.js to unify the stack.
Supporters argue that a single language simplifies hiring, improves alignment between frontend and backend, and offers a better async model for handling concurrency.
Opponents raise concerns about rewrite risk, operational stability, existing deployment familiarity, and the fact that database design and transaction handling may matter more than runtime choice.
Both sides are technically reasonable.
The disagreement is not about which language is "better." It is about what the company is choosing to prioritize at this stage.
- Is the priority hiring flexibility?
- Delivery speed?
- Operational predictability?
- Long-term architectural clarity?
- Reduced cognitive load across teams?
This lightning session introduces a practical structure for handling technical disagreements in situations like this:
- Clearly define what matters most for the current phase of the system
- Make the tradeoffs visible in concrete terms
- Assign ownership for the risks being accepted
Engineering and development teams do not need full consensus on the stack. They need alignment on priorities and accountability for the consequences of the decision.
This session is designed for engineers and technical leaders navigating architectural decisions where technical arguments conflict and tradeoffs must be made explicit.
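One way to picture the structure this session proposes is a lightweight decision record that forces priorities, tradeoffs, and ownership into writing. The sketch below is a hypothetical illustration; the field names and the example decision are assumptions, not part of any specific tool or of the session's materials.

```python
from dataclasses import dataclass

# Hypothetical sketch of a lightweight decision record. Every field name
# here is illustrative; the point is that priority, tradeoffs, and the
# owner of the accepted risks are all explicit, not implied.
@dataclass
class DecisionRecord:
    title: str
    priority: str              # what matters most for this phase
    tradeoffs: list[str]       # costs accepted, in concrete terms
    risks_accepted: list[str]  # risks the team is knowingly taking on
    owner: str                 # who is accountable for those risks

# Example outcome for the Python-vs-Node.js scenario above (assumed values).
rewrite_decision = DecisionRecord(
    title="Keep Python/FastAPI backend; revisit in 12 months",
    priority="operational predictability",
    tradeoffs=["two-language stack", "separate hiring pipelines"],
    risks_accepted=["frontend/backend context-switching cost"],
    owner="backend-team-lead",
)
```

The record itself resolves nothing; its value is that the team can disagree about the stack while still agreeing on what was prioritized and who owns the consequences.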
The Evidence Layer: The Missing Component in Modern Data Architecture
Modern data systems are built around layers. We have storage layers, transformation layers, semantic layers, and increasingly feature stores and model layers. But as systems begin to influence automated decisions, another layer becomes necessary: an evidence layer.
Logs and dashboards show what happened. An evidence layer explains why it happened and how the system arrived there. Without it, traceability becomes reactive, and accountability becomes difficult under scrutiny.
This session explores the concept of the evidence layer as a structural component of data architecture. We will examine how lineage, assumptions, transformations, and decision paths can be organized in a way that supports defensibility, transparency, and long-term system integrity.
Attendees will leave with a practical framework for thinking about evidence as an architectural design choice rather than a reporting afterthought.
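To make the idea concrete, one possible minimal shape for an evidence-layer record is sketched below: lineage, assumptions, and the decision path captured at decision time, with a stable fingerprint so the record can be verified later. The structure and field names are assumptions for illustration, not an established standard.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

# Illustrative sketch: one evidence record emitted per automated decision.
@dataclass
class EvidenceRecord:
    decision_id: str
    inputs: dict              # lineage: which upstream sources fed the decision
    assumptions: list         # conditions the decision logic relies on
    decision_path: list       # ordered steps that led to the outcome
    outcome: str

    def fingerprint(self) -> str:
        # Deterministic hash of the record, so it can be re-verified
        # unchanged when the decision comes under scrutiny.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Hypothetical lending decision (all names and values are invented).
record = EvidenceRecord(
    decision_id="loan-7781",
    inputs={"credit_score": "bureau_feed_v3", "income": "payroll_api"},
    assumptions=["income is self-reported monthly gross"],
    decision_path=["score >= 640", "debt_ratio <= 0.4", "approve"],
    outcome="approved",
)
```

Unlike a log line, the record answers "why" and "from what," which is the distinction the session draws between observability and evidence.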
The Shift from Data Governance to AI Accountability
Traditional data governance was built for documentation, reporting, and compliance. But AI systems change the stakes.
As data pipelines increasingly influence automated decisions, recommendations, and outcomes, policies stored in documents and dashboards designed for visibility are no longer enough. Accountability must move from static oversight to structural design.
This session explores the evolution from governance to accountability in modern AI systems. What changes when systems begin to act, not just inform? How do lineage, embedded assumptions, dependencies, and feedback loops shape real-world impact? And what does it mean to design systems that are defensible, not just observable?
Attendees will gain a systems-level framework for rethinking governance in the age of AI and practical guidance on embedding accountability into the foundation of the systems they build.
Designing for Failure: Engineering AI Systems That Degrade Gracefully
AI systems operate in changing environments. Data shifts, assumptions evolve, dependencies update, and models continue running long after conditions have changed. When those changes are not anticipated in the system’s design, small issues can compound into larger consequences.
This session focuses on treating failure as a design consideration rather than an exception. We will examine how architectural decisions influence what happens when models drift, thresholds are misaligned, or upstream data changes unexpectedly. The discussion will cover containment strategies, escalation design, and approaches to limiting downstream impact when systems are under stress.
Participants will leave with a practical, systems-level perspective on building AI architectures that can absorb disruption, reduce unintended harm, and remain accountable when conditions change.
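One containment pattern in this spirit can be sketched in a few lines: when an input-drift signal fires, the system degrades to a conservative default and records the event, rather than letting a drifted model act silently. The drift check, fallback value, and stand-in model below are placeholders, assumed for illustration only.

```python
# Hedged sketch of drift-triggered fallback. The drift signal is a crude
# relative-mean shift; a real system would use a proper statistic (e.g. PSI).

def mean_drifted(current_mean: float, baseline_mean: float, tol: float = 0.2) -> bool:
    """Crude drift signal: relative shift of the input mean beyond tolerance."""
    return abs(current_mean - baseline_mean) / max(abs(baseline_mean), 1e-9) > tol

def score_with_containment(features, model, baseline_mean, audit_log):
    current_mean = sum(features) / len(features)
    if mean_drifted(current_mean, baseline_mean):
        # Degrade safely: a fixed conservative score plus an auditable
        # trail, instead of a silent answer from a drifted model.
        audit_log.append({"event": "drift_fallback", "mean": current_mean})
        return 0.0
    return model(features)

log = []
safe_model = lambda f: 0.9  # stand-in for a trained model

normal = score_with_containment([1.0, 1.2], safe_model, baseline_mean=1.1, audit_log=log)
degraded = score_with_containment([5.0, 6.0], safe_model, baseline_mean=1.1, audit_log=log)
```

The design choice worth noticing is that the fallback path produces evidence of its own, so the degradation itself remains accountable.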
Managing Technical Teams Without a Technical Background
In many organizations, technical teams are led by operations, product, or business leaders who are not hands-on developers. This structure is common as companies scale and responsibilities become more specialized.
Tension tends to surface when expectations are unclear, visibility into technical work is limited, or responsibility for outcomes does not align cleanly with authority. Engineers and developers may feel constrained or misunderstood. Leaders may feel accountable without having enough insight. Both sides operate under pressure.
This session focuses on equipping non-technical leaders with practical tools to manage technical teams effectively while preserving trust, autonomy, and transparency.
We will focus on three practical areas:
- Establishing clarity without micromanagement: How to set expectations, define outcomes, and evaluate progress without controlling implementation details. This includes creating space for engineers to speak candidly about constraints and risk while giving leaders a structured way to understand what is happening.
- Building structured visibility: How to gain meaningful insight into technical work through shared decision criteria, explicit tradeoff discussions, and outcome-based reporting rather than code-level detail. The goal is transparency that builds trust, not surveillance.
- Defining shared accountability: How to assign ownership clearly so decisions, risks, and production responsibilities are visible. This includes aligning authority with responsibility and making expectations explicit on both sides.
The goal of this session is to strengthen collaboration between technical and non-technical roles by improving communication, decision quality, and shared confidence in production outcomes.
Intended audience: operations leaders, product leaders, engineering managers, and technical leads working across functional boundaries.
Designing Backend Systems That Survive Production Incidents
Production incidents often reveal weaknesses in system design rather than isolated coding errors. When a backend system fails under real load, teams struggle with structural questions: Which component owned the decision? Where did state change? Why did the failure propagate instead of isolating?
This keynote examines the architectural patterns that determine whether a system remains understandable and manageable during an incident.
Rather than focusing on a specific framework or stack, the session centers on three design considerations that apply across languages and platforms:
- Clear responsibility boundaries - Defining transport, business logic, and state transitions in a way that makes ownership visible and predictable.
- Context and traceability - Designing logging and observability so that teams can reconstruct not just events, but the intent behind decisions.
- Intentional architectural boundaries - Evaluating when separation improves resilience and when consolidation reduces operational cost and complexity, using factors such as change frequency, deployment independence, scaling behavior, and failure isolation.
Using practical examples drawn from real service-based systems, we will explore how these design choices influence failure blast radius, recovery time, and long-term maintainability.
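The second consideration above, context and traceability, can be illustrated with a logging sketch in which each entry carries the reason behind a decision, not just the event itself. The field names and the example order are assumptions made for illustration.

```python
import json
import logging

# Sketch of intent-aware logging: each line records why a decision was
# made, so the sequence of events can be reconstructed during an incident.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def record_decision(action: str, reason: str, context: dict) -> str:
    # One structured JSON line per decision; sorted keys keep output stable.
    entry = json.dumps({"action": action, "reason": reason, **context}, sort_keys=True)
    log.info(entry)
    return entry

# Hypothetical checkout decision (all identifiers are invented).
line = record_decision(
    action="reject_order",
    reason="inventory_reserved_elsewhere",  # the intent behind the event
    context={"order_id": "o-123", "component": "checkout-service"},
)
```

During an incident, the difference between "order o-123 was rejected" and "order o-123 was rejected because its inventory was reserved elsewhere" is exactly the reconstruction-of-intent the session describes.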
This keynote is not about a specific architecture style. It is about developing judgment as an engineer. Trends and stacks evolve quickly, but the need for systems that remain understandable under pressure does not.
Intended audience: software engineers, backend developers, platform engineers, and architects responsible for production systems.
Bias Beyond the Model: Why Equity Is Required for Fairness and Why Fairness Is a Systems Property
Most conversations about AI fairness focus on model metrics. Demographic parity. Equalized odds. Accuracy gaps.
But fairness is not a model property. It is a systems property.
Bias enters long before model training and continues long after deployment. It lives in data sourcing decisions, schema design, feature selection, thresholds, feedback loops, and human override policies. A model can appear statistically fair while the system that surrounds it produces inequitable outcomes.
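A toy example with synthetic numbers makes the point: a single global threshold, which is a pipeline choice rather than a model weight, can produce very different approval rates across groups whose score distributions were shaped by upstream data decisions, even if the model scores each group equally accurately.

```python
# Synthetic illustration only: the scores, groups, and threshold are invented.
scores_a = [0.62, 0.66, 0.71, 0.74]  # group A: richer upstream feature coverage
scores_b = [0.48, 0.52, 0.58, 0.73]  # group B: sparser upstream data
THRESHOLD = 0.6                      # set in the pipeline, not learned by the model

def approval_rate(scores):
    # Fraction of the group cleared by the shared threshold.
    return sum(s >= THRESHOLD for s in scores) / len(scores)

rate_a = approval_rate(scores_a)  # every score in group A clears 0.6
rate_b = approval_rate(scores_b)  # only one score in group B does
```

Nothing in this sketch touches model weights; the disparity emerges from the threshold and the upstream data, which is what it means for fairness to be a systems property.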
In this session, we reframe fairness as an emergent architectural property. We explore why equity is required for meaningful fairness, how structural inequities become encoded into data pipelines, and why bias cannot be solved with periodic audits alone.
Attendees will leave with a systems-level mental model for evaluating fairness beyond model weights and a new framework for designing AI systems that account for real-world human impact.