Volodymyr Khrystynych

Meetup Notes: Global Azure Bootcamp

2026-04-18

Categories: Community

Tags: azure, security, ai, api

Spent the day at the Global Azure Bootcamp. Five talks, varying quality. Here's what stuck.

Governing Agents and APIs with Azure API Management — Callon Campbell

Everything is an API right now, and that's especially true once you start wiring up AI agents and MCP servers. The talk was a solid overview of how Azure API Management fits into that picture.

The core components: an API gateway as the front door, a developer portal for consumers, a management plane for admins, and built-in security (auth, rate limiting, etc.). When you leave AI service APIs unmanaged you get unpredictable costs, no visibility, and no safety net if an endpoint goes down.

The policies that matter most for AI workloads:

  • Tokens per minute — scoped by a counter key like sub_id or IP address, so one user can't burn your quota
  • Caching — back it with Azure Redis, return a cached response when the same question gets asked twice
  • Circuit breaker — stops spamming a backend AI endpoint when it's struggling, routes the 429 cleanly to the frontend
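The tokens-per-minute policy is worth making concrete. A minimal sketch of the idea (not APIM's actual implementation): a sliding one-minute window of token usage, scoped by a counter key like a subscription id or IP, so one caller hitting the limit doesn't affect anyone else.

```python
import time
from collections import defaultdict
from typing import Optional


class TokenRateLimiter:
    """Sliding-window tokens-per-minute limit, scoped by a counter key
    (e.g. subscription id or client IP). Illustrative sketch only."""

    def __init__(self, tokens_per_minute: int):
        self.limit = tokens_per_minute
        self.usage = defaultdict(list)  # key -> [(timestamp, tokens), ...]

    def allow(self, key: str, tokens: int, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop usage entries older than the 60-second window.
        window = [(t, n) for t, n in self.usage[key] if now - t < 60]
        used = sum(n for _, n in window)
        if used + tokens > self.limit:
            self.usage[key] = window
            return False  # caller would surface this as HTTP 429
        window.append((now, tokens))
        self.usage[key] = window
        return True
```

The point is the scoping: the counter key decides whose quota gets burned, which is exactly why one noisy user can't starve the rest of the tenant.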

The load balancer in the backends section lets you pool multiple AI endpoints and configure how traffic is distributed — useful for handling outages without redeploying your app. The argument for having this middleware layer rather than calling the AI API directly: when keys or endpoints change, you update the config, not the app.
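The pooling-plus-circuit-breaker combination can be sketched in a few lines. This is my own toy version of the concept, not APIM's backends feature: round-robin over endpoints, trip a backend on failure, and skip it until a cooldown passes.

```python
import itertools
import time


class BackendPool:
    """Round-robin over multiple AI endpoints, skipping any backend that
    recently failed (a minimal circuit breaker: trip on error, retry after
    a cooldown). Endpoint names are placeholders."""

    def __init__(self, endpoints, cooldown=30.0):
        self.endpoints = list(endpoints)
        self.cooldown = cooldown
        self.tripped = {}  # endpoint -> time of last failure
        self._rr = itertools.cycle(self.endpoints)

    def pick(self, now=None):
        now = time.monotonic() if now is None else now
        for _ in range(len(self.endpoints)):
            ep = next(self._rr)
            failed_at = self.tripped.get(ep)
            if failed_at is None or now - failed_at >= self.cooldown:
                return ep
        raise RuntimeError("all backends unavailable")  # surface as 503

    def report_failure(self, endpoint, now=None):
        self.tripped[endpoint] = time.monotonic() if now is None else now
```

The app-side benefit from the talk holds even in this toy: the caller asks the pool for an endpoint and never hard-codes one, so swapping or draining a backend is a config change, not a redeploy.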

Azure API Center also now has an MCP registry alongside the usual API registry and docs. You can define an MCP tool by selecting an API and specifying which functions the agent can call. The demo hit the classic live demo curse — the LLM returned "pet with id 1 not found" — but the concept landed.

Your Inbox Is the Front Door — Tamir Albalkni, QuantM Technology Inc.

If you're running Microsoft 365, attackers aren't targeting your passwords anymore — they're targeting your sessions. One compromised account in a tenant cascades fast because the Microsoft ecosystem is so integrated.

The attack menu:

  • Phishing / password spray — the classics, now AI-assisted
  • Device code abuse — send a phishing lure that says "go to microsoft.com/devicelogin and enter this code." We're all conditioned to do exactly that for Teams meeting activations and helpdesk tickets. The attacker gets a valid auth token.
  • Consent abuse — trick a user into granting an OAuth app permissions
  • Session theft / AitM (Adversary in the Middle) — the W3LL kit, which is literally phishing-as-a-service, runs a reverse proxy between the user and the real Azure login page. The user authenticates for real; the attacker captures the session token and can keep refreshing it.

The AI angle is what makes all of this worse. 82% of phishing emails are now written or rewritten by AI. In 2019 it was spray-and-pray with typos; now you spend a few API credits and get a native, well-written, targeted phish. Filters that caught bad grammar catch nothing here. 54% of recipients click an AI-generated phish. Voice phishing is also becoming a real thing.

A concrete example from the talk: someone's email got hacked and was used to send malicious links to colleagues; the link opened an Excel file that prompted a Microsoft sign-in, and a VBA script installed malware. One account, cascading damage.

Fabric IQ: From Relational Database to Knowledge Graph — Ashraf Ghonsim

This was the most technically interesting talk of the day. The problem being solved: AI inherits all the messiness of your data. Data locked in PDFs, spread across systems, fragmented so that individual pieces are only meaningful when combined — the AI hallucinates to fill those gaps.

The core idea is a hierarchy: data → information → knowledge → wisdom, where each step requires more processing. The bet Fabric IQ is making is that a knowledge graph is the right representation for the "knowledge" layer.

In practice: your SQL tables have relationships, but those relationships are named for developers (customer_id, product_fk). A knowledge graph translates those into verbs and nouns an LLM can reason over — "customer purchased product", "product is stored in warehouse". The ontology layer then narrows what the LLM actually sees to exactly what's relevant to the query, keeping context tight.
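To make the translation step concrete, here's a tiny sketch of the idea in Python. The schema and row data are invented for illustration; only the mapping of FK relationships to verb phrases is from the talk, and the ontology layer here is reduced to a crude entity filter.

```python
# Hypothetical rows standing in for SQL join results.
orders = [
    {"customer": "Alice", "product": "Laptop"},
    {"customer": "Bob", "product": "Monitor"},
]
inventory = [{"product": "Laptop", "warehouse": "East-1"}]


def to_triples(orders, inventory):
    """Translate developer-facing FK relationships into
    (subject, predicate, object) triples an LLM can reason over."""
    triples = []
    for row in orders:
        triples.append((row["customer"], "purchased", row["product"]))
    for row in inventory:
        triples.append((row["product"], "is stored in", row["warehouse"]))
    return triples


def narrow_context(triples, entity):
    """Crude stand-in for the ontology layer: keep only triples that
    touch the entity the query is about, so the LLM's context stays tight."""
    return [t for t in triples if entity in (t[0], t[2])]
```

A query about "Laptop" would then see only the purchase and warehouse facts involving it, not the whole graph, which is the context-narrowing argument in miniature.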

My skepticism going in was that knowledge graphs usually just look cool on a slide. The argument that actually moved me: when a user asks about a product they bought, a natural language query maps more naturally onto a graph traversal than a SQL query the LLM might botch. The ontology layer doing context narrowing is the part I'd want to dig into more — that's doing real work.

Fabric IQ has two agent types: data agents (query and surface information) and operational agents (act in workflows — like identifying churning customers and sending targeted promotions, with a human confirmation step before anything goes out).
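The operational-agent pattern with a human gate is simple enough to sketch. Everything here is a placeholder callable, not a Fabric IQ API; the point is just that the agent flags, but a human confirms before anything is sent.

```python
def operational_agent(customers, is_churning, send_promo, confirm):
    """Sketch of an operational agent loop with a human-in-the-loop gate:
    the agent identifies churn candidates, but no promotion goes out
    without explicit confirmation. All callables are hypothetical."""
    flagged = [c for c in customers if is_churning(c)]
    sent = []
    for customer in flagged:
        if confirm(customer):  # the human confirmation step from the talk
            send_promo(customer)
            sent.append(customer)
    return flagged, sent
```

Keeping the confirmation inside the loop (rather than batch-approving the whole flagged list) is the conservative reading of "human confirmation step before anything goes out".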

OpenClaw on Azure — Kaan Turgut

Self-hosted AI assistant you control through messaging apps. Architecture is: chat app → gateway → agent runtime → AI Foundry. The pitch is local-first — you control your data. If local isn't an option budget-wise (e.g., you don't have a Mac Studio sitting around), Azure is the recommended fallback for its private networking and security options. Not much more to say here.

Context Engineering on Azure — Tara Khani and Majid Fekri

Two quick product mentions: Moorcheh.ai (no re-indexing for vector search after updates, built-in reranker) and MemAnto.ai (HNSW-based memory, installs as a FastAPI app with a CLI, exposes remember, recall, and answer endpoints for agents). Not enough detail to evaluate either.
