Sujan Naik
Game Dev

AI and Games Conference


Bringing together the leading experts in AI for the video games industry.

3rd - 4th November 2025
Goldsmiths, University of London, UK


I volunteered across the two days of the conference, which was held on my university campus.


This post will summarise everything I learned from the talks.


The notes below are somewhat roughly formatted.



Monday 3rd November 2025 - Ian Gulland Lecture Theatre

09:15-10:00 - Martin Weusten (Ubisoft Düsseldorf) "The illusion of life – Managing NPCs in Avatar: Frontiers of Pandora"

Avatar: Frontiers of Pandora - Tech Notes

Challenges

Scale and Performance

20x size difference

Flying mounts

NPCs visible up to 1.5km

Combat scenarios

Settlements

Available level of detail

States of realization

LOD System (Levels 0-5)

LOD 0-4:

Vary update rate of perception and behaviour trees

Full navigation and capabilities

AI LOD 5:

Perception and behaviour trees not updated

Stationary

Imposter:

Perception and behaviour trees not updated

No navigation, can follow waypoints (air only) and other simple actions

Virtual (not visible to player):

Keep minimal data like position and rotation

No navigation

Can follow waypoints and other simple actions

Suspended:

Not visible

Minimal data

Only distance checks every 10s
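
To make the tiering concrete, here is a minimal sketch of how such an AI LOD ladder could be represented, with per-tier update behaviour. The tier names follow the talk, but the fields, structure and interval values are my own illustrative assumptions, not Ubisoft's implementation.

#include <cstdint>

// AI LOD tiers as described in the talk; interval values are assumptions.
enum class AiLod : uint8_t { Lod0, Lod1, Lod2, Lod3, Lod4, Lod5, Imposter, Virtual, Suspended };

struct AiLodSettings {
    float updateIntervalSec;  // how often perception/behaviour trees are ticked
    bool  updatesBrain;       // perception + behaviour trees updated at all
    bool  fullNavigation;     // full navigation vs. waypoints/simple actions only
    bool  rendered;           // still represented visually in some form
};

constexpr AiLodSettings kLodTable[] = {
    /* Lod0      */ { 0.033f, true,  true,  true  },  // LOD 0-4: update rate varies,
    /* Lod1      */ { 0.066f, true,  true,  true  },  // full navigation and capabilities
    /* Lod2      */ { 0.133f, true,  true,  true  },
    /* Lod3      */ { 0.266f, true,  true,  true  },
    /* Lod4      */ { 0.533f, true,  true,  true  },
    /* Lod5      */ { 0.f,    false, false, true  },  // stationary, brain not updated
    /* Imposter  */ { 0.f,    false, false, true  },  // waypoints (air only), simple actions
    /* Virtual   */ { 0.f,    false, false, false },  // keep position/rotation only
    /* Suspended */ { 10.f,   false, false, false },  // only a distance check every 10 s
};

inline const AiLodSettings& SettingsFor(AiLod lod)
{
    return kLodTable[static_cast<int>(lod)];
}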

Ghost NPCs and Ambient NPCs

Handling AI Budget - Too Many NPCs in One Location

Characters

Landmarks

Quests

Encounters

Mounts

Wildlife

Exclusion volumes to prevent wildlife from spawning

Exceeding NPC budget

Problems

Systems randomly interact

Players might do unexpected things

Solutions

Keep NPC numbers low

Accept NPC budget might be exceeded

Improve performance

AI LOD Clamp

Idea

Compute score values

AI update throttle

Rogue/Freak Waves in Nature

Wave frequencies randomly overlap

NPC updates with 128 worlds on the server

Similar effect causing performance spikes

Solution

Limit updates per frame per world

Delay least urgent updates per world
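
A small sketch of the mitigation described above - capping AI updates per frame per world and letting the least urgent updates slip to later frames, so several worlds lining up no longer produces a spike. The urgency score and all names are assumptions for illustration.

#include <queue>

struct PendingAiUpdate {
    int   npcId;
    float urgency;  // e.g. derived from distance to the player and recent stimuli
    bool operator<(const PendingAiUpdate& other) const { return urgency < other.urgency; }
};

// Run at most budgetPerFrame AI updates for one world this frame; the least
// urgent updates stay queued and are naturally delayed to later frames.
void RunThrottledAiUpdates(std::priority_queue<PendingAiUpdate>& pending,
                           int budgetPerFrame,
                           void (*updateNpc)(int npcId))
{
    for (int i = 0; i < budgetPerFrame && !pending.empty(); ++i) {
        const PendingAiUpdate next = pending.top();
        pending.pop();
        updateNpc(next.npcId);  // full perception/behaviour-tree tick
    }
}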

AI LOD Override

Idea

Overall low update rates

Additional on-demand AI update rates

Examples

Animation finished playing

Explosion far from player

Damage or arrow pass near NPC

Solutions

Immediate forced update

Temporary LOD boost (specific or radius)
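
A sketch of the override idea: events can request an immediate forced update or a temporary LOD boost that decays over time. The class, durations and names are assumptions, not the shipped system.

#include <iterator>
#include <set>
#include <unordered_map>

class AiLodOverride {
public:
    // Immediate forced update: e.g. an animation finished playing and the NPC
    // needs to pick its next action now, regardless of its throttled rate.
    void RequestImmediateUpdate(int npcId) { m_forcedThisFrame.insert(npcId); }

    // Temporary LOD boost for one NPC (the real system can also boost a radius):
    // e.g. an arrow passed close by, so react at a higher rate for a while.
    void BoostLod(int npcId, int levels, float seconds) { m_boosts[npcId] = Boost{ levels, seconds }; }

    bool ShouldForceUpdate(int npcId) const { return m_forcedThisFrame.count(npcId) != 0; }

    int LodBias(int npcId) const
    {
        const auto it = m_boosts.find(npcId);
        return it != m_boosts.end() ? it->second.levels : 0;
    }

    void Tick(float dt)
    {
        m_forcedThisFrame.clear();  // forced updates only apply for one frame
        for (auto it = m_boosts.begin(); it != m_boosts.end(); ) {
            it->second.seconds -= dt;
            it = (it->second.seconds <= 0.f) ? m_boosts.erase(it) : std::next(it);
        }
    }

private:
    struct Boost { int levels; float seconds; };
    std::set<int> m_forcedThisFrame;
    std::unordered_map<int, Boost> m_boosts;
};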

AI Culling

Virtualize Not Visible NPCs

Idea:

Using sets of simple volumes (boxes, spheres)

When player is outside: virtualize NPC inside

When player is inside: virtualize NPC outside

Details:

Multiple independent setups - "Divide and conquer"

Tags & bit vectors

Sparse grid structure

Buffer areas

Additional volume types

Dealing with co-op
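
A sketch of the volume-based virtualisation test: each independent setup is a set of simple shapes, and an NPC is virtualised when it is on the opposite side of the setup from the player. The tag/bit-vector bookkeeping, sparse grid and buffer areas from the slides are omitted here; all names are mine.

#include <vector>

struct Vec3 { float x, y, z; };

struct Sphere { Vec3 centre; float radius; };
struct Box    { Vec3 min, max; };

struct CullingSetup {
    std::vector<Sphere> spheres;
    std::vector<Box>    boxes;

    bool Contains(const Vec3& p) const
    {
        for (const auto& s : spheres) {
            const float dx = p.x - s.centre.x, dy = p.y - s.centre.y, dz = p.z - s.centre.z;
            if (dx * dx + dy * dy + dz * dz <= s.radius * s.radius) return true;
        }
        for (const auto& b : boxes) {
            if (p.x >= b.min.x && p.x <= b.max.x &&
                p.y >= b.min.y && p.y <= b.max.y &&
                p.z >= b.min.z && p.z <= b.max.z) return true;
        }
        return false;
    }
};

// Player outside the setup: virtualise NPCs inside it.
// Player inside the setup: virtualise NPCs outside it.
// (Buffer areas around the boundary would avoid flickering in practice.)
bool ShouldVirtualise(const CullingSetup& setup, const Vec3& player, const Vec3& npc)
{
    return setup.Contains(player) != setup.Contains(npc);
}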

AI Activities

NPC Types

Patrollers

Overview:

Randomly following waypoints

Spawn at random waypoint

Optional actions at waypoints

Continue in simplified way when culled

Regular Activities

Overview:

Walk from activity to activity

Condition system

Time of day/weather

Cool down/duration/priority/probability

Spawn tags

Spawn activities

Spawn Activities

Overview:

Low-cost looping

No need for expensive movements

No enter/exit animations needed

Can still look at player when close

Same condition system

Movement through culling

Ghost NPCs

Overview:

Even less cost - animated props

Can't move or recognize player

Time of day and weather conditions

Movement through culling

Rider Formation

Overview:

Groups of riders patrolling

Leader + one or more escorts

Leader follows path loop

Escorts have their own path loop

Adjust speed to match

Precise control of pathing around obstacles


10:10-10:55 - Petr Smrček (Warhorse Studios) "Supporting thousands of simulated NPCs in the open world of KCD2"


Immersion by Whole World Simulation

As if the player was there in person

Building Blocks of Immersion:

Realism

Chores

Deep systems

NPC Behaviours

Daily routines

Crime

Advantages

NPCs are correctly positioned for their activities

Consequences affect the world

Player is not necessary

Performance Heavy

AI LOD

Simple AI LOD in KCD

Goals

Want deeply simulated world

Optimize later

Approach

Create one big level

Create all the behaviours

Profit

Modular Behaviour Trees

Visual scripting language

Low level

Run on each NPC

Behaviour selection

Behaviour execution

Reactions

Naive AI LOD

Optimization of Far NPCs

Remove render, animation and physics

But this breaks the script, so introduce LOD branching in script

Movement vs LOD

No physics and animation

Teleport

Consecutive small teleports

Transparent to script
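
A minimal sketch of the "consecutive small teleports" idea: with physics and animation stripped, a far NPC advances along its precomputed path on each (infrequent) AI tick and snaps to the new position, which looks like ordinary movement to the script. All names and the walk speed are illustrative assumptions.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct PathFollower {
    std::vector<Vec3> waypoints;   // precomputed navigation path
    float distanceAlongPath = 0.f;
    float walkSpeed = 1.4f;        // m/s, an assumed walking pace

    // Called from the low-rate AI update of a far-LOD NPC: no physics, no
    // animation, just a small teleport to the new point on the path.
    Vec3 Advance(float dtSeconds)
    {
        if (waypoints.empty())     return Vec3{ 0.f, 0.f, 0.f };
        if (waypoints.size() == 1) return waypoints.front();

        distanceAlongPath += walkSpeed * dtSeconds;
        float remaining = distanceAlongPath;
        for (size_t i = 0; i + 1 < waypoints.size(); ++i) {
            const Vec3 a = waypoints[i], b = waypoints[i + 1];
            const float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            const float segLen = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (remaining <= segLen && segLen > 0.f)
                return Vec3{ a.x + remaining / segLen * dx,
                             a.y + remaining / segLen * dy,
                             a.z + remaining / segLen * dz };
            remaining -= segLen;
        }
        return waypoints.back();   // end of the path reached
    }
};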

Script Performance

Far away NPCs optimised

Simplified script branch

AI Update on Thread

The AI update takes ~10 ms, so it is unfeasible on the main thread


Move to a new thread

Cannot change entities from AI - defer
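
A sketch of the "defer" pattern: the AI thread records intended entity changes into a command buffer, and the main thread applies them at a safe point in the frame. Names and structure are assumptions; the example call in the final comment is hypothetical.

#include <functional>
#include <mutex>
#include <vector>

// Commands produced on the AI thread but applied on the main thread,
// because entities must not be mutated from the AI update directly.
class DeferredEntityCommands {
public:
    void Push(std::function<void()> command)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_commands.push_back(std::move(command));
    }

    // Called once per frame on the main thread, where entity mutation is safe.
    void Flush()
    {
        std::vector<std::function<void()>> toRun;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            toRun.swap(m_commands);
        }
        for (auto& cmd : toRun) cmd();
    }

private:
    std::mutex m_mutex;
    std::vector<std::function<void()>> m_commands;
};

// AI thread:   commands.Push([npc]{ npc->StartBehaviour("SweepFloor"); });  // hypothetical call
// Main thread: commands.Flush();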

Results

~600 NPCs @ 30 FPS

60-90m NPC visibility

Some NPCs unloaded

Bloated scripting logic

Long loads

Reworked AI LOD in KCD2

Requirements

Way more NPCs

Big city

Fix of the scripting interface

No unloading of NPCs

Design with AI LOD in mind

Core of the Solution

3 AI LODs

Close

70 visible NPCs up to 150m

Middle

Fast switch to close LOD

Up to 400 NPCs or 600m

Quick change to close LOD

Position and state precise

Full AI simulation

Similar optimisation to KCD

Transparent animations

Quick

Far

Heavily optimised

Position within 10m

No states or item handling

Behaviour execution not simulated

Fast behaviour startup

Behaviour Selection

MBTs in Retrospect: Repeating Patterns

Time of day

Properties of the character

Constraints

Deterministic chance

Configured in script, executed in code

Simulation

Purely in code

Find behaviour, switch if needed

Decide position

Move if needed

Move uses only path
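
A sketch of code-side behaviour selection driven by a time-of-day window, character properties and a deterministic chance. The "configured in script, executed in code" split means the table of entries would come from script while the loop below runs in engine code; every concrete detail (fields, hash, tags) is my assumption.

#include <cstdint>
#include <string>
#include <vector>

struct BehaviourEntry {          // authored in script, evaluated in code
    std::string name;
    float startHour, endHour;    // time-of-day window
    uint32_t requiredTags;       // character properties as a bitmask
    float probability;           // chance the behaviour is picked when valid
};

// Deterministic "chance": hash the NPC id, behaviour index and day so the same
// inputs always give the same result (no RNG state to store per NPC).
inline bool DeterministicChance(uint32_t npcId, uint32_t behaviourIdx, uint32_t day, float probability)
{
    uint32_t h = npcId * 2654435761u ^ behaviourIdx * 40503u ^ day * 2246822519u;
    h ^= h >> 16; h *= 2246822519u; h ^= h >> 13;
    return (h % 10000u) / 10000.f < probability;
}

const BehaviourEntry* SelectBehaviour(const std::vector<BehaviourEntry>& entries,
                                      uint32_t npcId, uint32_t npcTags,
                                      float hour, uint32_t day)
{
    for (uint32_t i = 0; i < entries.size(); ++i) {
        const BehaviourEntry& e = entries[i];
        const bool inWindow = (hour >= e.startHour && hour < e.endHour);
        const bool tagsOk   = (npcTags & e.requiredTags) == e.requiredTags;
        if (inWindow && tagsOk && DeterministicChance(npcId, i, day, e.probability))
            return &e;   // switch to this behaviour if it differs from the current one
    }
    return nullptr;      // keep the current behaviour
}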

Memory Savings

Released Memory

75 KB character

25 KB physics

30 KB big AI components

20 KB other engine stuff

Remaining Memory

5 KB RPG representation

10 KB reaction AI

No Perception

Rarely something to perceive

Crime has representation

Switches NPCs to middle LOD

"Design for everything in KCD2"

Passive Approach

10 KB

High Density of Population in Kuttenberg

~1500 NPCs

Limit for Close NPCs

70

Setting Early Limitations

White box model

Avoid showing too many NPCs

No long streets

No see-through fences

Fewer people beneath walls

Soft vision breakers

Speed limits

Visibility Areas

Structural Visibility

Split into areas ~100m²

Relative visibility

Occlusion insufficient

Close/middle LOD
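
A sketch of how area-to-area visibility might be used at runtime: relative visibility between areas is baked offline, and picking an NPC's AI LOD becomes a table lookup plus a distance check. The thresholds reuse the 150 m / 600 m figures above; everything else is assumed.

#include <vector>

// Precomputed, symmetric area-to-area visibility (baked offline from the
// white-boxed city layout); true means an NPC in area B could plausibly be
// seen from area A, so it is eligible for the close/middle LOD.
class AreaVisibility {
public:
    explicit AreaVisibility(int areaCount)
        : m_count(areaCount), m_visible(areaCount * areaCount, false) {}

    void SetVisible(int a, int b, bool v)
    {
        m_visible[a * m_count + b] = v;
        m_visible[b * m_count + a] = v;
    }

    bool IsVisible(int a, int b) const { return m_visible[a * m_count + b]; }

private:
    int m_count;
    std::vector<bool> m_visible;
};

enum class AiLod { Close, Middle, Far };

AiLod PickLod(const AreaVisibility& vis, int playerArea, int npcArea, float distance)
{
    if (!vis.IsVisible(playerArea, npcArea)) return AiLod::Far;
    if (distance < 150.f)                    return AiLod::Close;   // close: up to ~70 NPCs / 150 m
    if (distance < 600.f)                    return AiLod::Middle;  // middle: up to ~400 NPCs / 600 m
    return AiLod::Far;
}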

Other Applications of AI LOD

Skip Time/Fast Travel

Simulated:

Quest progress

Crime or danger

Battles

Player cannot leave

World still simulated

Global far LOD

Soldiers optimized differently

No behaviour, always visible

Sprites, trackview




12:20-12:45 - Vincent Martineau (Ubisoft) "Let the NPCs Fight: Learning Attack Reach from Real Gameplay Data"

Learning attack reach from real gameplay data in Assassin's Creed Shadows


Issues with the game 


NPCs attempting to attack out of reach 

NPCs not attempting to attack within reach


What can affect range

- The animation and the weapon used

- Navmesh constraints

- Engine adapts animations (blending) - e.g. feet positioning

- Inverse kinematics


What’s the problem

Decision making 


We need to know horizontal range and vertical range. 

We could do this by hand:

- List all the attacks: 203 unique attacks

- Find all archetypes using these attacks: 168

Measure the reach

- estimate the range

- find locations in the game world 

- use a test environment 


A total of 676 attacks to measure, so there is a lot of data and the manual process is error-prone


Solution: Automating with machine learning


Learning from data

1. Find all attacks

2. Generate test positions for attacks

3. Extract gameplay data


Training

4. Clean the data

5. Train a model

6. Import model in game


Learn a decision tree

- Fast

- Simple

- Human readable
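
Since the exported model is a decision tree, in-game evaluation can be a tiny loop over nodes. A minimal sketch of what that might look like over two features (horizontal distance and vertical offset); the node layout and feature choice are my assumptions, not Ubisoft's actual export format.

#include <vector>

// One node of a binary decision tree over two features:
// 0 = horizontal distance to target, 1 = vertical offset to target.
struct TreeNode {
    int   feature;     // which feature to test; -1 for a leaf
    float threshold;   // go left if feature value <= threshold
    int   left, right; // child indices; ignored for leaves
    bool  canReach;    // leaf prediction: the attack can reach the target
};

bool EvaluateAttackReach(const std::vector<TreeNode>& tree,
                         float horizontalDist, float verticalOffset)
{
    const float features[2] = { horizontalDist, verticalOffset };
    int node = 0;  // root
    while (tree[node].feature >= 0) {
        node = (features[tree[node].feature] <= tree[node].threshold)
                   ? tree[node].left
                   : tree[node].right;
    }
    return tree[node].canReach;  // human readable: each path is an if/else chain
}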


Results

- <10 minutes finding and pairing

- 3-4 hours data collection time

- 2.11s training time 


Where the model fell short

- Incomplete or biased training data - e.g. when swimming or during other player animations

- Some problems are easier to fix in code/data - e.g. with destructibles


So train differently

4. Clean data

5. Process

6. Etc



Insight from gameplay data 

Editor makes updates safe and easy

Data views = better consistency

Find bugs


Tool delivers:

- fix all previous issues

- can be automated

- runs frequently

- understand the data



13:45-14:30 - Joey Faulkner (PlayerUnknown Productions) "Latent Landscapes: Using machine learning to generate terrain in Prologue: Go Wayback!"

Prologue: Project Artemis

Using ML to Generate a Planet Scale World

Three Game Plan

Using new technologies as tools, not worlds

Rogue Company: Payback

Survival game

Each run is 64km² fresh chunk of land

Generated on a GPU in seconds

Unreal Engine

Core Question

Can an ML model make landscapes feel as good as handmade ones?

Dynamic Range of Problem

Have to compress problem into manageable chunks → latent space

Resolution Pipeline

ML-generated latent → gets us to 128 m per pixel

Decoding (ML) → gets us to 16 m per pixel

Upsampling (ML) → gets us to 4 m per pixel

Landscape Latent Creation

Input → Encoder → Decoder → Output

Goal: Compress high definition detail into low resolution latent space

Landscape Latent Diffusion

Take a latent space and add noise

Can we get out the latent space we started with, before the noise?

Training a machine learning model to take Gaussian noise and generate a latent space

Landscape Upressing

Allows us to...

Fundamental Qualities of ML Which Make It a Nightmare to Work With

Sudden Convergence

It doesn't work until it does

Conditions: models show poor performance and then suddenly understand

ML projects often fail

Impossible to guarantee quality

Problem: How do we develop a game when the ML model isn't working?

Solution: Develop against the training data and pray that the ML model's output ends up looking like it

Mode Collapse

Generative ML models tend to play it safe

Not creative

Problem: How do we make landscapes interesting and diverse?

Solution: Guided Generation

People are creative but ML is scalable

Instead of hoping ML models generate something we want - design ML modules to work with creative people

The Long Tail

"Good ML generations are all alike, but every bad ML generation is bad in its own way"

Problem: How do we cope with bad generations?

Systematic Issues: The Barren Mountain Maps

Playable but boring and breaks immersion

Solution: Detect at runtime and reject

Unpredictable Ones

ML model can and will do anything available to it

Cannot detect

Solution: Make pipeline deterministic and reproducible → log as bug

Qualities Which Make It Good to Work With

Real world data augmented by tech art

Gameplay features fit into natural landscapes

Drainage networks

Extensibility



15:55-16:40 - Patrick Palmer, Andrei Muratov (Amazon Web Services) "From Cloud to Edge: Optimizing Small Language Models for Game Applications on AWS"


From Cloud to Edge: Optimising Small Language Models on AWS

Industry Interest in AI Enablers

Cost and speed factors limit adoption of in-game generative AI

Birth of Generative AI Models as Game Assets

Thesis

Second screen generative AI player assistants are a viable path toward personalising gameplay and providing another input channel

Reference: https://aws.amazon.com/blogs/gametech/revolutionizing-games-with-small-language-model-ai-companions/

Proof of Concept

Steps:

Find a suitable game

Select and fine-tune a model

Build cloud architecture

Build the second screen app

Test and measure

Note: Finding and selecting a model is the most time-intensive step

Key Principle

The higher the intelligence, the higher the inference cost

Fine-tuning SLMs allows smaller models to deliver higher intelligence

Fine-Tuning the SLM

Data Preparation and Generation

Synthetic Data Generation:

Generated 100 training, 200 test samples

Function calling using only our API

Single function calling

Multiple function calling

Multi-turn conversations

Parallel function calling

Handling missing information

AI generation is valid JSON

Realistic scenarios for each tool

Generated by Claude 4

Training Process

(Not elaborated)

Results and Analysis

(Not elaborated)

Conversion and Quantisation

fp16 → int4

Reduce size whilst maintaining precision

3.4GB to 1GB
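
A rough sanity check of those figures, assuming roughly 1.7B parameters (3.4 GB at 2 bytes per fp16 weight): int4 stores half a byte per weight, so the ideal quantised size is ~0.85 GB, and the quoted ~1 GB is consistent with some tensors and quantisation metadata staying at higher precision. The parameter count is my assumption.

#include <cstdio>

int main()
{
    const double params    = 1.7e9;         // assumed parameter count: 3.4 GB / 2 bytes per weight
    const double fp16Bytes = params * 2.0;  // 16-bit weights
    const double int4Bytes = params * 0.5;  // 4-bit weights, ignoring scales and zero-points
    // Prints roughly 3.4 GB vs 0.85 GB; the shipped ~1 GB file is larger than the
    // ideal int4 size because some tensors and metadata stay at higher precision.
    std::printf("fp16: %.1f GB, int4: %.2f GB\n", fp16Bytes / 1e9, int4Bytes / 1e9);
    return 0;
}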

Automatic Speech Recognition Considerations

Pictorial languages

Common Voice dataset

Model fine-tuning and performance



Tuesday 4th November 2025 - Ian Gulland Lecture Theatre

09:15-10:00 - Maciej Celmer (CD Projekt RED) "Heat, MaxTac, and Blockades: Expanding the Police System in Cyberpunk 2077"


Behaviour Trees Combined with FSMs

Car Chase Strategies

Drive towards the player

Drive away

Patrol the quadrant intersection

Get to player from anywhere

On foot search loops

Vehicle combat

Roadblocks

Set up along the player's expected route

Block the entire road

Detect median strip

No blocking dead ends

Vehicles and NPCs behind

MaxTac Encounter

Ultimate police law enforcement unit

Mini boss fight

Stop player's vehicle before the encounter starts

Spawned in player's view

Open space above spawn position

Flat ground around position

Coordinated NPCs

5 Heat Levels

Cars

SUVs

Roadblocks

Armoured SUVs

MaxTac AV (aerial vehicle)

Environment

Large and dense urban environment

Verticality

Heavy traffic and crowds

Challenges

Make police system reliable given environment

Working on existing codebase

Performance concerns

Coordination with gameplay logic

Traffic System

Multiple Connected Lanes - Split into Segments

Persisted Lane Data:

Length

Width

Direction

Connections

NPC areas

How Do We Select Positions for Spawning?

Our Options:

Predefined

Offline generation

Dynamic (at runtime) - CHOSEN

Why Dynamic?

Player's experience is unique, immersive and unpredictable

Level designer workload decrease

Keep memory footprint low

How to Implement?

Graph-Based Lane Discovery

Discover the graph around the player

Predict player's path

Calculate distance along the road - cannot use Euclidean in complex city, so Manhattan preferred

Algorithm

Select potential spawn points on lanes in the discovered graph

Filter points by:

Distance

Dead ends

Player viewport - for AV

Geometry checks - dynamic and static obstacles, free space above for AV

Return a batch of tested points

Wait for the player

Spawn NPC area and encounter vehicles

If timed out, make another request
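
A sketch of the filtering step described above: take candidate points from lanes in the discovered graph, reject dead ends, points at the wrong road distance, points failing the viewport rule (used for the MaxTac AV) or the geometry checks, and return a batch. The callbacks and names are assumptions standing in for the real tests.

#include <cstddef>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

struct SpawnCandidate {
    Vec3 position;
    int  laneId;
    bool onDeadEnd;
};

// The real tests (road distance along the lane graph, viewport test, free
// space for the AV, navmesh reachability) are injected here as callbacks.
struct SpawnChecks {
    std::function<float(const Vec3&)> roadDistanceToPlayer;
    std::function<bool(const Vec3&)>  insidePlayerViewport;
    std::function<bool(const Vec3&)>  hasFreeSpace;
};

std::vector<SpawnCandidate> FilterSpawnPoints(const std::vector<SpawnCandidate>& candidates,
                                              const SpawnChecks& checks,
                                              float minDistance, float maxDistance,
                                              bool requireInViewport,   // the MaxTac AV is spawned in view
                                              std::size_t batchSize)
{
    std::vector<SpawnCandidate> batch;
    for (const SpawnCandidate& c : candidates) {
        if (c.onDeadEnd) continue;                                // never block a dead end
        const float d = checks.roadDistanceToPlayer(c.position);  // distance along the road, not Euclidean
        if (d < minDistance || d > maxDistance) continue;
        if (requireInViewport && !checks.insidePlayerViewport(c.position)) continue;
        if (!checks.hasFreeSpace(c.position)) continue;           // dynamic/static obstacles, space above
        batch.push_back(c);
        if (batch.size() == batchSize) break;                     // return a batch of tested points
    }
    return batch;
}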

Off Traffic Spawn

Used when traffic data is not available

Process:

Generate the points around the player

Check navmesh path

Perform other tests

Stages:

Points generated

Points filtered

AV spawned

Architecture

Communication between police system (scripts) and spawn system (code)

Async processing

Batch processing


11:25-12:10 - Vadim Petrov (Warhorse Studios) "Design for Everything in Kingdom Come: Deliverance 2"

Design for Everything in KCD 2

Why Design for Everything?

Sight

One system, many use cases

Not a good idea

3 Pillars of Immersion

Fully simulated NPCs

Reactive environment

Lasting player effects - consequences last

Sight System Requirements

Big Picture

NPCs look at each other

Friendly NPCs look at the player

Notice suspicious behaviour

Player can sneak past NPCs

Specifics

Kuttenberg is huge

Instant for friendly NPCs

Enemy camps can be big or small

Player needs time to react to threats

Slow for non-friendly NPCs

Sight System Overview

What Can the NPCs See?

The player

Other NPCs

Items - armour, food and weapons

Perceptible volumes - invisible markers for sight (e.g. shopkeepers detect when something is stolen, or chicken killed)

What Can Be Seen on the Player

Perception states with custom conditions:

Drawn weapon

Crouch

Trespass

Loot

Lock pick

Visible stolen equipment

Who Is Looking at What?

We set a limit of a single sight query per NPC. This means deciding what to look at is critical.

Bad formula below:

const auto isLooting     = shared.contains(t, m_PerceptionState.mIsLooting);
const auto isLockpicking = shared.contains(t, m_PerceptionState.mIsLockpicking);
const auto isCarrying    = shared.contains(t, m_PerceptionState.mIsCarrying);

const auto& constants = m_Constants;

// Crime-related boosts from the player's current perception states
const auto lootBoost     = isLooting     ? constants.m_PerceptionLootCrimeBoost     : 1.0f;
const auto lockpickBoost = isLockpicking ? constants.m_PerceptionLockpickCrimeBoost : 1.0f;
const auto carryBoost    = isCarrying    ? constants.m_PerceptionCarryCrimeBoost    : 1.0f;
const auto crimePriority = lootBoost + lockpickBoost + carryBoost;

// We set a limit of a single sight query per NPC
// This means deciding what to look at is critical
// Priority is a weighted sum (A..L are tuning constants) over everything
// that might make a target interesting:
const auto priority =
    A * distTerm                // distance term
  + B * repCon
  + C * relationshipCoef        // relationship to the target
  + D * abs(cha - baseCha)      // charisma difference
  + E * gender
  + F * weapon                  // drawn weapon
  + G * dead
  + H * notHumanRace
  + I * targetBuffBoost
  + J * sideEffectBoost
  + K * crimePriority           // crime boosts computed above
  + L * playerBoost;


Challenges

NPCs See, Player Does Not

NPC sight and player sight are asymmetric

NPCs see artificially in a simplified virtual world

The player sees what is on the screen

Extreme case: NPC dogs invisible

Solution 1: Remove

(Not elaborated)

Solution 2: Recognition Time

Continually successful raycasts for a recognition time

Recognition time based on distance, player stance and NPC status

Solution 3: Two Vision Cones

Separate the vision cone into sectors

True detection only happens in the smaller cone

If an NPC detected the player in the bigger cone:

Play look around animation

Say something

Try to get closer
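
A sketch of the two-cone test: full detection (and the recognition timer) only runs inside the narrow inner cone, while the wider outer cone triggers only the soft reactions listed above. The angles and names are assumptions.

#include <cmath>

struct Vec3 { float x, y, z; };

inline Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Vec3  Normalised(const Vec3& v)
{
    const float len = std::sqrt(Dot(v, v));
    return len > 0.f ? Vec3{ v.x / len, v.y / len, v.z / len } : Vec3{ 0.f, 0.f, 0.f };
}

enum class SightResult { NotSeen, OuterCone, InnerCone };

// Inner cone: true detection (the recognition timer runs).
// Outer cone: suspicion only - look around, say something, try to get closer.
SightResult TestVisionCones(const Vec3& npcPos, const Vec3& npcForward, const Vec3& playerPos,
                            float innerHalfAngleDeg = 30.f, float outerHalfAngleDeg = 70.f)
{
    const Vec3 toPlayer  = Normalised(Sub(playerPos, npcPos));
    const float cosAngle = Dot(Normalised(npcForward), toPlayer);
    const float kPi = 3.14159265f;
    if (cosAngle >= std::cos(innerHalfAngleDeg * kPi / 180.f)) return SightResult::InnerCone;
    if (cosAngle >= std::cos(outerHalfAngleDeg * kPi / 180.f)) return SightResult::OuterCone;
    return SightResult::NotSeen;
}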

Solution 4: HUD Icon

Bunny

Inspired by the sword icon displayed during combat

Proposed in preproduction

"Add later if we need it"

Added late - bad to wait to add key gameplay elements

Pros to Solution 4:

Insight into complex crime system

More transparent gameplay

Cons:

Non-diegetic

"Telepathic"

Recognition Time Looks Dumb

When tuned for stealth gameplay, NPCs seem blind

Solution: Intuitive Stealth Mode

Recognition time is shorter for standing player

Longer for crouching

NPC Hyper Focus

NPC only queries one perception target at a time

Makes the NPC blind to everything else

E.g. NPC guarding a corpse fails to see the player hit the corpse with a sword

Corpse is an NPC with a boost to perception state

Player has a perception state

Player hitting a corpse with a sword creates perception volume

However, the corpse has a higher perception value

Solution: Perception Ignoring

An NPC only looks at a target briefly, then ignores it for a couple of seconds

E.g. NPC looks at corpse, then at armed player, then at nothing

Ignoring is cancelled when ignored target changes state
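
A sketch of perception ignoring: after being looked at, a target goes onto an ignore list for a couple of seconds, and the ignore is dropped early if the target's perception state changes. Timing values and structure are assumptions.

#include <cstdint>
#include <iterator>
#include <unordered_map>

struct IgnoredTarget {
    float    secondsLeft;        // how long the target stays ignored
    uint32_t stateHashAtIgnore;  // perception state when the ignore started
};

class PerceptionIgnoreList {
public:
    void Ignore(int targetId, uint32_t currentStateHash, float seconds = 2.f)
    {
        m_ignored[targetId] = IgnoredTarget{ seconds, currentStateHash };
    }

    // A target is ignored only while its state is unchanged and the timer runs;
    // e.g. a guarded corpse being hit with a sword changes state and is noticed again.
    bool IsIgnored(int targetId, uint32_t currentStateHash) const
    {
        const auto it = m_ignored.find(targetId);
        return it != m_ignored.end()
            && it->second.secondsLeft > 0.f
            && it->second.stateHashAtIgnore == currentStateHash;
    }

    void Tick(float dt)
    {
        for (auto it = m_ignored.begin(); it != m_ignored.end(); ) {
            it->second.secondsLeft -= dt;
            it = (it->second.secondsLeft <= 0.f) ? m_ignored.erase(it) : std::next(it);
        }
    }

private:
    std::unordered_map<int, IgnoredTarget> m_ignored;
};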

Unsolved Problems

Dynamic Vision Cone Ratio

Change vision cone ratio based on NPC conditions

Super confusing, brings nothing to the gameplay

Bushes

Player can hide in bushes in KCD2

Bush blocks raycasts

Problem: Player is invisible even when a torch sticks out of a bush

NPCs inside other bushes can see within bushes


13:45-14:30 - Wesley Kerr (Riot Games) "When Research Meets Release Dates: Production-Grade RL for Games"

Production grade RL for games


The hype


Research wins don't necessarily translate into production wins


Bots overview


Designer/programmer-authored AI

- Well known

- High maintenance costs

- Scales with humans 

- Available before launch


AI learns from players

- less well known 

- low maintenance costs

- scales with players

- only available after launch


Stores game state to learn from 


AI learns from scratch

- Less well known

- Low maintenance costs

 - scales with compute 

- available before launch





Optimise for value from the policy rather than just the policy


The value is not in the solution itself but in what is created from it

E.g.


Testing -> 

Behaviour cloning

Regression testing, automated QA


artificial opponents ->

Drop-in player practice matches

Onboarding


meta analysis ->

Win rate dashboard

Experimentation

Gameplay strategy discovery

Coaching


Player insights

Practice focus

Personalised feedback 


One policy unlocks it all


Treat RL as a platform with gates (prototype -> pilot -> limited live -> production)




Bots

Play like a player

Adapt to patches

Meet players where they are

Long term engagement

Affordable


Can you deliver a bots experience that meets designer specifications in time for launch?

- Designers want tight control over the player experience


Tight weekly iteration between tech research, production ML bots and  Design


Trying to meet readiness criteria might be difficult 

- Criteria might be mutually exclusive

- How to avoid “losing what works?”



RL specific challenges

Navigating variance across deep RL

Managing complexity across experiments

Building trust through transparency 


Failure modes

Reward hacking and proxy misalignment 

- Agents optimise reward, not behaviour


Non stationary data

- Game patches are constantly released 


Evaluation theatre

- Benchmarks don't predict real player fun or live “robustness”


Infra and cost blowups

- Scaling RL and self play looks linear


Griefing, exploits and degenerate gameplay


Parallel investment

Applied bot evolution and long term RL research 

- Need to gather telemetry and data

Applied track:

Designer AI -> hybrid ML/AI -> Imitation AI


Research track:

RL from foundational tooling -> simplified gameplay -> complex gameplay


Engagement model


Research org, central tech team and game team work together in parallel


Research team de-risks early exploration, prototypes, direction

Tech team bridges research and prod 

Game team deploys 


Is RL the right tool now?

Reward alignment

Does the reward align with what designers are trying to achieve


Evaluation capability

How do you know if your bots are performing well


Safe failure mode

How do you launch without impacting player experience


Baseline available

What player experience are you comparing against


Maintenance plan

How do policies adapt as your game evolves?


Four gates framework


Prototype -> pilot -> limited live -> production



Scope

Offline

Sandbox QA environment


Goal 

Beat a heuristic baseline on live-relevant metrics

Stable performance across seeds, without regressions


Focus 

Reward shaping, signal quality, early policy validation

Core training loop, automated evaluation and data pipelines


Gate

Ready when results are reproducible and interpretable

Ready when training + eval can run end to end predictably



Evaluation before optimisation

Spend time to figure out how to measure success


Gameplay quality vs stability vs coverage vs operations


Design for handoff and sustainability


Build training that others can own, extend and trust