Schedule

Times in the program are shown in your time zone.

Program is filling up

New talks are published weekly. Follow updates or secure your ticket early.

  • AI (8 talks)
    • Talk

      Scaling AI in QA at Yandex: Measuring the Impact of LLMs Across 1,000+ Engineers

      How do you measure the real impact of GenAI in testing at the scale of 1,000+ QA engineers and separate actual value from the “wow” effect? We’ll share the metric framework and results from real implementations: LLMs for generating test documentation, accelerating E2E automation, and AI agents running regression tests.

    • Talk

      Practical Vibecoding Without Hype

      How to transform chaotic "vibecoding" into a controlled engineering process using Spec-Driven Development, custom skills, and proper testing tools. Through a live project build (Web, Mobile, Backend), I will demonstrate how to get AI agents to deliver high-quality results on the first try, avoiding the trap of endless chat corrections.

    • Talk

      Agent Systems for Mobile Regression

A practical look at building an agent-based system for mobile regression: interacting with the device, selecting models, and a set of engineering techniques that reduce non-determinism and help validate results without fooling yourself.

    • Talk

      Implementing AI in Small Companies

We will discuss how to adopt AI in a company that lacks large resources and spare ML teams. I'll show the free Roo Code AI assistant, as well as an unconventional approach to writing agents for feature evaluation, commit analysis, and requirements testing.

    • Talk

      Testing LLM Applications with DeepEval

This talk focuses on the practice of testing applications based on large language models (LLMs) using the DeepEval tool. It covers automating quality assessment with the LLM-as-a-Judge approach, specialized metrics, and integrating testing into the development process to ensure reliable, predictable system behavior.

  • Tools/Frameworks (5 talks)
    • Talk

      Implementing AI in Small Companies

We will discuss how to adopt AI in a company that lacks large resources and spare ML teams. I'll show the free Roo Code AI assistant, as well as an unconventional approach to writing agents for feature evaluation, commit analysis, and requirements testing.

    • Talk

      Bug Reporting Automation With a Browser Extension

      The classic "how to reproduce this?" dilemma has been resolved! This talk introduces my browser extension that handles action recording, captures errors, and helps to produce bug reports automatically — no more manual note-taking or relying on memory.

    • Talk

      Testing LLM Applications with DeepEval

This talk focuses on the practice of testing applications based on large language models (LLMs) using the DeepEval tool. It covers automating quality assessment with the LLM-as-a-Judge approach, specialized metrics, and integrating testing into the development process to ensure reliable, predictable system behavior.

  • Best Practices (4 talks)
    • Talk

      Scaling AI in QA at Yandex: Measuring the Impact of LLMs Across 1,000+ Engineers

      How do you measure the real impact of GenAI in testing at the scale of 1,000+ QA engineers and separate actual value from the “wow” effect? We’ll share the metric framework and results from real implementations: LLMs for generating test documentation, accelerating E2E automation, and AI agents running regression tests.

    • Talk

      Testing Biometrics

      Testing the Smile Payment service: a security review and a hunt for vulnerabilities in the biometric system.

    • Workshop

      How an Analyst Can Help a Tester and How a Tester Can Help an Analyst

Errors occur at every stage of software development, and the sooner we find them, the cheaper they are to fix. In this workshop, testers will learn how to spot errors in requirements before development begins, reducing the number of potential defects and the time and effort the team spends fixing them.

  • Hardware (2 talks)
  • Mobile (2 talks)
  • Automation (2 talks)
  • Load Testing (1 talk)
  • Security (1 talk)
    • Talk

      Autotests as an Entry Point for Attacks: You Won't Even Notice

Poorly secured test repositories are not a future threat that will materialize sooner or later; they are a current, actively exploited security gap. And as production-code defenses harden, this gap will only widen, making attacks on tests more attractive to intruders. The question is not whether an attack will happen, but when it will affect your company.

  • GameDev (1 talk)
  • Off Topic (4 talks)