The Talos Principle 2 graphics analysis

The Talos Principle was an incredible game, with cleverly designed puzzles and a gripping sci-fi story with mature philosophical themes, so it’s hard to overstate how excited I was about news of the sequel. Just a few days ago, Croteam suddenly dropped a PC demo of the sequel, featuring a shortened intro sequence along with two of the game’s locations. Lots of people have tried it, and I think by now it’s safe to say the general consensus is that the game is incredibly promising, with huge upgrades in visuals, variety and scope. A lot of people experienced graphical issues, though. I’ve got some experience with realtime graphics, so I decided to look deeper into this.

What’s the point, though? It’s easy to say that “graphics don’t matter”, and enjoy what’s there. The issue isn’t that the game is ugly, because it’s obviously not. The problem is that TTP2 is a game of careful observation and analysis, and the visual effects used by the game constantly distract a player who’s focused and highly sensitive to small details – the boiling and shifting pulls attention away from genuine content, breaking the immersion and integrity of the experience.

In the analysis below, I’ll try to explain all the terms I use in a way that makes sense to an average gamer.

Engine

Let’s begin, then. Most people probably know this, but the game is made in Unreal Engine, which came as something of a disappointment to a lot of people who were impressed by the rendering quality, reliability and performance of the Serious Engine. However, this decision was definitely not made lightly. TTP2 is a much more ambitious project than anything Croteam has made in the past, and their ideas would require huge engineering work on Serious Engine to support them. Some of the more obvious ones are:

  • NPCs animated via inverse kinematics,
  • Smooth facial movements synchronized with the voice acting,
  • Fully dynamically lit environments,
  • Asset streaming to eliminate travel loading times and reduce VRAM usage,
  • Extremely optimized rendering of high object and triangle counts.
All the reliefs on this wall are not a trick like parallax mapping, but actual geometry!

What most likely happened is that Croteam did the good ol’ cost-benefit analysis and decided that switching to an engine which already has the features they need to realize their vision was the better option. Please don’t judge them negatively for this, whether you ultimately agree with their decision or not. Making engines is an insanely time-consuming task, and without this switch we’d get TTP2 maybe 5 years later – assuming they wouldn’t go under after this long without a release.

Global illumination

The elephant in the room. GI is the effect responsible for deciding how bright each part of the map is, and it also handles transfer of light as it bounces off walls to “color” the areas around them. Up until a few years ago, realtime GI was a pipe dream, because doing it accurately was prohibitively expensive. Instead, the brightness info was precalculated during development into a “light map”, which was then read by the game to know how lit each area was. As you might suspect, this meant that lighting conditions could not change at runtime. A change in the geometry wouldn’t update the overall lighting of the area. Even something as simple as turning a light on or off would require baking two versions of the lightmap, introducing delays in development.

So, moving GI to realtime is an extremely alluring prospect to developers. They can simply design the levels and place the lights, and everything is just lit correctly. Any part of geometry can be moved, lights can turn on/off or even move, and the lighting updates with them. It not only makes development faster, but unlocks new ideas for highly dynamic environments that weren’t possible before. That’s the idea, at least.

GI on Ultra (no RT): even when it works as intended, it takes a while for the light to appear and disappear from the surfaces it illuminates

TTP2 went for exactly this – the GI is always dynamic, there is no “Off” option. Most likely the option couldn’t even exist, because there are no offline lightmaps to go off of. Unfortunately, the techniques used by the game are all highly approximate and exhibit a lot of artifacts. As far as I can tell, there are three GI code paths in the game:

  1. Low and Medium use monochrome rays bouncing around the map to determine how hard it is for light to reach each surface. This produces a highly noisy result, so a denoiser has to be used to smooth it out. The denoiser used by the game is not very good – instead of averaging out the errors, it leaves a party of large black dots dancing in any crevice, and even in some passages wide enough to walk through.
  2. High and Ultra without an RT-capable GPU use some kind of voxel grid for a monochrome estimate of area light levels, with screenspace sampling for color. This causes very strong light leaking, making areas change brightness depending on how you look at them. The color transfer has no concept of transparency, so light often ends up in completely nonsensical places. The denoising is still low quality and exhibits constant boiling.
  3. High and Ultra with an RT-capable GPU is largely similar to the above, but some passes use RT to be less approximate. The issues are reduced but not removed. This is the most favorable option in the game so far.
GI on Medium: boiling on a passage wall
GI on Medium: dancing dots in a crevice
GI on Ultra (no RT): light leaking through walls caused by the shifting voxel grid. (This is now fixed!)
GI on Ultra (no RT): emitter light boiling and going to different places depending on orientation

This appears to be consistent with Unreal Engine’s Lumen GI, though I wouldn’t know if it’s customized in any way. It’s particularly strange that some settings in the game have different results depending on whether RT is available. This means that two people with the exact same settings will see different results if their hardware is different enough. It would be much preferable if the RT options were separate, and greyed-out if the hardware doesn’t support them.

There is unfortunately no setting that makes the image stable and free of distracting artifacts. Some improvements could be made by customizing the denoiser and engaging RT harder when available, but ultimately Croteam just tried punching above their weight here. Their idea of a fully dynamically lit world can’t be realized well with current technology, unless you’re someone like CD Projekt RED who gets to collaborate with NVidia directly and use their state-of-the-art proprietary acceleration algorithms, and even then only the strongest gaming PCs available today could run that.

Screen-space reflections

This effect doesn’t really need an introduction. If you’ve played any game within the last 10 years, you’ve noticed the weird reflections that disappear if you look downwards. We all know it’s not good, but there just aren’t any good alternatives without RT – other than cubemaps, which are basically just “something” remotely believable for the reflection to be, with no basis in what’s actually there. And, again, they would need to be precalculated like lightmaps, which is a bother during development. TTP2 has no option to engage RT for reflections, and if it did, it would be really expensive. SSR is only aware of solid geometry – it doesn’t know about transparent things. It’s obvious in examples like this:

Note how the laser is sometimes reflected in the puddle and sometimes not. In reality, the laser is never reflected – the effect just reflects the wall, and sometimes it thinks it’s green

Unfortunately, the game is filled with lasers and semitransparent gates. Personally, I’d much prefer an option to disable SSR entirely and rely on cubemaps, mostly because the game’s visual design is particularly incompatible with it.

Emitter – reflected! Laser – not

Antialiasing

AA is the longest-running unsolved problem in all of realtime graphics. The current state of the art is temporal upscaling (TAAU), which most people playing games have probably heard of – mostly from the DLSS vs FSR debacle. To Croteam’s credit, they offer the entire gamut of current TAAU implementations:

  1. TAAU: Unreal Engine’s old algorithm. Low quality.
  2. FSR: AMD’s algorithm. Medium quality, optimized for AMD GPUs.
  3. DLSS: NVidia’s algorithm. High quality, available only for NVidia GPUs.
  4. XeSS: Intel’s algorithm. Medium quality, optimized for Intel GPUs.
  5. TSR: Unreal Engine’s new algorithm. Medium quality, optimized for console hardware (AMD).
TSR provides a smooth and temporally stable image on all hardware

Their implementation is not perfect, but good. Ghosting is weak to nonexistent, fine details don’t flicker much. Some transparency issues could be improved by providing better masks to the algorithms.

My recommendation is to use DLSS if you have an NVidia GPU, XeSS if you have an Intel GPU and TSR if you have an AMD GPU. (I found TSR to be more stable on disocclusion and less oversharpened than FSR, while also being better at preserving thin details.) Set all other settings to Ultra, and upscaling preset to Balanced. If framerate is low, lower other settings. If framerate is very high, change upscaling preset to Quality or even Native.

Even DLSS is not perfect, though

Verdict

TTP2 appears to be the result of extreme passion and effort, somewhat bogged down technically by an overly ambitious graphical vision, unfamiliarity with the engine and possibly a looming deadline (Who still says “2023” until two months before the year ends…?) It will never look as good as the promo screenshots in every situation you can get yourself into, but the artifacts that are there can be mitigated to some degree, and I sincerely hope this will happen for the main game’s release, or in post-release patches. Either way, I can’t wait to dive in when it releases on November 2nd.

C++23: deducing “this” into Rust traits

C++23 is here, and compiler developers are slowly metabolizing the standardese into something we can use. Bless these poor souls. One of the main highlights of this release is a language feature called “Explicit object parameters,” which is a traditionally boring way to say that the horrible mess known as the this keyword is finally on its way out. The solution they came up with not only cleans up the syntax of the typical uses of this, but allows for brand new constructs that help tie up loose ends introduced by earlier language features, deduplicate code and untangle template spaghetti. A creative use of the “explicit object parameter” (ugh) can even implement Rust-style traits with no boilerplate! We’ll get there, I promise. Most of the sections below will build up to why the explicit object parameter (EOP? anyone?) is useful, in case you have only a passing familiarity with C++ – experts are advised to skip ahead at the risk of a nagging feeling of condescension.

Stating the warts

C++ is not an elegant language[citation needed].

class Ghost {
    int peekaboo = 5;
public:
    void setToEight(int peekaboo) {
        peekaboo = 8; // (1)
    }

    void setToEightAgain() {
        peekaboo = 8;  // (2)
    }
};

Lines (1) and (2) look identical. However, (1) changes the value of the function parameter and (2) changes the value of the class member! This is because when an identifier doesn’t refer to anything in the current scope, the compiler checks whether it happens to be a class member. Nobody likes this, it makes code ambiguous and needlessly dependent on context. To disambiguate, you can prepend this->:

this->peekaboo = 8;

Or add a prefix to class members:

class Ghost {
    int m_peekaboo = 5;
public:
    void setToEight(int peekaboo) {
        m_peekaboo = 8;
    }

Prefixes are ugly. this works, but why does it have to be a pointer? Probably some historical reason, oh well. And it’s still entirely optional – many people don’t like including it when it’s not required – and we didn’t eliminate the rule of implicit this->, so we’re still susceptible to bugs caused by accidentally referring to the wrong thing.

Public survey

How do other languages handle this? Let’s go back in time, and have a look at emulating OOP in C:

struct Ghost {
    int peekaboo;
};
void Ghost__setToEight(Ghost* this) {
    this->peekaboo = 8;
}

Pretty self-explanatory, the name lookup is trivial. If we wanted to pretend that function members exist, we could even keep a pointer to the setToEight() function inside the struct. Let’s see about Python:

class Ghost:
  peekaboo = 5
  def setToEight(self):
    self.peekaboo = 8

Alright, if a member function wants to modify the instance, it needs that self parameter. Um, Rust…?

struct Ghost {
    peekaboo: i32,
}
impl Ghost {
    fn setToEight(&mut self) {
        self.peekaboo = 8;
    }
}

More or less the same here, we just need mut because in Rust everything is immutable by default unless specified otherwise.

The majority of programming languages settled on making the this/self parameter explicit. It’s a little more typing, but name lookup rules become simpler and less magical, making code more readable at a glance.

You cannot escape this

C++ tries to hide the existence of this parameter, but with some more advanced features you need to be aware of its existence and reason about it anyway. One example of this is member function ref-qualifiers:

class Ghostbuster {
public:
    void bust() &;
    void bust() &&;
    void bust() const &;
    void bust() const &&;
};

What’s going on here? How can a function itself be const, or a reference? Why are multiple functions with the same name and argument list even allowed?

The answer is that these are the qualifiers of the implicit this. If the instance itself is const or an rvalue (for example, a temporary that was just returned from another function by value), name resolution chooses the function with the best matching qualifier. If you assume that this is a real parameter, it makes perfect sense why the overloads are allowed – the parameter lists are all different.

Ok, one more example. In order to make C++ more functional, you can #include <functional>. You then get a bunch of great features, like partial application:

#include <functional>

void frobnicate(int x, int y) {}

struct Spline {
    void reticulate(int z, int w) {}
};

int main() {
    auto frobnicateWithXOfEight = std::bind_front(&frobnicate, /* x = */ 8);
    frobnicateWithXOfEight(42); // x = 8, y = 42

    Spline spline;
    auto reticulateWithZOfEight = std::bind_front( // *record scratch* what do we do here?

std::bind_front takes an existing function and creates a functor (something that acts like a function, in that it can be called with brackets), “prefilling” some of the arguments starting from the left. Can we bind a member function, though? Do we use spline.reticulate or Spline::reticulate? How does it know the instance to be called on, can we provide it later? If only this parameter existed, and we could simply bind it like any other…

Spline spline;
auto reticulateWithZOfEight = std::bind_front(&Spline::reticulate, spline, 8);
reticulateWithZOfEight(42); // this = spline, z = 8, w = 42?

Wait a moment… This works. Again, C++ hides this parameter, and then asks you to pretend that it didn’t. With EOP (it won’t catch on, will it…), we can stop this charade.

Check your self

Take off that COVID facemask and greet our AI overlords, because we’re fast-forwarding to 2023. The future is now, and we can do this:

class Ghost {
    int peekaboo;
public:
    void setToEight(this Ghost& self) {
        self.peekaboo = 8;
        // peekaboo = 8; // (1)
        // this->peekaboo = 8; // (2)
    }
};

Really, that’s it. You just slap this before the first parameter, and it becomes the instance. You can make it Ghost const&, Ghost&&, whatever you want. As an added bonus, implicit member access like in (1) is now a compile-time error in any function where the explicit this is present! this itself cannot be used like in (2), either. They didn’t even have to do that for us, but they did. What a slam dunk of a feature.

It does feel kind of pointless to put in Ghost as the type every time though, it’s the only type that’s valid there anyway. What if we make it a template?

class Ghost {
    int peekaboo;
public:
    template<typename T>
    void setToEight(this T&& self) {
        self.peekaboo = 8;
    }
};

When you call the method, the type T gets correctly deduced to Ghost, and everything works the same. Don’t mind that &&, it’s there so that the function magically works for any reference type, due to C++’s arcane reference collapsing rules. You don’t really need to know the details.

If you prefer, you can also use the shorthand for templated function parameters:

class Ghost {
    int peekaboo;
public:
    void setToEight(this auto&& self) {
        self.peekaboo = 8;
    }
};

This is the form you’ll see most often in code that uses this feature.

With the type of this being explicit and controllable with a template, we can remove duplication of const/non-const versions of member functions, create recursive lambdas, or even pass this by value. I’m not going to be providing examples, since Microsoft did a better job than I would. I recommend having a look at this article before continuing to get a better grasp on EOP.

Done? Welcome back, time to do traits.

Superpowers

First, a quick primer on Rust traits. They are a kind of interface that a class can choose to implement – a named set of functionality. A trait called Meowable might require anything that implements it to have a meow() method. Traits can also provide a default implementation, which is used if the class doesn’t override it. Any random function can now require its arguments just to have a specific trait instead of locking down the type entirely. It’s a clean way of achieving polymorphism without the complexity cost of full-blown OOP. You can have a look at some examples in Rust docs.

This sounds a lot like C++ inheritance, doesn’t it? The trait is a base class, the required method is pure virtual, and then the function argument’s type is a reference to the base class. There’s one major difference, though – Rust traits used this way are entirely static dispatch. There are no vtables, no virtual calls, it’s all resolved at compile-time. (Rust does offer dynamic dispatch through trait objects, but it’s opt-in.) Like templates, without the templates.

This could of course be achieved in C++ before with enough template magic, but the art is making it readable enough that it’s a net positive for your code. Thanks to EOP (pronunciation guide: it sounds like a hiccup), this is now possible. Here’s a C++ trait:

class Doglike {
public:
    auto bark(this auto&& self) -> std::string requires false;
    auto wag(this auto&& self) -> std::string { return "wagwag"; }
};

Big whoop, it’s just a struct that uses EOP for its function members. bark() is a function that every trait haver must implement, and wag() has a default implementation so implementing it is optional. The only new bit there is requires false, we’ll explain that later. You’d think we’re just setting up an abstract class, but virtual is nowhere to be seen.

Now, let’s “implement” this “trait”:

class Fido:
    public Doglike
{
public:
    auto bark(this auto&& self) -> std::string { return "bark!"; }
};

This really couldn’t look any simpler, we’re just inheriting from it, and implementing the required method by copypasting the signature. We’ll analyze this usage code:

Fido fido;
std::print("{}\n", fido.bark());
std::print("{}\n", fido.wag());

It shouldn’t be too difficult to see why we can call wag(). The name is not found within the class but it’s found within the base class, so that one is used. Now with bark(), the name is found in both, so just like with normal inheritance, name resolution prefers the deriving class’s version. So far, so good. Nothing is different from standard inheritance yet, but now it’s time to invoke static polymorphism:

void barkTwice(std::derived_from<Doglike> auto dog) {
    std::print("{} {}\n", dog.bark(), dog.bark());
}

int main() {
    Fido fido;
    barkTwice(fido);
}

The X auto param syntax is a shorthand for constraining a template argument. auto dog would match any type, std::derived_from<Doglike> auto dog matches only types T where the concept std::derived_from<T, Doglike> is true – so, only classes that “implemented” the “trait”.

With inheritance, this function would accept a reference to Doglike instead. Then, to find Fido‘s overridden definition of bark(), the call would be a virtual call – the vtable of the class would be used to call the correct function. However, because this is instead a template, the argument dog is always its real type, in this case Fido. The lookup of bark() in the function is then the same as in the earlier usage code. We could change auto to auto& to accept whatever type is passed by reference instead of copying it in.

Now, let’s imagine what would happen if Fido never implemented bark(). At the call site, the compiler gathers a set of all possible overloads and templates that could be used, which in this case is only the declaration in Doglike. Then constraints are checked – much as in SFINAE, all candidates that result in invalid code are removed from the set. requires false is a constraint that simply fails immediately, so when the declaration in Doglike is removed from the set, the set is empty. We get a compiler error – out of all the functions that could be called, none of them are valid.

If requires false were removed, the code would actually compile! The call site would just be referring to some nonexistent specialization of the template, hoping for it to be implemented somewhere else. It’s not, so the linker gives us an undefined symbol error. It would be very much preferable to catch this at compile-time instead, so requires false is the equivalent of = 0 to make the function “pure virtual”.

Oh, and you might want to sugar up the concept a little:

template<typename T, typename U>
concept implements = std::derived_from<T, U>;

void barkTwice(implements<Doglike> auto dog) {

There are always caveats

Honestly, as often as C++ features come out of the oven half-baked, EOP comes surprisingly complete. It’s just a shame that it can’t be used in constructors or destructors, but that doesn’t affect the “trait” construct.

A small issue is that implementing a trait but not all of the required methods is not an error by itself – it only becomes one when the unimplemented method is called. This shouldn’t be too bad in practice, since any callsite that requires a trait is going to be calling that trait’s methods anyway.

Another problem, which this shares with a lot of modern C++ features, is that every function involved here – both the trait member functions and the functions that require a trait – is now a template, so its definition can’t be neatly hidden away in a .cpp file. This problem will of course be resolved by modules, where non-template functions and template functions can both be exported the same way. But… there is a problem. As of the time of writing:

  • The only compiler that supports EOP is MSVC.
  • The only compiler that supports modules (in a usable state) is MSVC.
  • MSVC doesn’t support both EOP and modules at the same time.

😔

Hello world! Glitch showcase

Spending long, frustrating hours writing complex graphics code and then finally seeing the results of your work come to life in front of you is the second best feeling in the world. The best feeling is, of course, admiring the flashy accidental art that happens when the code doesn’t quite work correctly. After every “mainline” post I’ll be sharing the wildest bugs I managed to capture around the same time as the events of the post. Enjoy! (Consider all videos of this series to have an epilepsy warning.)

Surprise edge detection
Whenever making any graphical change, make sure you have bloom enabled
A classic case of vertexplosion
Oh. Okay
Mind your layouts, kids
They’re trying to summon Batman
Gameplay in the shadow realm
Do not dip your blocks in acid
Almost correct tetris
Glitch-as-you-gather
I don’t know what this texture is
Every vertex’s gone to the rapture
It’s at the bottom of the well, and it’s angry
I recommend not clearing your color buffers once in a while
Qbert says Hello world!
Bloom taken to the limits of floating-point
Party in the shadow realm
The prettiest image I have ever saved

Hello world! The story so far

Ever since my code started producing pretty cool bugs and glitches, I have been repeatedly asked to start documenting my misadventures in computer graphics. And finally, here it is – a freshly deployed blog, a blank slate, yet to be filled with the broken hopes and dashed dreams of software development. How exciting.

Post written to the introspective ambient dub of Idlefon

The plan is to produce a write-up of every interesting topic that I research while working on the Minote project. If I keep at it, hopefully I will catch up to the present before it runs too far ahead again. So, without further ado, let’s start with the part that all things start with.

Shapely beginnings

Minote was initially created with one specific goal in mind: reimplement the arcade game Tetris: The Grand Master with an easily accessible coat of modern UX design and attractive graphics. TGM is a videogame which should absolutely not be overlooked by anyone with an interest in game design, as it’s one that could legitimately be called a sequel to Tetris. The purity and elegance of the original ruleset is preserved, while the controls are refined to frame-precision and expanded with movement features that, past a certain speed, transform the classic puzzle into a fresh new challenge extending all the way to human limits and beyond. If at any point in your life you, through no action of your own, find yourself with a copy of MAME and the ROM of the game, make sure to check it out. A community-produced version called Shiromino is also a great option.

This is still relatively beginner-level gameplay!

The community around the game was quite small – it’s hard to get people to play a game which is not only notoriously difficult, but requires non-trivial effort to even start playing. Even after the first few plays, it can be discouraging to hit the initial wall of rapidly rising gravity. As is standard for any arcade game, no guidance is ever provided to help the player advance further. Instead, players are expected to talk among themselves to find the techniques and strategies that allow one to eventually reach the end. This seemed to me like a problem – a fixable one: we just needed a version of the game which makes it easy to get started, and provides the encouragement and progress tracking expected by players of modern games. The core of the game is timeless, and only needed an updated presentation. With this in mind, I opened my IDE for the first time in years.

At that time I had done a small bit of hobbyist programming, but had next to no knowledge of modern graphics, and the game absolutely needed to make use of the full power of the hardware to have visuals that would interest a casual gamer. I chose to go with OpenGL, mostly because of this brilliant tutorial, which explains not only the API but the basic graphics programming techniques and the theory behind them. After a few weeks, which extended into months, a prototype was ready.

Prototype release. Shiny!

By the end the game was a nearly frame-perfect recreation of the original, with a slick presentation, particles, and glorious bloom. The road seemed clear to add more features, polish it up, and prepare for full release. If you like what you’re seeing, feel free to give it a try!

So why not?

Around this time, it started feeling like the project was taking a bit too much effort for a game that’s just a copy of something someone else made. TGM is very well designed but not perfect, and during hours of testing (and not just playing it repeatedly because it’s so addictive), I came up with many ideas for potential changes which could make the game more intuitive, or at the very least shake things up a little. Implementing them would bring the game out of the questionably legal territory of being a clone. Considering The Tetris Company’s history of being litigation-happy and sending takedown requests to fan projects, this was not a comfortable position to be in. A new version was released which implemented some of these changes, like a unique randomizer, a more intuitive and symmetrical rotation system, and further improved animations and effects. Some of the more outlandish ideas, however, would require changing the game on a very fundamental level.


Another source of issues was the technical side. OpenGL is an API with a very long history, and despite best efforts to keep it updated with changes to GPU hardware, there is still a lot of ambiguity in the API specification, which results in code producing different results on different graphics drivers. Cross-vendor testing, and fixing all the bugs caused by a loose API specification and GLSL miscompilations, was starting to eat up a lot of development time. Something had to change, or I would never get out of bugfixing hell.

2nd attempt (ongoing) (actually 4th)

I’m no stranger to rewrites, and I’ve learned they’re never as rosy as they appear. Old problems go out, new problems come in. Instead of throwing out all the code, I decided to replace the renderer first, with one that could handle all the wild ideas of marrying visuals and gameplay. A properly modern one, in Vulkan, with PBR and GPU-driven rendering. It turned out that I quite enjoy graphics programming, perhaps more than I ever enjoyed making the game… It’s been almost 2 years, the renderer is still in progress, and so many weird and interesting things have come out of it. Friends in the wonderful Graphics Programming community, finding out which C++ features aren’t all that bad, the computer locking up with colorful stripes flashing on the screen. Modern graphics programming feels like the comeback of writing ultra-optimized code to extract the full potential of the raw hardware; there is minimal OS supervision, just you versus the GPU, a horrifying beast which can unleash incredible power should you manage to tame it. A completely different paradigm of thinking about problems, where even basic constructs like the “if” statement are executed differently on a fundamental level. It’s fascinating, it changes your thinking, and it made me rediscover the joy of programming.

But I’m rambling, and you don’t care about this. Minote is a work-in-progress renderer with some unique goals: fully procedural shading (no textures), none of the visible artifacts that we typically associate with the “videogame” look of CG, and very high instance counts; to be eventually used for videogame and digital art projects. Now that I think about it, I could’ve started with that and spared you the barely relevant leadup. Well, I hope you like reading about tetris.