The new best way to train your aim.
Guide

Viscose

Everybody grinds Benchmarks in aim trainers. What if we made that into the optimal practice method?

#aim-training #benchmark #viscose

Play the Benchmarks here!

Watch on YouTube

The State of Benchmarks

If you’ve been active in the aim training community at any point over the past… 4 years or so, something like the Voltaic Benchmarks should be pretty familiar to you. Going back as far as I can remember, these have been considered the default way to test your skill when training. Over time they have taken on a somewhat different role though, with the benchmarks themselves becoming the training tool, not just something to test your skill with.

I imagine a lot of you are probably expecting me to say something about how awful that is and how grinding the benchmarks is bad for improvement. It certainly can be, but I don’t think it has to be.

The Voltaic benchmarks have continued to be iterated upon over the years, but there was never any question about what they were designed to do.

They were to test aiming skill in a way that paints the most complete picture possible, something they have become very good at.

Other benchmarks have been made over the years too, but in my experience most of them are either structurally identical to the Voltaic benchmarks or hyperfocused on a specific category. That’s not to say all of these are bad, but I wanted to try something a little different.

Turning a problem into the solution.

So, I made the Viscose Benchmarks, and the idea they started from was “what if a benchmark was designed to be a training tool?”

My goal was to essentially trick people into playing scenarios that are good for practice, instead of just good for evaluating your own skill level.

What this means is that they might not be as consistent as Voltaic’s benchmarks or cover as many areas of aim, but instead they focus on the scenarios and categories that I have found most useful to my own personal aim development no matter what game I’m playing. The score targets are slapped on as a motivator but they aren’t really the point.

The goal here instead is to develop strong fundamentals and good technique, which will transfer over to any game. You can play other game-specific training methods on top of using these to benchmark if you’d like, but I have tried to cover everything here.

If you just want to give them a try now, there’s a link to the spreadsheet above; in the top left you can click File -> Make a copy and you’re good to go!

But I want to address how these actually differ from Voltaic’s benchmarks and how those differences better facilitate improving your technique.

Categories

I think the biggest factor is the categorisation of scenarios.

Voltaic’s benchmarks primarily focus on the type of aim involved, with equal numbers of scenarios for each type. The thing is, the main three types of aim are not equally important for in-game translation, and some very important scenario types like pokeball are absent since they don’t really fit into any of these categories. Plus, in my opinion, target switching is fairly overrepresented. Like… what unique skill does driftTS train?

What I’ve done instead is distribute the scenarios unevenly based on what I personally have found the most useful for in-game translation, and split those scenarios into more technique-focused categories.

Instead of categorising tasks based on what they are, I group them based on what they do.

There is no “target switching” category anymore; those scenarios all fall under “flicking technique” and are subdivided by which area of your flicks they focus on. The same goes for control tracking, which is split by which muscle group the scenario targets. This both gives you an idea of how to approach each scenario and tells you which area you’re weak in if you are struggling with specific ones.

For example, a lower relative score on the fingertip subcategory in control tracking doesn’t just tell you something general like “work on tracking”; it points very specifically to a weakness in fine control over small movements, either with your fingertips or wrist depending on grip and playstyle.

And while I have designed the benchmarks in a way where grinding the scenarios you are weak at will help address those weaknesses, if you want to vary your practice more you can pick out extra scenarios to help isolate the issue.

The way I like to do this is just searching for the name of the base scenario to find variants. For example, cloverrawcontrol: typing that in brings up heaps of versions of the scenario with slight variations (slower, faster, larger, added blinks), and each change helps work on different things. You can also check out other benchmarks on this website that have narrower focuses, or look through playlists or Voltaic’s recommended scenario list for tasks that will help you improve in areas you are weak in.

How are these meant to be played?

Speaking of playlists, I’ve heard that a lot of people play these benchmarks as if they were one, and that isn’t something I personally recommend. You can if you’d like, and I’m sure it’s still beneficial, but in my opinion:

The value in benchmarks is that they make it easy for you to motivate yourself to push yourself to improve category by category.

I approach every task as a test of optimisation: what am I doing wrong from a technique perspective that this task is forcing me to improve on? The most recent example for me was pasu; I wasn’t leading my shots properly, and trying to push my score further forced me to improve my technique.

This is, to me, what makes benchmarks better than playlists and why I chose to make this list of scenarios into a Benchmark instead of just posting it as a playlist.

Encouraging you to look at your score on each scenario, set a goal, and figure out what is preventing you, technique-wise, from reaching it is why I think benchmarks have become such a good tool for improvement; until now, they have only been held back by being created purely to be benchmarks. This process prevents you from autopiloting and forces you to actively analyse your own improvement, which is exactly what I wanted when making these, and it dictated every decision from subcategory names to rank difficulty.

The Ranks

Rank Difficulty

On that note, the most common question I’ve been asked about this benchmark is, “why is X rank so hard if it is on the easier benchmarks?” A lot of people have told me that “fuchsia is way, way harder than wool, even though fuchsia is classified as medium and wool is hard”, and they’re right! This is an intentional design change from Voltaic, and actually something I suggested to them during the development of season 5.

I think the best way to think of it is that these terms: easier, medium and hard, represent the difficulty of the scenarios included in these benchmarks, NOT the skill required for any given rank on those benchmarks. There will be players who I would consider advanced who can’t get the top rank on the medium scenarios here and that isn’t just to make these benchmarks really hard for no reason. (EDITOR’S NOTE: shade is indeed being thrown)

Rank Overlap

The Voltaic Benchmarks have a bit of a problem where the jumps between difficulties are so drastic that progression feels completely insurmountable for a lot of people. When trying to go from Master to Grandmaster, you need to not only get a better score but get that score on a significantly harder scenario.

I think this is why Master became cited as the cutoff point where aim trainers lose their value. In my opinion that isn’t true; it’s just that the time investment required per rank grows so much after this point, partially because you have to get those scores on incredibly difficult scenarios.

What I’ve done to try and alleviate this issue is to have roughly 2 ranks of overlap for each difficulty. So, ordering the ranks in terms of how hard they are supposed to be looks something like this:

A chart displaying 3-4 ranks of overlap between difficulties.

This means that while someone is climbing to the higher ranks on the medium scenarios, for example, they can try out the harder ones and still have scores to shoot for there.

In my opinion, less challenging scenarios always have their place in aim training, no matter how good you get.

I think of easier scenarios as refinement of your technique and harder scenarios as a stress test of that technique; you really should be playing both, and this isn’t something conventional benchmarks encourage.

Once you are done with Intermediate on Voltaic, you move on to Advanced, whereas here it’s highly encouraged to keep playing the set of benchmarks below the highest set you can get scores on. I’m not too far off Silk Complete on these, and I still get a lot of value from playing the Medium difficulty scenarios.

The Number of Ranks

This is also a part of the reason why there are so many ranks here, but not the only reason.

Another goal I had with these was to make starting out on them more approachable than Voltaic: I want most people who are brand new to aim training to be able to get a rank in a couple of tries. This means there are a lot more ranks for the easier scenarios than Voltaic has, and since Medium overlaps with both the Easier and Harder scenarios, there are plenty of ranks there too.

As for the ranks themselves, I’ve strayed away from Voltaic’s contemporary energy system and gone back to their old system of requiring one score per subcategory to get a rank. My reasoning for this is twofold: making people play a wider range of scenarios will naturally lead to them becoming more well-rounded, and it also allows the individual score targets to be lower for each.

Since you can get a rank in Voltaic with only 3-4 scores at that rank in total, the individual scores are going to be a grind. Here, by contrast, you need at least 14 scores at a certain rank to achieve that rank, meaning even if I want Fuchsia to be equal to Voltaic’s Nova as a rank, the Fuchsia score targets can be much more reachable, which I personally find more motivating.

Closing Notes

I think that’s everything I have to say about the benchmarks themselves, but if you’d like to know more about how to play each scenario specifically, I’m working on adding little notes to each scenario, viewable when you click it, with a link to a VOD as well as a small writeup. If you want more information, I have a video up on my second channel talking through similar things with my own and Matty’s gameplay to demonstrate what proper technique looks like for each.

Also, thank you so much to pinguefy (omg hi i also formatted this :3) for making the spreadsheet for this. A lot of my stupid ideas, like bringing back Volts or having seemingly random numbers of scenarios per subcategory, would not have worked at all using a benchmark template, and I love the rank names and colour scheme, which are all pingu’s doing.
