Okay, let me walk you through this little experiment I ran, comparing what I call the ‘Draper’ way and the ‘Fritz’ way of predicting stuff. It wasn’t anything super scientific, just something I cooked up to see what would happen.

Getting Started
It all started when I was looking at some game results. Could be sports, could be anything really; the exact domain doesn’t matter much. I noticed people talk about fancy models, but I wondered if simpler approaches had any merit. So, I decided to test two methods side by side. I just named them ‘Draper’ and ‘Fritz’ for my own reference, kinda like giving nicknames to tools in the shed.
First step, I needed something to predict. I picked outcomes of local club matches over a few weeks. Simple enough, right? Yes or no, win or lose. Didn’t want to overcomplicate things from the get-go.
The ‘Draper’ Method
For the ‘Draper’ method, I tried to be a bit more systematic, you know? I gathered some basic past data – like the last few game scores for each team. Nothing too deep, just recent history. My thinking was simple: teams that have been winning might keep winning.
- I looked at the last 3 matches for each team.
- Counted the wins and losses.
- Made a prediction favouring the team with the better recent record.
- If records were tied, I just flipped a coin, metaphorically speaking. Had to make a call somehow.
I wrote down these predictions in my notebook. Felt kinda methodical doing it this way, even if the ‘method’ was super basic.
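Just to make the ‘Draper’ rule concrete, here’s roughly what it looks like written out as code. This is only a sketch of the rule as I described it above; the function name, the 'W'/'L' encoding, and the example records are mine for illustration, not anything pulled from the actual notebook.

```python
import random

def draper_pick(recent_a, recent_b, team_a="Team A", team_b="Team B"):
    """Pick a winner based on each team's last few results.

    recent_a / recent_b are lists of 'W'/'L' strings for the last 3
    matches of each team (order doesn't actually matter here).
    """
    wins_a = recent_a.count("W")
    wins_b = recent_b.count("W")
    if wins_a > wins_b:
        return team_a
    if wins_b > wins_a:
        return team_b
    # Tied recent records: the metaphorical coin flip.
    return random.choice([team_a, team_b])

# Example: Team A won 2 of its last 3, Team B only 1.
print(draper_pick(["W", "W", "L"], ["L", "W", "L"]))  # -> Team A
```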
The ‘Fritz’ Method
Now, ‘Fritz’ was different. This was more about gut feeling and simple rules of thumb, less about the hard numbers. It was more like how you’d chat about it down the pub.
My process here involved things like:
- Thinking about team rivalries – sometimes history matters more than recent form in grudge matches.
- Considering if a key player was out injured (if I happened to hear about it).
- Just a general ‘feel’ for which team seemed more motivated or ‘up for it’.
Yeah, pretty subjective. I jotted these predictions down too, right next to the ‘Draper’ ones. Sometimes they agreed, sometimes they were wildly different. That was the interesting part.
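I never wrote ‘Fritz’ down as actual rules; it all lived in my head. But purely as an illustration of how those rough factors could be bolted together into something you can compare against ‘Draper’, here’s one hypothetical way to do it. Every name, weight, and number below is made up.

```python
def fritz_pick(team_a, team_b, rivalry_edge=0.0, injury_edge=0.0, motivation_edge=0.0):
    """Combine the gut-feel factors into one overall lean.

    Each *_edge is a hand-waved number from -1 (favours team_b)
    to +1 (favours team_a). The weights are completely invented;
    in practice I just went with the feel on the day.
    """
    lean = 1.0 * rivalry_edge + 1.5 * injury_edge + 1.0 * motivation_edge
    return team_a if lean >= 0 else team_b

# Example: no strong rivalry angle, the other side missing a key player,
# and my side seeming slightly more up for it.
print(fritz_pick("Rovers", "United", rivalry_edge=0.0, injury_edge=0.5, motivation_edge=0.3))
```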
Tracking and Comparing
So, week after week, I collected the actual results of the matches. Then came the moment of truth: comparing my predictions. I marked each prediction as right or wrong for both ‘Draper’ and ‘Fritz’.
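The bookkeeping itself is nothing more than tallying right and wrong answers per method. Here’s a minimal version of that tally with made-up picks and results; my real numbers aren’t reproduced here.

```python
def score_predictions(predictions, results):
    """Work out each method's hit rate.

    predictions: list of dicts like {"draper": "Rovers", "fritz": "United"}
    results: list of actual winners, in the same match order.
    """
    totals = {"draper": 0, "fritz": 0}
    for pred, winner in zip(predictions, results):
        for method in totals:
            if pred[method] == winner:
                totals[method] += 1
    n = len(results)
    return {method: hits / n for method, hits in totals.items()}

# Toy example, not my actual tracking data.
preds = [
    {"draper": "Rovers", "fritz": "Rovers"},
    {"draper": "United", "fritz": "Albion"},
    {"draper": "Rovers", "fritz": "United"},
]
actual = ["Rovers", "Albion", "Rovers"]
print(score_predictions(preds, actual))  # -> {'draper': 0.66..., 'fritz': 0.66...}
```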

It wasn’t a landslide victory for either, let me tell you. The ‘Draper’ method, the data-driven one, was maybe a bit more consistent. It got a decent number right, especially when one team was clearly on a roll.
But ‘Fritz’, the gut-feel method, had its moments too! It occasionally picked upsets that ‘Draper’ missed completely. Those were the surprising ones. It felt like sometimes, the ‘story’ or the ‘feeling’ around a game actually mattered more than the raw stats from the last couple of matches.
Final Thoughts
At the end of my little tracking period, ‘Draper’ had the higher success rate overall, though not by much. It was steadier, less likely to make really bad calls, but also less likely to catch those surprising wins.
‘Fritz’ was more volatile. More wrong predictions, sure, but also some predictions that were spot-on when the numbers suggested otherwise.
So, what did I learn? Well, for me, it showed that even simple data can give you a slight edge and make things a bit more predictable. But relying only on basic stats means you miss out on the unpredictable stuff, the human element or context that ‘Fritz’ sometimes captured, even if less reliably. Neither method was perfect, but doing this little exercise was quite revealing. Just goes to show, sometimes the simplest tests teach you something valuable. It was a fun process, just observing and recording, seeing how things played out.