Get involved in Repair Data with FaultCat! 😺

Get involved with community repair data in a simple online task - fault categorisation with FaultCat.

Have a go here: FaultCat :cat: or read on for more info :arrow_down:

Volunteers collect data about the devices that are brought into our repair events. The data is uploaded and shared as open data.

We held two data dives this year where other volunteers sorted repair records into types of faults. This helps us to report and visualise our data.

:bulb: We’d like to review and improve the categorisations - and this is where you can help!


FaultCat is a web app that collects opinions about the type of faults in computers brought to community events such as Restart Parties and Repair Cafés.

What do I do?

FaultCat loads a single random record that describes a faulty device brought along to a repair event. Here’s what you can do:

  • First, read the problem.
  • Then read the fault type.
  • If you agree with the fault type, press the ‘Yes / possibly’ button or ‘Y’ key.
  • If you are not sure, press the ‘I don’t know, Fetch another repair’ button or ‘F’ key.
  • If you think the suggestion is wrong, press ‘Nope, let me choose another fault’ or ‘N’ key and pick one of the fault type buttons. Then press ‘Go with [your new fault type]’ or ‘G’ key.

That’s it! Keep going through as many as you like - we think it’s a really fun way to discover the data. Remember - every single fault you see is an item brought to a repair event and looked at by a fixer. :hammer_and_wrench:

Why computers?

All kinds of small electrical and electronic devices are brought to repair events. Right now we are focused on computer repair data – desktops, laptops and tablets – because the EU is set to draft rules about repairability. The data we collect at repair events and through apps like FaultCat will provide policymakers with useful information.

Read more about why we collect data and our previous work on why computers fail.


Frequently Asked Questions

:question: Do I need an account?

No - you don’t need to sign up to play with FaultCat. But we’d love it if you did create an account and told us what you think of FaultCat. You can then also get involved in events, data collection and discussions about community repair.

:question: What if there’s not enough info to decide on a fault type?

Sometimes it can be hard to choose a fault type because there is not a lot of information recorded. The data has come from a lively, sociable community repair event where volunteers are busy trying to fix things and don’t always have the time to write down a lot of the detail.

Don’t worry if you can’t decide - just press “I don’t know”. It is in fact very useful for us to know where we lack information as we are looking for ways to improve our data collection.

:question: What do you do with my answers?

They are pooled together to see if we can reach consensus on the problem we saw during a particular repair attempt. Once we have a good level of confidence in the faults, we can use this information to complement existing knowledge on why things fail and what the barriers to repair are. This will feed into campaign work for the Right to Repair.

:question: This is cool! Any other stuff I can do?

We’re working on more repair data apps - stay tuned! In the meantime, there are lots of ways to get involved with our data work.

:question: What if I find something weird or have a suggestion?

Please share it in this discussion :slight_smile: :arrow_down: (you will need an account to do so)


We’ve had nearly 700 opinions given on faults so far, from 34 people - thanks, and keep them coming! The more opinions we get, the better consensus we can get on why devices have failed.

Share the FaultCat app around to help us get more opinions - or if you have any suggestions of places for us to share it, do let us know :slight_smile:


We have nearly 3000 attempted repairs on computers/tablets that you might see in FaultCat :hammer_and_wrench:

We thought it would be fun to try and get 9000 opinions on those repairs’ faults - an average of 3 opinions for each fault.

So far, adding in data from our previous FaultCat pilot, we are 19% of the way there!

:cat: :cat: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square:

Keep going!

(a stretch goal would be 5 opinions per fault…)
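(For the curious, the cat-o-meter is just simple arithmetic. Here’s a minimal sketch of how you could render one - the 9,000 target is from this thread, but the function itself is just made up for illustration:)

```python
# Minimal sketch of a cat-o-meter: render progress toward an opinions target.
# The 9000-opinion target comes from this thread; the function name is invented.

def cat_o_meter(opinions: int, target: int = 9000, cells: int = 10) -> str:
    """Return a 10-cell progress bar of cats and empty squares."""
    filled = min(cells, round(opinions / target * cells))
    return " ".join([":cat:"] * filled + [":white_large_square:"] * (cells - filled))

print(cat_o_meter(1710))  # ~19% of 9000 -> 2 cats, 8 empty squares
```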



Over 3000 now :muscle:

:cat: :cat: :cat: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square: :white_large_square:


Hey all, just tried out a few of these. We seem to be missing a category for stuff sitting “above” the operating system.

For example, “deleted multiple antivirus software and now it’s running fine!” – this isn’t the operating system; the problem was probably that they had several AV programmes running concurrently. Wouldn’t that need an “application” category? (I can only see “operating system”, and “configuration” is a bit too ambiguous.)



Thanks @Andrew_Olson for the feedback!

I think with that particular example, the fault could perhaps be regarded as ‘Performance’. But the categories are definitely open for discussion - this is a great benefit of having many eyes on the data :slight_smile:

For reference, here’s the existing list (which we should add to the FaultCat FAQ)


with some context in this blog post, particularly the ‘Lessons learned’ section:

What do others think?


To make FaultCat really useful, IMHO we should decide what we’re trying to achieve. There are several possible objectives, but they will drive different ways of categorising faults. Which of the following are we trying to achieve (we probably can’t manage all at once)?

  • Identifying diagnostic techniques that we need to develop or improve
  • Sharing our experience of faults we’ve come across, to make it easier for others to spot them
  • Identifying additional repair resources (diagnostic equipment, tools, spares) that we should have in our armoury
  • Sharing repair techniques
  • Developing strategies for keeping old computer equipment running, after the software is declared obsolete
  • Pressuring manufacturers to behave more ethically or environmentally
  • Making the case to regulators for tighter regulation of manufacturing
  • Making the case to regulators for fuller disclosure of service information

Each of these drives different ways of reporting faults. We should be asking repairers to record their experiences with one or two of these specific objectives in mind.


Chris’ post makes sense - what’s the objective of FaultCat?

I’d go for “Sharing repair techniques” + “Sharing our experience of faults we’ve come across”


FaultCat builds on the concept of the Open Data Dive events held in 2019. Perhaps the post-event blog post best sums it up:

The aim of Open Repair Data fault classification is not necessarily to pinpoint each exact fault cause, but to group repairs into streams that can be reported and visualised. We’ll be working with our policy partners in Brussels to explore this data further.

The other goals that you identify are all valid and are part of a general roadmap. We’re already testing pre-configured spreadsheets for data entry at repair events, analysing product categorisation and looking at ways to make sharing repair data easier.

FaultCat is our first experiment in online public involvement with repair data so we wanted to keep it really simple. It has generated a nice bit of public interest so far which feeds our motivation.

Feel free to start new threads to share ideas and suggestions for other tools. :slight_smile:


I’ve started playing with FaultCat (good distraction from the report I’m supposed to be writing :slight_smile:)

Anyway: some faults could be more than one thing, e.g. problems connecting to the wifi could be to do with the wifi hardware installed, or could be a configuration issue - and, as it happens with one of my tablets, it’s the operating system that doesn’t have the correct driver…

Is there a way to say fault A OR fault B?



If FaultCat is focused on root cause, then there should be a single option (i.e. the categorisation system that’s currently in place).

However, if FaultCat is there to take the repairer through a series of possibilities in order to find the problem, it would need a many-to-many relationship implemented (via tags?) so that multiple options could be selected.

P.S. I’m not sure how flexible the data model is on the backend of faultcat – is there a git repo?
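To illustrate what I mean by tags - purely a hypothetical sketch, not a claim about FaultCat’s actual data model - one opinion could carry a set of fault tags rather than a single fault type:

```python
# Hypothetical sketch of a many-to-many opinion model (not FaultCat's real schema):
# one opinion can tag a record with several fault types at once.
from dataclasses import dataclass, field

@dataclass
class Opinion:
    record_id: int
    user: str
    fault_tags: set[str] = field(default_factory=set)

# e.g. the wifi problem above could be hardware OR configuration:
op = Opinion(record_id=42, user="ruth1", fault_tags={"hardware", "configuration"})
print(sorted(op.fault_tags))
```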



:+1: :heart_eyes_cat:

Welcome Ruth! Currently we only want a single “main” opinion from each person. Some of the fault types are deliberately vague because the problem text is often very vague, so we’re expecting quite a lot of “unknown”, “performance” and “configuration” choices. We’re not trying to guess the exact cause, we’d just like to sort the records into buckets of fault “types”.

FaultCat collects a maximum of 5 opinions per record, plus it has an opinion from the data dive events for many of the records. Using these opinions we can get an idea of the most likely fault “type”.


Taking a repairer through a series of possibilities is in the domain of data entry rather than retrospective microtasking. It’s an area we started to look into last year with an ‘alpha’ trial of spreadsheet templates for a few interested groups.

FaultCat results, as well as all this feedback and discussion, will help to inform our thinking in regard to data entry at repair events. If we can improve data entry then tasks to retrospectively “fix” data won’t be so necessary.

There is a git repo on GitHub :slight_smile:


Approaching 60% now :slight_smile:

:cat: :cat: :cat: :cat: :cat: :cat: :white_large_square: :white_large_square: :white_large_square: :white_large_square:

(the stretch target is starting to look closer now too…)


Great to get the excellent feedback and discussion, @Chris_Moller , @ruth1 and @Andrew_Olson .

And thanks for zooming it out to the bigger picture - @Chris_Moller. You’re spot-on to flag all of the different use cases, and how/if to address all of them. With regards to the most immediate/current purpose of the work on FaultCat:

  • as part of the Right to Repair Europe Campaign, we’re aiming to bring citizen repair data to EU policymakers to influence future regulations, starting with computers
  • we expect there to be an opportunity to influence regulations of design for repairability for laptop computers in 2020
  • we’re collaborating with ECOS and other organisations to submit evidence based on our data - an initial paper will be out later this winter, with a more technical submission coming in the near future

This touches on the last three use cases you listed, Chris.

For this analysis, once we have a significant number of opinions on faults from FaultCat users, we’ll redo the fault analysis we did previously.

This is in two main areas. One, obviously, is looking at the patterns in the faults reported. The other is reflecting on the available categorisations themselves and whether they need refining - that includes quantitative analysis of where consensus was or wasn’t achieved on the opinions, and why; it also includes all of the excellent qualitative feedback you are giving, so keep it coming!


(Now, all of the other use cases @Chris_Moller listed are definitely part of our wider strategy and data collection - e.g. sharing repair techniques: when entering data into the Fixometer, a record can be flagged as potentially useful for the Wiki, and we recently had a Wikithon to go through these records and incorporate them. Or useful repair info can be logged against a repair attempt. This isn’t in FaultCat itself but is definitely part of the overall picture.

The challenge is always balancing the use case of the data against the overhead of capturing it in a busy volunteer environment. So there are a number of pieces there: looking at the different data to capture, for which purpose, making that clear to everyone, and not making it a confusing user experience!)

Thanks to everyone who replied to my query - it’s helping my understanding.


The cat-o-meter is at 70% :muscle:

:cat: :cat: :cat: :cat: :cat: :cat: :cat: :white_large_square: :white_large_square: :white_large_square:

Can we get to 9,000 opinions by the end of the month?


There seems to be a very wide spread in the quality of the data initially recorded. For example, some items have a very clear description of the fault (either as initially presented, or as diagnosed), followed by a description of what remedial action was taken; whereas some items simply say things like “Notebook, exact weight 1.5kg”, which is just weird. (I just went through about 50 entries in FaultCat and came across many that oddly specify how heavy the laptops are and nothing else.)

Why would that be? It can’t be a coincidence. Different people capturing different data, but somehow all failing to record any details of the problem - or the fix?
