Get involved in Repair Data with FaultCat! 😺

I’ve started playing with FaultCat (a good distraction from the report I’m supposed to be writing :slight_smile: )

Anyway: some faults could be more than one thing. For example, problems connecting to the wifi could be down to the wifi hardware installed, or could be a configuration issue – and, as it happens with one of my tablets, it’s the operating system that doesn’t have the correct driver…

Is there a way to say fault A OR fault B?



If FaultCat is focused on root cause, then there should be a single option (i.e. the categorisation system that’s currently in place).

However, if FaultCat is there to take the repairer through a series of possibilities in order to find the problem, it would need a many-to-many relationship (implemented via tags?) so that multiple options could be selected.
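A many-to-many link like that is usually modelled with a join table between records and fault types. Here’s a minimal sketch in Python/SQLite – the table and column names are invented for illustration, as I don’t know FaultCat’s actual schema:

```python
import sqlite3

# Hypothetical many-to-many fault tagging model: one repair record can
# carry several fault-type tags, and each tag can apply to many records.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE record (id INTEGER PRIMARY KEY, problem TEXT);
CREATE TABLE fault_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE record_fault (          -- join table: the many-to-many link
    record_id INTEGER REFERENCES record(id),
    fault_id  INTEGER REFERENCES fault_type(id),
    PRIMARY KEY (record_id, fault_id)
);
""")

conn.execute("INSERT INTO record VALUES (1, 'tablet cannot join wifi')")
conn.executemany("INSERT INTO fault_type VALUES (?, ?)",
                 [(1, "wifi hardware"), (2, "configuration"),
                  (3, "operating system")])
# The same record can be tagged with fault A OR fault B (or both):
conn.executemany("INSERT INTO record_fault VALUES (1, ?)", [(1,), (2,)])

tags = [name for (name,) in conn.execute(
    "SELECT name FROM fault_type f JOIN record_fault rf "
    "ON f.id = rf.fault_id WHERE rf.record_id = 1 ORDER BY f.id")]
print(tags)  # ['wifi hardware', 'configuration']
```

With this shape, “fault A OR fault B” is just two rows in the join table rather than a schema change.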

P.S. I’m not sure how flexible the data model is on the backend of faultcat – is there a git repo?



:+1: :heart_eyes_cat:

Welcome Ruth! Currently we only want a single “main” opinion from each person. Some of the fault types are deliberately vague because the problem text is often very vague, so we’re expecting quite a lot of “unknown”, “performance” and “configuration” choices. We’re not trying to guess the exact cause, we’d just like to sort the records into buckets of fault “types”.

FaultCat collects a maximum of 5 opinions per record, plus it has an opinion from the data dive events for many of the records. Using these opinions we can get an idea of the most likely fault “type”.
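Getting “an idea of the most likely fault type” from up to five or six opinions could be as simple as a majority vote. A toy sketch – this is my guess at the idea, not FaultCat’s actual aggregation code:

```python
from collections import Counter

def likely_fault(opinions):
    """Pick the most common fault "type" from the collected opinions.

    Returns the winning label and whether it had a strict majority;
    a tie or a lone opinion still yields a label, but carries less weight.
    """
    counts = Counter(opinions)
    label, votes = counts.most_common(1)[0]
    return label, votes > len(opinions) / 2

label, majority = likely_fault(["screen", "screen", "power",
                                "screen", "unknown"])
print(label, majority)  # screen True
```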


Taking a repairer through a series of possibilities is in the domain of data entry rather than retrospective microtasking. It’s an area we started to look into last year with an ‘alpha’ trial of spreadsheet templates for a few interested groups.

FaultCat results, as well as all this feedback and discussion, will help to inform our thinking in regard to data entry at repair events. If we can improve data entry then tasks to retrospectively “fix” data won’t be so necessary.

There is a git repo on GitHub :slight_smile:


Approaching 60% now :slight_smile:

:cat: :cat: :cat: :cat: :cat: :cat: :white_large_square: :white_large_square: :white_large_square: :white_large_square:

(the stretch target is starting to look closer now too…)


Great to get the excellent feedback and discussion, @Chris_Moller , @ruth1 and @Andrew_Olson .

And thanks for zooming it out to the bigger picture - @Chris_Moller. You’re spot-on to flag all of the different use cases, and how/if to address all of them. With regards to the most immediate/current purpose of the work on FaultCat:

  • as part of the Right to Repair Europe Campaign, we’re aiming to bring citizen repair data to EU policymakers to influence future regulations, starting with computers
  • we expect there to be an opportunity to influence regulations of design for repairability for laptop computers in 2020
  • we’re collaborating with ECOS and other organisations to submit evidence based on our data - an initial paper will be out later this winter, with a more technical submission coming in the near future

This touches on the last three use cases you listed, Chris.

For this analysis, once we have a significant number of opinions on faults from FaultCat users, we’ll redo the analysis of faults we did previously.

This covers two main areas. One, obviously, is looking at the patterns we’ve seen in the faults reported. The other is reflecting on the available categorisations themselves, and whether they need refining – that includes quantitative analysis of where consensus was or wasn’t achieved on the opinions, and why; it also includes all of the excellent qualitative feedback you are giving, so keep it coming!
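For the quantitative side, one simple per-record consensus measure is the fraction of opinions backing the most popular fault type. Purely illustrative – I don’t know what metric the team actually uses:

```python
from collections import Counter

def agreement(opinions):
    """Fraction of opinions that back the most popular fault type.

    A rough consensus score for a record: 1.0 means everyone agreed;
    values near 1/len(opinions) mean no consensus was reached.
    """
    counts = Counter(opinions)
    return counts.most_common(1)[0][1] / len(opinions)

print(agreement(["power", "power", "power", "power"]))     # 1.0
print(agreement(["power", "screen", "unknown", "power"]))  # 0.5
```

Records scoring low on a measure like this would be the natural candidates for asking “was the categorisation unclear here, and why?”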


(Now, all of the other use cases @Chris_Moller listed are definitely part of our wider strategy and data collection – e.g. sharing repair techniques: when entering data into the Fixometer, a record can be flagged as potentially useful for the Wiki, and we recently had a Wikithon to go through these records and incorporate them. Or useful repair info can be logged against a repair attempt. This isn’t in FaultCat itself, but is definitely part of the overall picture.

The challenge is always balancing the use case of the data against the overhead of capturing it in a busy volunteer environment. So there are a number of pieces there around looking at the different data to capture, for which purpose, making that clear to everyone, and not making it a confusing user experience!)

Thanks to everyone who replied to my query – it’s helping my understanding.


The cat-o-meter is at 70% :muscle:

:cat: :cat: :cat: :cat: :cat: :cat: :cat: :white_large_square: :white_large_square: :white_large_square:

Can we get to 9,000 repairs reviewed by the end of the month?


There seems to be a wide spread in the quality of the data initially recorded. For example, some items have a very clear description of the fault (either as initially presented, or as diagnosed), followed by a description of what remedial action was taken; whereas others simply say something like “Notebook, exact weight 1.5kg”, which is just weird. I just went through about 50 entries in FaultCat and came across many that state only how heavy the laptop is and nothing else.

Why would that be? It can’t be a coincidence that different people captured the weight but somehow no details of the problem – or the fix.


Maybe some groups have been doing a ‘weigh in’ when the device is first presented? The total weights of prevented e-waste are calculated from category averages rather than this unstructured data, but there is a field for exact weight in the schema.

If groups have the capability to measure exact weight, should we encourage that to be recorded and used in the totals calculation rather than the category average? And weights that are in the description could be moved into the weight field as a data clean-up.
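That clean-up could probably be scripted – pulling “exact weight 1.5kg”-style fragments out of the free-text problem field into a numeric weight value. A rough sketch; the regex pattern and return shape are my assumptions, not an existing Fixometer tool:

```python
import re

# Matches fragments like "exact weight 1.5kg" or "exact weight 2,1 kg"
WEIGHT_RE = re.compile(r"exact weight\s*(\d+(?:[.,]\d+)?)\s*kg",
                       re.IGNORECASE)

def extract_weight(problem_text):
    """Return (weight_in_kg, cleaned_text), or (None, original_text)."""
    m = WEIGHT_RE.search(problem_text)
    if not m:
        return None, problem_text
    weight = float(m.group(1).replace(",", "."))  # decimal comma -> point
    cleaned = (problem_text[:m.start()] + problem_text[m.end():]).strip(" ,")
    return weight, cleaned

w, text = extract_weight("Notebook, exact weight 1.5kg")
print(w, text)  # 1.5 Notebook
```

Records where nothing but the weight survives would still need the group’s own notes (or “Unknown”) for the problem field.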


I did a little digging and found that 294 records have the weight in the problem field, all recorded by the same group over 2016/2017. About a third of these records have only the weight in kg while the rest record the item type as well.

I wasn’t involved in the data at that time but I would hazard a guess that it could have been group policy or some issue with uploading data to the Fixometer. It might be worth asking that group if they still have their own records and if it would be possible to improve those records.

Recording of problem text varies greatly between groups and individuals for many reasons. We’ve done a little work in the area of data recording/data entry and will be expanding efforts in 2020.

We’ve got a few volunteers testing a spreadsheet template and workflow for bulk uploading of data. Drop me a message if you’d like to try it out.


Some repair groups certainly do - yep usually as part of a more comprehensive ‘check-in’. From memory @Club_de_Reparadores weigh each item - maybe you have some more detail CdR on the way you do it at your events?

:+1: Certainly an argument for this if groups are recording it anyway, and given we have a field for it (currently only used for Misc devices). It wouldn’t require too much change in the calculations, but we’d want to think through how many groups do it, how much of an improvement it makes to the e-waste/CO2 figures, and how it affects the various data collection UIs…

We’ll be doing a group FaultCat session online tomorrow (Wed 29th) - join us if you can!


:cat: :cat: :cat: :cat: :cat: :cat: :cat: :cat: :cat: :white_large_square:

Just 10% to go :slight_smile:


Sorry, I’m a bit behind the curve – I’ve only just seen this, and it’s great! “Scharnier kapot” (Dutch for “hinge broken”) had me stumped, saved by the Translate button :-). Done a few.


But I haven’t quite figured out how it can help me/my repair cafe…

Hi @Ian_Barnard , nice to see you!

  • thinking about the long game, it can help by taking the data upstream to policy discussions, if we can be more precise about what’s failing and what the barriers to repair are
  • as a repair knowledge base, I think it’s useful – if we can sort through previous repair attempts by fault type.

I’m keeping my own wiki notes at the moment on laptops, organised by fault type…


Yes I can see how FaultCat should be very useful to support arm-waving about the need for r2r and design for repair with something closer to evidence.

I’m no expert on the terminology, but isn’t “root cause” analysis about understanding what caused the fault (and potentially trying to stop, or minimise the impact of, recurring problems)? That makes it more relevant to design for repair, whereas the “fault” is what actually needs fixing, which is more directly relevant to fixers.

Is that right?

Root cause might be that the user dropped their laptop, or it got rained on, or a hard drive or fan bearing reached its design life and wore out, or the user installed a second antivirus on top of the first one – or unknown. The resulting fault could be in the system board, storage, configuration or operating system… or in several of these. So I disagree that there must be only one fault – yes, there often is only one, but there’s no reason there can’t be multiple: if that worn-out fan made the power supply overheat, that could also have blown the system board and storage, for example. I suppose what I’m saying is that the definition of “fault” here (i.e. for a repairer facing a non-working item) is “what has to be fixed for the item to work”.

There is only one root cause, though, so we need to be careful about terminology, and we aren’t attempting to attribute root cause in FaultCat.

But don’t get me wrong, because FaultCat is simply brilliant - and it could be even better if we had better data going into it - and if we could collect more info then perhaps @Monique and the team could work on RootCauseCat :slight_smile:


Thanks Ian, I’m glad you enjoyed FaultCat!

Because of the quality of the problem description text, FaultCat can’t really pinpoint either “root cause” or “what has to be fixed”. Rather, as a baby step, it is simply trying to sort the problems into buckets of fault “types”, some of which are deliberately vague and ambiguous, such as “performance” and “configuration” (so as to at least sort out many of the “Unknowns”).

This experiment is a first step in identifying what sort of information we can glean from existing data and how we can improve future data entry. We will be reporting on FaultCat’s outcomes soon and we’re already chatting about drilling further into fault types to produce more granular information, including identifying patterns that might reveal “root causes” and “solutions”. Already we’re getting some insights into issues related to device performance, maintenance and user misuse/misadventure - issues that perhaps can’t all be attributed to “fault” or “failure” (although they might reveal areas for design or manufacturing improvements).

Given the popularity of FaultCat, particularly with regard to the discussions it has sparked, further online tasks are in the planning. :slight_smile: