I've started playing with FaultCat (good distraction from the report I'm supposed to be writing).
Anyway: some faults could be more than one thing. E.g. problems connecting to the wifi could be down to the wifi hardware installed, or could be a configuration issue; and, as it happens with one of my tablets, it's the operating system that doesn't have the correct driver…
If FaultCat is focused on root cause, then there should be a single option (i.e. the categorisation system that's currently in place).
However, if FaultCat is there to take the repairer through a series of possibilities in order to find the problem, it would need a many-to-many relationship implemented (via tags?) so that multiple options could be selected.
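For what it's worth, the many-to-many "tags" idea could be sketched roughly like this with a join table (all table and column names here are hypothetical, not FaultCat's actual schema):

```python
import sqlite3

# Hypothetical sketch: a join table lets one fault record carry several
# fault-type tags, and one tag apply to many records.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE records (id INTEGER PRIMARY KEY, problem TEXT);
    CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE record_tags (
        record_id INTEGER REFERENCES records(id),
        tag_id    INTEGER REFERENCES tags(id),
        PRIMARY KEY (record_id, tag_id)
    );
""")
con.execute("INSERT INTO records VALUES (1, 'wifi will not connect')")
con.executemany("INSERT INTO tags (name) VALUES (?)",
                [("wifi hardware",), ("configuration",), ("operating system",)])
# The same record gets all three candidate fault types attached.
con.executemany("INSERT INTO record_tags VALUES (1, ?)", [(1,), (2,), (3,)])
rows = con.execute("""
    SELECT t.name FROM tags t
    JOIN record_tags rt ON rt.tag_id = t.id
    WHERE rt.record_id = 1 ORDER BY t.name
""").fetchall()
print([r[0] for r in rows])
```

So the wifi example above would simply carry all three tags until the repairer narrows it down.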
P.S. I'm not sure how flexible the data model is on the backend of FaultCat - is there a git repo?
Welcome Ruth! Currently we only want a single "main" opinion from each person. Some of the fault types are deliberately vague because the problem text is often very vague, so we're expecting quite a lot of "unknown", "performance" and "configuration" choices. We're not trying to guess the exact cause; we'd just like to sort the records into buckets of fault "types".
FaultCat collects a maximum of 5 opinions per record, plus it has an opinion from the data dive events for many of the records. Using these opinions we can get an idea of the most likely fault "type".
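A minimal sketch of how a "most likely type" could fall out of those opinions - a plain majority count, which is an assumption on my part, not necessarily FaultCat's actual method:

```python
from collections import Counter

def most_likely_type(opinions):
    """Return the modal fault type and its share of the opinions."""
    counts = Counter(opinions)
    fault_type, votes = counts.most_common(1)[0]
    return fault_type, votes / len(opinions)

# Up to 5 FaultCat opinions plus one from a data dive event (made-up data).
opinions = ["screen", "screen", "power", "screen", "unknown", "screen"]
fault_type, share = most_likely_type(opinions)
print(fault_type, round(share, 2))
```

A low share would flag records where consensus wasn't reached, which feeds into the categorisation-refinement analysis mentioned later in the thread.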
Taking a repairer through a series of possibilities is in the domain of data entry rather than retrospective microtasking. It's an area we started to look into last year with an "alpha" trial of spreadsheet templates for a few interested groups.
FaultCat results, as well as all this feedback and discussion, will help to inform our thinking in regard to data entry at repair events. If we can improve data entry then tasks to retrospectively "fix" data won't be so necessary.
And thanks for zooming out to the bigger picture, @Chris_Moller. You're spot-on to flag all of the different use cases, and how/whether to address all of them. With regard to the most immediate/current purpose of the work on FaultCat:
as part of the Right to Repair Europe Campaign, we're aiming to bring citizen repair data to EU policymakers to influence future regulations, starting with computers
we expect there to be an opportunity to influence regulations on design for repairability for laptop computers in 2020
we're collaborating with ECOS and other organisations to submit evidence based on our data - an initial paper will be out later this winter, with a more technical submission coming in the near future
This touches on the last 3 use cases you listed, Chris.
For this analysis, when we get to a significant number of opinions on faults from FaultCat users, we'll be redoing the analysis of faults we did previously.
This is in two main areas. One, obviously, is looking at the patterns we've seen in the faults reported. The other is reflecting on the available categorisations themselves, and whether they need refining - so that includes quantitative analysis of where consensus was or wasn't achieved on the opinions, and why; it also includes all of the excellent qualitative feedback you are giving, so keep it coming!
(Now, all of the other use cases @Chris_Moller listed are definitely part of our wider strategy and data collection - e.g. sharing repair techniques: when entering data into the Fixometer, a record can be flagged as potentially useful for the Wiki, and we recently had a Wikithon to go through these records and incorporate them. Useful repair info can also be logged against a repair attempt. This isn't in FaultCat itself but is definitely part of the overall picture.
The challenge is always balancing the use case of the data against the overhead of capturing it in a busy volunteer environment. So there are a number of pieces there around looking at the different data to capture, and for which purpose, making that clear to everyone, and not making it a confusing user experience!)
There seems to be a wide spread in the quality of the data initially recorded. For example, some items have a very clear description of the fault (either as initially presented, or as diagnosed), followed by a description of what remedial action was taken; whereas some items simply say things like "Notebook, exact weight 1.5kg", which is just weird. (And I just went through about 50 entries in FaultCat and came across many that weirdly say exactly how heavy the laptops are and nothing else.)
Why would that be? It can't be a coincidence: different people capturing different data, but somehow all failing to capture any details of the problem. Or the fix?
Maybe some groups have been doing a "weigh-in" when the device is first presented? The total weights of prevented e-waste are calculated from category averages rather than from this unstructured data, but there is a field for exact weight in the schema.
If groups have the capability to measure exact weight, should we encourage that to be recorded and used in the totals calculation rather than the category average? And weights that are in the description should be moved into the weight field as a data clean-up.
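That clean-up step could be sketched as a simple regex sweep over the problem text - the function name is hypothetical, and the pattern just targets the "exact weight 1.5kg" style seen in the examples above:

```python
import re

# Pull a "...1.5kg" style weight out of free-text problem fields so it
# could be moved into the dedicated weight column.
WEIGHT_RE = re.compile(r"(\d+(?:[.,]\d+)?)\s*kg", re.IGNORECASE)

def extract_weight_kg(problem_text):
    """Return the weight in kg as a float, or None if none is found."""
    m = WEIGHT_RE.search(problem_text)
    if not m:
        return None
    # Some locales write decimals with a comma, e.g. "2,3 kg".
    return float(m.group(1).replace(",", "."))

print(extract_weight_kg("Notebook, exact weight 1.5kg"))  # 1.5
print(extract_weight_kg("Scharnier kapot"))               # None
```

Records where this returns None would be left for manual review rather than guessed at.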
I did a little digging and found that 294 records have the weight in the problem field, all recorded by the same group over 2016/2017. About a third of these records have only the weight in kg while the rest record the item type as well.
I wasn't involved in the data at that time, but I would hazard a guess that it could have been group policy or some issue with uploading data to the Fixometer. It might be worth asking that group if they still have their own records and whether it would be possible to improve those records.
Recording of problem text varies greatly between groups and individuals for many reasons. We've done a little work in the area of data recording/data entry and will be expanding efforts in 2020.
We've got a few volunteers testing a spreadsheet template and workflow for bulk uploading of data. Drop me a message if you'd like to try it out.
Some repair groups certainly do - yep, usually as part of a more comprehensive "check-in". From memory, @Club_de_Reparadores weigh each item - maybe you have some more detail, CdR, on the way you do it at your events?
Certainly an argument for this if groups are recording it anyway, and given we have a field for it (currently only used for Misc devices). It wouldn't require too much change in the calculations, but we'd want to think through how many groups do it / how much of an improvement it makes to the e-waste/CO2 figures / how it affects various data collection UIs…
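The change to the calculations being discussed here could look something like the following - prefer a recorded exact weight, fall back to the category average. (Field names and the averages are illustrative, not the Fixometer's real schema or figures.)

```python
# Hypothetical per-category averages, in kg.
CATEGORY_AVG_KG = {"laptop": 2.5, "misc": 1.0}

def device_weight_kg(device):
    """Use the exact weight if one was recorded, else the category average."""
    exact = device.get("exact_weight_kg")
    return exact if exact is not None else CATEGORY_AVG_KG[device["category"]]

devices = [
    {"category": "laptop", "exact_weight_kg": 1.5},
    {"category": "laptop", "exact_weight_kg": None},  # falls back to 2.5
]
total = sum(device_weight_kg(d) for d in devices)
print(total)  # 4.0
```

The fallback keeps the e-waste totals working for the majority of groups that don't weigh items.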
Sorry I'm a bit behind the curve - only just seen this, it's great! "Scharnier kapot" (Dutch for "broken hinge") had me stumped - saved by the Translate button :-). Done a few.
Thinking about the long game: it can help by taking it upstream to policy discussions, if we can be more precise about what's failing and what the barriers to repair are.
As a repair knowledge base I think it's useful - if we can sort through previous repair attempts by fault type.
Yes, I can see how FaultCat should be very useful for backing up the arm-waving about the need for r2r and design for repair with something closer to evidence.
I'm no expert on the terminology, but isn't "root cause" analysis about understanding what caused the fault (and potentially trying to stop or minimise the impact of recurring problems) - so it's more relevant to design for repair - whereas the "fault" is what actually needs fixing, which is more directly relevant to fixers?
Is that right?
The root cause might be that the user dropped their laptop, or it got rained on, or a bearing in the hard drive or fan reached its design life and wore out, or the user installed a second AV on top of the first one, or unknown. The resulting fault could be on the system board, in storage, in configuration or the operating system… or in several of these. So I disagree that there must be only one fault - yes, there often is only one, but there's no reason there can't be multiple: if that worn-out fan made the power supply overheat, that could have also blown the system board and storage, for example. I suppose what I'm saying is that the definition of "fault" here (i.e. for a repairer facing a non-working item) is "what has to be fixed for the item to work".
There is only one root cause, though, so we need to be careful about terminology, and we aren't attempting to attribute root cause in FaultCat.
But don't get me wrong: FaultCat is simply brilliant - and it could be even better if we had better data going into it. And if we could collect more info, then perhaps @Monique and the team could work on RootCauseCat.
Because of the quality of the problem description text, FaultCat can't really pinpoint either the "root cause" or "what has to be fixed". Rather, as a baby step, it is simply trying to sort the problems into buckets of fault "types", some of which are deliberately vague and ambiguous, such as "performance" and "configuration" (so as to at least sort out many of the "Unknowns").
This experiment is a first step in identifying what sort of information we can glean from existing data and how we can improve future data entry. We will be reporting on FaultCat's outcomes soon, and we're already chatting about drilling further into fault types to produce more granular information, including identifying patterns that might reveal "root causes" and "solutions". Already we're getting some insights into issues related to device performance, maintenance and user misuse/misadventure - issues that perhaps can't all be attributed to "fault" or "failure" (although they might reveal areas for design or manufacturing improvements).
Given the popularity of FaultCat, particularly the discussions it has sparked, further online tasks are being planned.