hey all, just tried out a few of these. We seem to be missing a category for stuff sitting "above" the operating system.
For example, "deleted multiple antivirus software and now it's running fine!" - this isn't the operating system, it was the fact that they probably had that many AV programmes running concurrently. Wouldn't that need an "application" category? (I can only see "operating system", and "configuration" is a bit too ambiguous).
I think with that particular example, the fault could perhaps be regarded as "Performance". But the categories are definitely open for discussion - this is a great benefit of having many eyes on the data.
For reference, here's the existing list (which we should add to the FaultCat FAQ)
with some context in this blog post, particularly the "Lessons learned" section:
To make FaultCat really useful, IMHO we should decide what we're trying to achieve. There are several possible objectives, but they will drive different ways of categorising faults. Which of the following are we trying to achieve (we probably can't manage all at once)?
Identifying diagnostic techniques that we need to develop or improve
Sharing our experience of faults we've come across, to make it easier for others to spot them
Identifying additional repair resources (diagnostic equipment, tools, spares) that we should have in our armoury
Sharing repair techniques
Developing strategies for keeping old computer equipment running, after the software is declared obsolete
Pressuring manufacturers to behave more ethically or environmentally
Making the case to regulators for tighter regulation of manufacturing
Making the case to regulators for fuller disclosure of service information
Each of these drives different ways of reporting faults. We should be asking repairers to record their experiences with one or two of these specific objectives in mind.
FaultCat builds on the concept of the Open Data Dive events held in 2019. Perhaps the post-event blog post best sums it up:
The aim of Open Repair Data fault classification is not necessarily to pinpoint each exact fault cause, but to group repairs into streams that can be reported and visualised. We'll be working with our policy partners in Brussels to explore this data further.
The other goals that you identify are all valid and are part of a general roadmap. We're already testing pre-configured spreadsheets for data entry at repair events, analysing product categorisation and looking at ways to make sharing repair data easier.
FaultCat is our first experiment in online public involvement with repair data so we wanted to keep it really simple. It has generated a nice bit of public interest so far which feeds our motivation.
Feel free to start new threads to share ideas and suggestions for other tools.
I've started playing with FaultCat (good distraction from the report I'm supposed to be writing).
Anyway: some faults could be more than one thing, e.g. problems connecting to the wifi could be to do with the wifi hardware installed or could be a configuration issue, and, as it happens with one of my tablets, it's the operating system that doesn't have the correct driver…
If FaultCat is focused on root cause, then there should be a single option (i.e. the categorisation system that's currently in place).
However, if FaultCat is there to take the repairer through a series of possibilities in order to find the problem, it would need a many-to-many relationship implemented (via tags?) so that multiple options could be selected.
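Just to illustrate the kind of thing I mean, here's a rough sketch of a many-to-many fault-tag model in Python/SQLite - the table and column names are made up for illustration, not FaultCat's actual schema:

```python
# Rough sketch of a many-to-many fault-tag model (hypothetical table/column
# names, not the real FaultCat schema), using sqlite3 from the standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE record (
    id      INTEGER PRIMARY KEY,
    problem TEXT
);
CREATE TABLE fault_tag (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE   -- e.g. 'wifi hardware', 'configuration', 'operating system'
);
CREATE TABLE record_fault_tag (   -- join table: one record can carry many tags
    record_id INTEGER REFERENCES record(id),
    tag_id    INTEGER REFERENCES fault_tag(id),
    PRIMARY KEY (record_id, tag_id)
);
""")

# Tag one record with two possible causes
conn.execute("INSERT INTO record (id, problem) VALUES (1, 'tablet will not connect to wifi')")
conn.executemany("INSERT INTO fault_tag (id, name) VALUES (?, ?)",
                 [(1, "wifi hardware"), (2, "operating system")])
conn.executemany("INSERT INTO record_fault_tag (record_id, tag_id) VALUES (?, ?)",
                 [(1, 1), (1, 2)])

for (name,) in conn.execute(
        "SELECT f.name FROM fault_tag f "
        "JOIN record_fault_tag rf ON rf.tag_id = f.id WHERE rf.record_id = 1"):
    print(name)   # -> wifi hardware, operating system
```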
P.S. I'm not sure how flexible the data model is on the backend of FaultCat - is there a git repo?
Welcome Ruth! Currently we only want a single "main" opinion from each person. Some of the fault types are deliberately vague because the problem text is often very vague, so we're expecting quite a lot of "unknown", "performance" and "configuration" choices. We're not trying to guess the exact cause, we'd just like to sort the records into buckets of fault "types".
FaultCat collects a maximum of 5 opinions per record, plus it has an opinion from the data dive events for many of the records. Using these opinions we can get an idea of the most likely fault "type".
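To illustrate what we mean by "most likely fault type", a minimal sketch (made-up opinions, not the actual FaultCat code) that just takes the modal choice per record might look like this:

```python
from collections import Counter

# Hypothetical opinions per record: up to 5 FaultCat opinions plus the
# earlier data-dive opinion, all treated as equal votes here.
opinions = {
    "record-101": ["performance", "performance", "configuration", "performance"],
    "record-102": ["unknown", "power/battery", "unknown"],
}

for record_id, votes in opinions.items():
    counts = Counter(votes)
    fault_type, top = counts.most_common(1)[0]
    agreement = top / len(votes)   # share of opinions backing the winning type
    print(f"{record_id}: {fault_type} ({agreement:.0%} agreement)")
```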
Taking a repairer through a series of possibilities is in the domain of data entry rather than retrospective microtasking. It's an area we started to look into last year with an "alpha" trial of spreadsheet templates for a few interested groups.
FaultCat results, as well as all this feedback and discussion, will help to inform our thinking in regard to data entry at repair events. If we can improve data entry then tasks to retrospectively "fix" data won't be so necessary.
And thanks for zooming out to the bigger picture, @Chris_Moller. You're spot-on to flag all of the different use cases, and the question of how (or whether) to address each of them. With regard to the most immediate purpose of the work on FaultCat:
as part of the Right to Repair Europe Campaign, we're aiming to bring citizen repair data to EU policymakers to influence future regulations, starting with computers
we expect there to be an opportunity to influence regulations of design for repairability for laptop computers in 2020
we're collaborating with ECOS and other organisations to submit evidence based on our data - an initial paper will be out later this winter, with a more technical submission coming in the near future
This touches the last 3 use cases you listed, Chris.
For this analysis, once we get a significant number of opinions from FaultCat users, we'll be redoing the fault analysis we did previously.
This is in two main areas - one, obviously, is looking at the patterns we've seen in the faults reported. The other is reflecting on the available categorisations themselves and whether they need refining - that includes quantitative analysis of where consensus was or wasn't achieved on the opinions, and why; it also includes all of the excellent qualitative feedback you are giving, so keep it coming!
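As a rough illustration of that quantitative side (made-up numbers, not our actual analysis), even a simple pass over per-record agreement gives a per-category consensus rate:

```python
from collections import defaultdict

# Hypothetical (record, winning fault type, agreement ratio) tuples, e.g. the
# output of a majority-vote step like the sketch above. Not real FaultCat results.
results = [
    ("record-101", "performance", 0.75),
    ("record-102", "unknown", 0.40),
    ("record-103", "performance", 1.00),
    ("record-104", "configuration", 0.50),
]

CONSENSUS = 0.60   # arbitrary threshold for "consensus reached"

by_type = defaultdict(list)
for _, fault_type, agreement in results:
    by_type[fault_type].append(agreement >= CONSENSUS)

for fault_type, flags in sorted(by_type.items()):
    rate = sum(flags) / len(flags)
    print(f"{fault_type}: consensus on {rate:.0%} of records")
```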
(Now, all of the other use cases @Chris_Moller listed are definitely part of our wider strategy and data collection - e.g. sharing repair techniques: when entering data into the Fixometer, a record can be flagged as potentially useful for the Wiki, and we recently had a Wikithon to go through these records and incorporate them. Or useful repair info can be logged against a repair attempt. This isn't in FaultCat itself but is definitely part of the overall picture.
The challenge is always balancing the use case of the data against the overhead of capturing it in a busy volunteer environment. So there are a number of pieces there around deciding which data to capture, for which purpose, making that clear to everyone, and not making it a confusing user experience!)
There seems to be a very wide spread in the quality of data that has been initially recorded. For example, some items have a very clear description of the fault (either as initially presented, or as diagnosed), followed by a description of what remedial action was taken; whereas some items simply say something like "Notebook, exact weight 1.5kg", which is just weird. (I just went through about 50 entries in FaultCat and came across many that record nothing but how heavy the laptop is.)
Why would that be? It can't be a coincidence. Different people capturing different data, but somehow at the same time not capturing any details of the problem, or the fix?
Maybe some groups have been doing a "weigh-in" when the device is first presented? The total weights of prevented e-waste are calculated from category averages rather than this unstructured data, but there is a field for exact weight in the schema.
If groups have the capability to measure exact weight, should we encourage that to be recorded and used in the totals calculation rather than the category average? And weights that are in the description should be moved into the weight field as a data clean-up.
I did a little digging and found that 294 records have the weight in the problem field, all recorded by the same group over 2016/2017. About a third of these records have only the weight in kg while the rest record the item type as well.
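For that clean-up, something along these lines could pull the weights out of the problem text - a sketch only, with field names assumed from the discussion above rather than a tested migration:

```python
import re

# Sketch: extract a "1.5kg"-style weight from the problem text and move it
# into the weight field. The 'problem'/'weight' keys are assumptions about
# the export format, not a confirmed schema.
WEIGHT_RE = re.compile(r"(?:exact\s+weight\s+)?(\d+(?:\.\d+)?)\s*kg", re.IGNORECASE)

def extract_weight(record):
    match = WEIGHT_RE.search(record.get("problem", ""))
    if match and not record.get("weight"):
        record["weight"] = float(match.group(1))
        # strip the weight phrase out of the problem text
        record["problem"] = WEIGHT_RE.sub("", record["problem"]).strip(" ,;")
    return record

print(extract_weight({"problem": "Notebook, exact weight 1.5kg", "weight": None}))
# -> {'problem': 'Notebook', 'weight': 1.5}
```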
I wasn't involved in the data at that time but I would hazard a guess that it could have been group policy or some issue with uploading data to the Fixometer. It might be worth asking that group if they still have their own records and if it would be possible to improve those records.
Recording of problem text varies greatly between groups and individuals for many reasons. We've done a little work in the area of data recording/data entry and will be expanding efforts in 2020.
We've got a few volunteers testing a spreadsheet template and workflow for bulk uploading of data. Drop me a message if you'd like to try it out.