Maybe some groups have been doing a "weigh-in" when the device is first presented? The total weights of prevented e-waste are calculated from category averages rather than from this unstructured data, but there is a field for exact weight in the schema.
If groups have the capability to measure exact weight, should we encourage that to be recorded and used in the totals calculation rather than the category average? And weights that are sitting in the description should be moved into the weight field as a data clean-up.
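To make the proposed change concrete, here's a minimal sketch of a totals calculation that prefers an exact recorded weight and falls back to the category average - the field names and average figures are hypothetical, not the actual Fixometer schema:

```python
# Minimal sketch, not the actual Fixometer code: prefer an exact recorded
# weight when present, otherwise fall back to the category average.
# Field names and the average figures below are hypothetical/illustrative.

CATEGORY_AVG_KG = {"Laptop": 2.5, "Kettle": 1.0, "Misc": 1.0}

def device_weight_kg(record):
    exact = record.get("weight")  # exact weight, if the group did a weigh-in
    if exact:  # present and non-zero
        return float(exact)
    return CATEGORY_AVG_KG.get(record.get("category"), 0.0)

def total_ewaste_kg(records):
    return sum(device_weight_kg(r) for r in records)

records = [
    {"category": "Laptop", "weight": 2.1},   # weighed at check-in
    {"category": "Kettle", "weight": None},  # no weigh-in: use average
]
print(total_ewaste_kg(records))  # 3.1
```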
I did a little digging and found that 294 records have the weight in the problem field, all recorded by the same group over 2016/2017. About a third of these records have only the weight in kg while the rest record the item type as well.
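For the clean-up idea above, something along these lines could pull the weight out of the problem text - the pattern is just a guess at formats like "1.5kg" or "2 kg" and would need checking against those 294 records before use:

```python
import re

# Rough sketch for the clean-up: extract a "<number> kg" figure from the
# problem text. The pattern is a guess at how those records might look
# (e.g. "1.5kg", "laptop, 2 kg") and would need verifying against the data.
WEIGHT_RE = re.compile(r"(\d+(?:[.,]\d+)?)\s*kg\b", re.IGNORECASE)

def extract_weight_kg(problem):
    match = WEIGHT_RE.search(problem or "")
    if not match:
        return None
    # Accept both decimal point and comma (e.g. "1,5kg")
    return float(match.group(1).replace(",", "."))

print(extract_weight_kg("hinge broken, 1,5kg"))  # 1.5
print(extract_weight_kg("no weight recorded"))   # None
```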
I wasn't involved in the data at that time, but I would hazard a guess that it could have been group policy or some issue with uploading data to the Fixometer. It might be worth asking that group if they still have their own records and if it would be possible to improve those records.
Recording of problem text varies greatly between groups and individuals for many reasons. We've done a little work in the area of data recording/data entry and will be expanding efforts in 2020.
We've got a few volunteers testing a spreadsheet template and workflow for bulk uploading of data. Drop me a message if you'd like to try it out.
Some repair groups certainly do - yep, usually as part of a more comprehensive "check-in". From memory @Club_de_Reparadores weigh each item - maybe you have some more detail, CdR, on the way you do it at your events?
Certainly an argument for this if groups are recording it anyway, and given we have a field for it (currently only used for Misc devices). It wouldn't require too much change in the calculations, but we would want to think through how many groups do it / how much of an improvement it makes to the e-waste/CO2 figures / how it affects various data collection UIs…
Sorry I'm a bit behind the curve - only just seen this, it's great! "Scharnier kapot" (Dutch for "hinge broken") had me stumped, saved by the Translate button :-). Done a few.
Thinking about the long game, it can help by taking the data upstream to policy discussions, if we can be more precise about what's failing and what the barriers to repair are.
As a repair knowledge base I think it's useful - if we can sort through previous repair attempts by fault type.
Yes I can see how FaultCat should be very useful to support arm-waving about the need for r2r and design for repair with something closer to evidence.
I'm no expert on the terminology, but isn't "root cause" analysis about understanding what caused the fault (and potentially trying to stop or minimise the impact of recurring problems)? So it's more relevant to design for repair, whereas the "fault" is what actually needs fixing, which is more directly relevant to fixers.
Is that right?
Root cause might be that the user dropped their laptop, or it got rained on, or a bearing in the hard drive or fan reached its design life and wore out, or the user installed a second AV on top of the first one, or unknown. The resulting fault could be on the system board, in storage, in configuration or in the Operating System… or in several of these. So I disagree that there must be only one fault - yes, there often is only one, but there's no reason there can't be multiple - if that worn-out fan made the power supply overheat, that could have also blown the system board and storage, for example. I suppose what I'm saying is that the definition of "fault" here (i.e. for a repairer facing a non-working item) is "what has to be fixed for the item to work".
There is only one root cause, though, so we need to be careful about terminology, and we aren't attempting to attribute root cause in FaultCat.
But don't get me wrong, because FaultCat is simply brilliant - and it could be even better if we had better data going into it - and if we could collect more info, then perhaps @Monique and the team could work on RootCauseCat.
Because of the quality of the problem description text, FaultCat can't really pinpoint either "root cause" or "what has to be fixed". Rather, as a baby step, it is simply trying to sort the problems into buckets of fault "types", some of which are deliberately vague and ambiguous, such as "performance" and "configuration" (so as to at least sort out many of the "Unknowns").
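For anyone curious what "sorting into buckets" might look like in code, here's a toy sketch - this is not FaultCat's actual implementation, and the bucket names and keyword lists are made up for illustration:

```python
# Toy sketch of bucketing problem text into fault "types". NOT FaultCat's
# actual implementation: bucket names and keywords here are illustrative.
FAULT_KEYWORDS = {
    "power": ["won't turn on", "no power", "dead battery"],
    "performance": ["slow", "freezes", "hangs", "overheats"],
    "configuration": ["settings", "update", "install", "password"],
}

def bucket_fault(problem):
    text = (problem or "").lower()
    for bucket, keywords in FAULT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return bucket
    return "Unknown"  # deliberately a catch-all, as described above

print(bucket_fault("Laptop is very slow and freezes"))  # performance
print(bucket_fault("Scharnier kapot"))                  # Unknown
```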
This experiment is a first step in identifying what sort of information we can glean from existing data and how we can improve future data entry. We will be reporting on FaultCat's outcomes soon, and we're already chatting about drilling further into fault types to produce more granular information, including identifying patterns that might reveal "root causes" and "solutions". Already we're getting some insights into issues related to device performance, maintenance and user misuse/misadventure - issues that perhaps can't all be attributed to "fault" or "failure" (although they might reveal areas for design or manufacturing improvements).
Given the popularity of FaultCat, particularly with regard to the discussions it has sparked, further online tasks are being planned.