Sanitation in Khayelitsha is politically controversial, with a great deal of work being done in the area by both the City of Cape Town and various civic organizations. A group of these, consisting of the Social Justice Coalition (SJC), Ndifuna Ukwazi (NU), residents of Khayelitsha and various other partners, conducted a ‘Social Audit’ of the state of sanitation in Khayelitsha. This blog entry is an analysis of some of the methods behind the audit.
Despite having a budget of R60-million for its janitorial services project in this financial year (2014/2015), employing 900 janitors in 160 communities of informal settlements across the city and servicing 11 000 toilets, the social audit found that a lack of adequate planning, management and monitoring of the service has undermined the quality of its delivery, resulting in toilets that are often too dirty to use. – [M&G]
The article linked above gives a summary of the Social Audit results:
1. A third of residents say janitors clean their toilet only one day a week. The toilets are supposed to be cleaned every day. But 50% of residents interviewed said that janitors never cleaned toilets in their area.
2. Almost half (49%) of the toilets inspected were classified “dirty” or “very dirty”. In the worst case, this means that the toilet pan is blocked by excrement or rubbish so residents can’t use it, and the floor is covered with rubbish;
3. The ground surrounding more than half the toilets inspected was “dirty” or “very dirty”. Rubbish, rotting food and sewage pollute the toilets’ surroundings;
4. One in four flush-toilets audited were not working, which according to the report “increases the usage rate of other toilets, putting much strain on existing infrastructure”. Janitors are responsible for fixing minor faults, and reporting major faults to the city; and
5. Prior to starting work, janitors are supposed to be inoculated against disease. Only 13% of janitors have been inoculated.
At first inspection of the results, it appears that 54% of the sampled toilets were indeed ‘dirty’ or ‘very dirty’. According to the SJC’s classification, a ‘dirty’ toilet is defined as one that has “dirt or excrement but you can still use it carefully”. A ‘very dirty’ toilet is one that “is blocked with excrement or rubbish and you cannot use it”.
First, it is important to distinguish between a toilet blocked with excrement and one blocked with rubbish: one is a bad bowel movement, and the other is a clear act of vandalism. I would imagine that such a difference should be noted when doing a Social Audit.
Then, according to the results, 54.73% of the toilets were deemed to be ‘working’. But 25% of the toilets were classified as ‘very dirty’, which by definition means they are blocked and cannot be used, and one in four flush toilets was separately reported as not working. These figures do not reconcile cleanly; something is off with the results. Now, this does not necessarily nullify the entire survey, but it does mean the results have to be taken with a pinch of salt.
Moreover, the survey emphasizes that only 13% of janitors were inoculated. One should bear in mind that, on average, janitors had been employed for only 3 months at the time of the audit, and we have no idea how long it takes to get inoculated. Additionally, the City employs 600 janitors, whereas the survey audited only 31 of them. This is hardly a representative sample. Across the audit results, one can safely assume a sampling error of about 6% (it is worse for the janitor survey, but that is neither here nor there; salt still applies).
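To give a sense of where a figure like 6% comes from: under the usual simple-random-sampling assumption (ignoring finite-population corrections and any clustering in the sample), the 95% margin of error for a proportion can be sketched as follows. The sample sizes are the ones quoted above; everything else is textbook arithmetic.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample,
    evaluated at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# 193 residents were interviewed; only 31 janitors were surveyed.
print(round(margin_of_error(193) * 100, 1))  # ~7.1 percentage points
print(round(margin_of_error(31) * 100, 1))   # ~17.6 percentage points
```

On these assumptions, the janitor survey’s margin is so wide that the 13% inoculation figure could plausibly sit anywhere between roughly 0% and 30%, which is why the salt applies.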
I have never come across the concept of a “Social Audit” before. The idea is actually brilliant. I am a huge fan of matching social causes with data. The SJC et al should be congratulated for performing the audit, and being incredibly open with the data they collected. You can access all the documents and data, including the questionnaires here.
Glancing at the documents, a lot of effort went into the reporting of results and the data capturing. But the questionnaire seems to have been rushed. There are smart things, like limiting the area ‘outside a toilet’ to 2 metres, and then some rather ambiguous questions. The SJC kept to generally accepted standards by noting auditors’ and data capturers’ names when capturing the data, but then failed to format the data in a way that eases analysis.
What follows are my comments on the data-related aspects of the audit. I have pointed out what I view as the major issues with the audit, along with a few results calculated from the raw data. I have made no comments about the politics surrounding the report, and simply hope that the next time such an audit is done (which I am looking forward to), it gets better.
If there are any errors in my analysis, consider the data issues raised in this critique as the likely reason why.
Before one even begins designing a questionnaire, one needs to decide whether one wants exploratory data, or formalized and standardized data. An example of an exploratory question would be “What do you think about the state of the toilets in your area?”, where the response is free-form and open-ended. A formalized question would be “Rate the state of the toilet, on a scale of…”, with a suitable scale given. Generally, exploratory questions are asked before an audit is performed, because they allow you to decide what responses should be offered in a formalized question. For large surveys, free-answer questions make analysis more difficult, as each answer has to be encoded before it can be analyzed. For the most part, the SJC audit is formal.
Questionnaire design is more art than science. It is also usually your first point of failure: if your data is wrong because of a bad questionnaire, you cannot continue. Bad data analysis can be fixed – you simply re-do the analysis; bad data cannot. A lot of thought has to go not only into the text of each question, but even into the order in which the questions are asked.
For the Scottish referendum, Ipsos was tasked with conducting tests on various question texts. There were four proposed versions of the question:
1. Do you agree that Scotland should be an independent country? Yes/No (originally proposed question)
2. Do you want Scotland to be an independent country? Yes/No
3. Should Scotland become an independent country? Yes/No
4. Should Scotland be an independent country? Yes/No
At first glance, there does not seem to be much difference between the 4 questions. Each question was measured according to clarity and neutrality. Clarity, very simply, means the question needs to be fully understood. But neutrality is a little more complicated. The UK Electoral Commission guidelines state that a referendum question “should avoid encouraging voters to consider one response more favourably than another.” With regards to neutrality, it is not just actual neutrality that matters, but also the perception of neutrality. As Ipsos says, “if the question is seen as tending to lead people in a particular direction, however subtly, that has the potential to undermine voters’ confidence in the process.”
With this in mind, the first question, starting with “Do you agree…”, is clearly leading and biased, and was thus rejected. Version 2 was rejected because the word “want” was seen as “inappropriate in the context of a referendum”, and Version 3 was rejected because the word “become” was seen as, amongst other things, ambiguous – it could mean “at some point in the future” instead of now.
It was then recommended that Version 4 be used in the referendum, as it was clear and unbiased.
The SJC Audit is hardly of the same severity as a referendum vote, but the referendum serves as a good example of both the implications of question wording, and the notions of ‘clarity’ and ‘neutrality’ in questionnaire design.
You can find the full report here. It is well worth reading.
The SJC audit consists of 3 separate questionnaires: one for janitors, one for residents, and a physical verification of the toilets.
I am not sure how the questionnaire was administered – in that, from what I gather, it was not a self-reporting exercise, but rather trained(?) auditors approached the public and asked the questions. One would hope that was the case, otherwise a question like “Date: 16th July, 17th July, 18th July” has no meaning.
Compare questions 15 and 16a in the Janitor survey:
15) Zeziphi izinto zokusebenza onazo? What personal protective equipment do you have at the moment? tick all
16a) Yintoni onayo ngoku okanye okwazi ukuyifumana? Tikisha What do you have at the moment or can you easily get? tick all
From Q16a, I would assume “or can you easily get” should also apply to Q15? Question 15 lists items such as “2 Uniforms”. Clarity is needed as to whether the janitor was issued with the 2 uniforms, or currently has 1 and can actually get a replacement should it be needed. It could even be interpreted as “are you carrying 2 uniforms right now”.
Question 16a should also be clearer and ask “What equipment do you have at the moment”. There is the option to give an “other”, but for a questionnaire measuring the management and effectiveness of the service, it may have been worth asking: “What equipment do you need, but are not supplied with?”
There is also a matter of consistency: Q15 does not have “Tikisha”. This would not normally be a problem if the auditor were administering the questionnaire, but one should never over-estimate an auditor’s abilities. It is better to have clear instructions than to hope that instructions are carried out clearly.
I cannot comment on the Xhosa text, because I do not speak Xhosa. So my comments pertain only to the English text of questions.
Consider text such as the following:
19a) Have you been inoculated? yes / no
Q19b, though a clear follow-up question, would be better phrased as: “When were you inoculated?”
Though the questions are functional, I would have to recommend rewriting most of them.
Now, it is not just clarity of questions that is important, but also order. The first 5 questions of the Residents survey are as follows:
1) Name of auditor:
2) Name of resident (optional):
3) Gender: man/woman
5b) Date: 16th July 17th July 18th July
5c) Section: BM/PJS/BT/ENKANINI________________
Questions 1 and 5 need to be answered by the auditor, and questions 2 through 4 pertain to the resident. So surely it would make sense to re-order the questions, so that the auditor can fill in their parts ahead of time and then start right away with the resident?
In the Resident Survey, one of the last questions is:
How satisfied are you with the janitorial service?
This is after asking how often the toilets are cleaned, where janitors get keys, and so on. Though the question itself is fine, it should have been asked first, and the residents’ responses then interrogated with further questions. If one wants to assess residents’ feelings about the janitorial service, one should not get them worked up beforehand. This could be seen as biased ordering that can lead to the resident being primed before answering – either for good or for bad.
Another aspect of questionnaire design is the physical design of the questionnaire.
Compare the following designs; first is the SJC questionnaire, and the second is my quick mock-up.
The first thing: clarity must apply as much to layout as it does to text. The SJC questionnaire contains no instructions, no heading, no talking points for the auditor. It is unclear whether you are supposed to tick, circle, or cross out the options that apply. Again, one should never over-estimate an auditor’s ability.
It is also good to clearly delineate what is a question and what is a space for an answer. One very simple thing to do is to provide lines on which a responder can write their answers throughout. For instance, in the SJC questionnaire, there is a line for Q6b, but not for Q4 or Q5.
One final thing to consider for a survey is the possible ethical issues. Best practice is to supply auditors with a script that, amongst other things, includes the following three statements:
1. Do you agree to do the survey?
2. Are you doing so voluntarily?
3. You can withdraw and stop at any time.
I must commend the SJC for making the data of their survey publicly available. For the most part, they followed generally accepted guidelines for data capturing. However, there are a few issues with the way the data was captured that make analyzing the results particularly difficult. Since the sample was small, these problems can be fixed manually. Had the sample been any bigger, I would not have performed any analysis, and would have deemed the data very dirty.
Take a look at the tables below. On the left is the data supplied by the SJC, and on the right is the correct formatting of the data:
The data shown are the results of the Janitor Survey. Question 9 asked which parts of the toilet the janitors cleaned; they were given options to select, and then asked for “other” if there was anything else they cleaned. Firstly, if there are options that can be selected, it helps to have a separate column for each option. In the current format, if I wished to count how many janitors cleaned the “toilet pan”, I would have to do a manual count. Whereas if there were a column for “toilet pan” containing some mark – be it an ‘x’ or a ‘1’ – counting would be quick and basically automatic.
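A minimal sketch of that reshaping, turning one comma-separated multi-select answer per janitor into indicator columns (the option names and answers here are invented for illustration, not taken from the audit data):

```python
# Hypothetical multi-select answers in the flat capture style: one
# comma-separated string per janitor (option names are illustrative).
raw = [
    "toilet pan, floor inside",
    "toilet pan, walls, floor outside",
    "floor inside, toilet pan",
]

OPTIONS = ["toilet pan", "walls", "floor inside", "floor outside"]

# One 0/1 indicator column per option: counting becomes a sum,
# not a manual read through free text.
rows = [
    {opt: int(opt in {p.strip() for p in r.split(",")}) for opt in OPTIONS}
    for r in raw
]
pan_count = sum(row["toilet pan"] for row in rows)
print(pan_count)  # 3
```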
Secondly, you will notice that the SJC has recorded errors, I am assuming, as ‘999’ – either because the result could not be read, or because the interviewee refused to answer. While any sequence can be used as an error mark, it is best to use the standard ‘NA’ for ‘not applicable’. In this example, the ‘999’ makes no difference. But in a column of numbers (as there are), such as “How often are the toilets cleaned”, any automatic analysis would read ‘999’ as a valid numerical result and dramatically skew the average.
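To see why ‘999’ is dangerous in a numeric column, compare a naive mean against one that treats the error code as missing; the values below are invented for illustration:

```python
# Illustrative "days cleaned per week" column: one error captured as 999
# versus the same error captured as missing (None, i.e. 'NA').
cleaned_999 = [2, 1, 0, 3, 999, 2]
cleaned_na = [2, 1, 0, 3, None, 2]

naive_mean = sum(cleaned_999) / len(cleaned_999)  # 999 treated as real data
valid = [v for v in cleaned_na if v is not None]
robust_mean = sum(valid) / len(valid)             # missing rows dropped

print(round(naive_mean, 1))   # 167.8 – wildly skewed
print(round(robust_mean, 1))  # 1.6
```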
As I said earlier, the questionnaire is your first and most likely point of failure in data acquisition. If you look under the ‘other’ column, the fourth statement is ‘and next to outside the toilet’. Correct me if I am wrong, but surely that, by the sound of it, should have been ‘floor outside’? It is then not strictly an ‘other’, but one of the given options. This is something the auditor should have picked up when recording the results.
There are also three janitors that mention cleaning around the pipes. This could mean that ‘around the pipes’ is something that should be cleaned. How many other janitors would have said so, were it given as an option and were they asked? If an exploratory survey had been done before this one, such an option may have been discovered and added to the list.
With this in mind, one must be careful not to prompt or prime respondents when giving options. If someone is given options to choose from, even with the choice of ‘other’, they will very often stick to the options given and fail to adequately use the ‘other’ option.
When analyzing results from a survey, amongst the many things to evaluate for integrity, the two common sources of error are sample size and sample distribution. You want the sample size to be as large as possible, ideally covering the entire population.
Here are the sample sizes in comparison to the population:
| Survey | Sample size | Population | Coverage |
| --- | --- | --- | --- |
| Residents | 193 people | 160 communities | <1.2% |
I have assumed a community contains at least 100 people.
When it comes to sample distribution, for this particular survey, there are two things to consider: 1. The spatial distribution of samples, and 2. An accurate sample of residents.
The SJC not only opened their data, but even supplied a map of the toilets sampled! Here is that map:
Now, I am not familiar with the geographic distribution of toilets in Khayelitsha, but I am assuming that they are more scattered than shown above. This is particularly important because one would assume that if toilets are in groups of, say, 5, and 1 is dirty, what are the odds that the others will be dirty too? And if a group of toilets is dirty and uncleaned, could it be a local community issue – be it the janitor not visiting that specific group, or the community having a member or two prone to vandalism? I would think such questions are important for a Social Audit.
By sampling a greater spatial distribution, you not only account for these issues, but you could even pin-point if there were specific issues in specific communities.
On point 2, about an accurate sample of residents, we usually use race, age or gender. The sample is too small for age, but gender is given. In the residents survey, the gender of the residents was given as 68% female and 32% male. This means that we are possibly working with a skewed sample (assuming a more even gender split in the population), and on that assumption, any result based on gender would have to be scaled appropriately.
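One standard way to do that scaling is post-stratification weighting: weight each respondent by the ratio of their group’s population share to its sample share. A minimal sketch, with made-up satisfaction scores and an assumed 50/50 population split (neither figure comes from the audit):

```python
# Hypothetical (gender, satisfaction 1-5) pairs; the 50/50 population
# split is an assumption, not a figure from the audit.
sample = [("female", 4), ("female", 2), ("female", 3), ("male", 1), ("male", 2)]

population_share = {"female": 0.5, "male": 0.5}
sample_share = {
    g: sum(1 for s, _ in sample if s == g) / len(sample)
    for g in population_share
}
# Weight = population share / sample share, per group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw_mean = sum(v for _, v in sample) / len(sample)
weighted_mean = (
    sum(weights[g] * v for g, v in sample) / sum(weights[g] for g, _ in sample)
)
print(raw_mean, round(weighted_mean, 2))  # 2.4 2.25
```

The weighted mean down-weights the over-represented group, so a gender-correlated answer no longer leans towards whoever was easier to interview.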
I wish to now look at two sets of questions from the Residents Survey:
9a) Is your toilet normally locked?
yes / no
9b) If YES, do you have a key?
yes / no
10a) What do janitors do when a toilet is locked?
don’t clean / find key / wait for resident / other
This is an interesting set of questions. Imagine it from a janitor’s perspective. If you arrive at a locked toilet, you can either leave, or try to find the key. If you leave, the toilet goes uncleaned. But what if you attempted to find the key, waited for a resident who never arrived, or searched for the key and could not find it? Would the resident then say that you never cleaned the toilet? Does the fault then lie with the resident or with the janitor? If the questions were clearer, and were correlated with the Janitor survey, we could get meaningful results from this.
The second set is:
11) How often does a janitor clean the toilet each week?
never / 1 day / 2 days / 3 days / 4 days / 5 days / 6 days / 7 days
13a) How satisfied are you with the janitorial service?
very satisfied / satisfied / don’t know / unsatisfied / very unsatisfied
If you look at the results of question 11, what is very striking (besides the lack of question clarity) is that no one said the toilets are cleaned “4 days”. It is actually incredibly difficult for people to answer questions like this, especially when the events are sporadic. It is easy to say 1, 2 or 3 days, because that reads as ‘not very often in a week’. Likewise, 5, 6 or 7 days reads as ‘very often in a week’. “4 days” falls into neither mental category. What should have been asked is:
Did the Janitor come today?
Did they come yesterday?
How often do they clean? not very often / often / very often etc.
And then, based on that question, we can then infer the regularity of janitor cleaning. Likewise, this information should have been corroborated in the janitor survey.
One could even have pondered whether, if the janitor only cleans the toilet twice a week, that is because the toilet is actually clean on the other days of the week and thus does not need to be cleaned.
When you view Q11 and Q13a together – encoding “2 days” as ‘2’ and “never” as ‘0’, and likewise ‘very satisfied’ as ‘5’ through to ‘very unsatisfied’ as ‘1’ – we have the following correlations:
Circle size is essentially the percentage of residents that fall in each pair. As you can see, if toilets are cleaned 0 times a week (i.e. ‘never’), residents are very unsatisfied. However, if toilets are cleaned once a week, residents are generally neutral on average, leaning slightly towards ‘unsatisfied’. But averaging over 1 to 3 cleanings a week, residents are generally happy. Basically, if toilets are cleaned at least once a week, residents will be happy – according to the data. This doesn’t seem to match the reported perceptions of residents.
Resident satisfaction has been removed for 4 to 6 times a week, because the results were inconclusive (that ‘999’ again).
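How the chart’s circle sizes and per-frequency averages are computed can be sketched like this; the (days cleaned, satisfaction) pairs below are invented for illustration, the real ones come from the survey spreadsheet:

```python
from collections import Counter

# Invented (days cleaned per week, satisfaction 1-5) pairs; the real
# values come from the Residents Survey data.
pairs = [(0, 1), (0, 1), (1, 3), (1, 2), (2, 4), (3, 4), (3, 5)]

# Circle size on the chart = share of residents at each pair.
bubble = {pair: n / len(pairs) for pair, n in Counter(pairs).items()}

# Mean satisfaction per cleaning frequency:
for days in sorted({d for d, _ in pairs}):
    scores = [s for d, s in pairs if d == days]
    print(days, sum(scores) / len(scores))
```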
Something I would want the data to show is a comparison of how often a toilet is cleaned with how clean it actually is, as mentioned before.
If one breaks down satisfaction by community, I would rather stay in Section PJS than BM. The issue of ‘clean toilets’ and ‘resident satisfaction’ varies by community.
In the Physical Verification survey, what is most interesting when looking at questions 8 and 9
8) How clean is the toilet inside?
9) How clean is the ground outside the toilet (2m)?
is that if a toilet is clean inside, the ground outside is also most likely to be clean. If the toilet is dirty, the ground outside is most likely to be dirty, and likewise for ‘very dirty’. Now, correlation doesn’t imply causation, but I would be willing to bet that if a community invests in keeping the outside of a toilet clean, invariably, the inside will be clean as well.
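That relationship is easy to quantify once the ratings are in tabular form; a toy sketch, with invented (inside, outside) rating pairs rather than the audit’s actual data:

```python
# Invented (inside, outside) cleanliness pairs; the real ratings come
# from questions 8 and 9 of the Physical Verification survey.
pairs = [
    ("clean", "clean"), ("clean", "clean"), ("clean", "dirty"),
    ("dirty", "dirty"), ("dirty", "dirty"),
    ("very dirty", "very dirty"), ("very dirty", "dirty"),
]
agree = sum(1 for inside, outside in pairs if inside == outside)
print(f"{agree}/{len(pairs)} toilets have matching inside/outside ratings")
```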
Out of the 528 toilets that were checked, only 426 were able to be inspected; the rest were locked. One should have some sympathy for the janitors here. When a toilet was locked, the auditors should have documented how difficult it was to find a key, what steps they took, and how long those steps took.
Working 8 hours a day, with 600 janitors and 11 000 toilets, if a janitor spent 26 minutes on each toilet, then every toilet could be cleaned at least 5 times a week. This is a quick and dirty back-of-the-envelope calculation and does not take into account sick janitors, time to find keys, etc. The issue of unclean toilets may be partly attributable to a lack of janitorial resources.
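Spelled out, the arithmetic above is:

```python
# The back-of-the-envelope sum above, made explicit. The inputs are the
# figures quoted in this post; it ignores sick leave, travel time,
# locked toilets, and everything else that eats into a shift.
janitors = 600
minutes_per_shift = 8 * 60
work_days_per_week = 5
toilets = 11_000
minutes_per_clean = 26

cleaning_minutes_per_week = janitors * minutes_per_shift * work_days_per_week
cleans_per_toilet_per_week = cleaning_minutes_per_week / minutes_per_clean / toilets
print(round(cleans_per_toilet_per_week, 1))  # 5.0
```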
The SJC et al must be commended for their efforts. This is the first time that I have seen a South African NGO apply data to a civic cause. They must be commended not only for conducting the Social Audit, but also for the open data access to the results. Yes, there were some mistakes here and there, but nothing overly erroneous, and these are issues that can be fixed. One, however, is left with more unanswered than answered questions – something, I hope, which leads to a second social audit. With a little more planning, the results could be taken incredibly seriously and make for meaningful, data-backed change.