Hi - I second the remark about the global indicator. I have worked closely for a couple of years with my university’s accessibility group to assess, identify, and then fix accessibility issues within our public-facing interfaces. Our experience is that there is no effective fully automated method; human interpretation and intervention are always required. There are several problems.

First, you generally have to have a target level of accessibility. There are a variety of standards, with differing levels of conformance within each (WCAG A/AA/AAA, for example). I suppose you could publish a raw ranking of ‘scores’ for libraries in relation to various standards, but is that meaningful to anyone? I am not sure.

Primarily because of issue 2, which is that you need multiple tools to get an accurate picture of your libraries’ accessibility relative to your target. Many of these tools are commercial, some are browser plugins, and some are server-based, with different universities and government agencies relying on different compliance assessment tools. For most universities, the realistic bottom line is ‘are we in compliance with what the federal government requires of institutions receiving federal funding?’ So that is the accessibility target: the Section 508 requirements, plus the other laws and statutes that relate to web accessibility, including state laws, depending on your state.

Issue 3 is that an automated review of accessibility issues will be, in my experience, highly error-prone depending on the complexity of your site and the technologies used. For instance, one site I assessed and ‘corrected’ was a page that used Bootstrap / Angular / Node.js to implement large drop-down menus that were effectively hidden from, but also organized for, screen readers using various tags and positioning within the code. It was 100% effective in all supported browsers and was parsed correctly by common screen readers, yet one or two assessment tools flagged it as a serious issue. Those automated claims were wrong, and it took a human review of the reported findings to reach a correct assessment. There were, and are, many instances of this across all of our properties. Had that been a strictly automated process, we would have recorded a significant false positive and given the interface a much lower score than it actually warranted.

Assessment of library accessibility is a mix of qualitative and quantitative work. An accurate review would contain elements of both, not strictly one or the other, IMO.
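
To illustrate the kind of workflow I mean, here is a minimal sketch (Python; the JSON layout is invented, since every tool's report format differs) of the triage step we effectively performed by hand. Automated findings become candidates for human review; they are never a score by themselves:

    import json

    def triage(report_path):
        """Turn automated findings into a human-review queue."""
        with open(report_path) as f:
            findings = json.load(f)["findings"]  # invented report layout
        return [
            {
                "rule": item["rule"],           # e.g. a flagged menu widget
                "selector": item["selector"],   # where on the page it fired
                "status": "needs human review", # never auto-fail on this alone
            }
            for item in findings
        ]

    for task in triage("automated_report.json"):
        print(task["rule"], "->", task["status"])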

> On Oct 23, 2020, at 8:37 AM, Caffrey-Hill, Julia <[log in to unmask]> wrote:
> 
> Hello Dr. Parthasarathi Mukhopadhyay,
> I can provide some partial thoughts, and there are other members who have strong, knowledgeable perspectives that may want to chime in also.
> 
> Re: 2. 
> - For ARIA, there's consensus that a high number of ARIA attributes found on a page is not necessarily an indicator of accessibility; on the contrary, a high count is a red flag that may indicate misuse of ARIA tags, which are easily mishandled. There are others in this community, namely Katherine Deibel, who are prolific on this topic and who I hope can chime in or link to past presentations/resources. (A short sketch of why a raw count is uninformative follows this list.)
> - For your study, as it relates to ARIA specifically, I recommend the AXE browser extension (https://www.deque.com/axe/). I don't think an API is available for it, but it is good for validation and, I believe, well suited to a quantitative study. There is a learning curve to understanding it. Deque Systems, according to their training, split off from the team behind WAVE and built out the tool's capacity for testing ARIA tags.
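> 
> To make the "count is not quality" point concrete, here is a minimal sketch (Python; requests and BeautifulSoup are assumed to be installed, and the URL is a placeholder) that tallies ARIA markup on a page. The total is trivial to compute but says nothing about whether the markup is correct:
> 
>     import requests
>     from bs4 import BeautifulSoup
> 
>     html = requests.get("https://example.edu", timeout=30).text
>     soup = BeautifulSoup(html, "html.parser")
> 
>     # Count every aria-* attribute and role on the page. A high total
>     # can signal ARIA misuse just as easily as careful engineering.
>     aria_count = sum(
>         1
>         for tag in soup.find_all(True)
>         for attr in tag.attrs
>         if attr.startswith("aria-") or attr == "role"
>     )
>     print(aria_count)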
> 
> Re: 3
> - In terms of a globally recognized quantitative indicator, I'm not aware of one. A combination of different tools is recommended, as each has its weak spots. I prefer mixed methods for testing web accessibility.
> - For a large number of websites at a time, I understand the need for a framework. For auditing our e-resources for accessibility, Towson University adapted a framework from Princeton University, which in turn adapted it from another library. My colleagues and I recently presented on this approach (Description: https://wp.towson.edu/tcal/one-step-at-a-time-assessing-e-resources-for-accessibility-compliance/ Recording, 40 min: https://www.youtube.com/watch?v=zQZjTeW-69E&feature=youtu.be). I hope that's helpful, and if so, I'd be interested to hear about it.
> 
> All the best,
> Julia Caffrey-Hill
> Web Services Librarian
> Towson University
> 
> -----Original Message-----
> From: Code for Libraries <[log in to unmask]> On Behalf Of Parthasarathi Mukhopadhyay
> Sent: Thursday, October 22, 2020 7:55 AM
> To: [log in to unmask]
> Subject: [CODE4LIB] Web accessibility and ARIA
> 
> Hello all
> 
> We are trying to measure the web accessibility of some Indian institutes/universities/libraries in the form of a score and then rank those institutions against the score (still at the idea stage). The plan is to fetch data through an API into data wrangling software for further analysis. My questions are as follows (a rough sketch of one such API call appears after the questions):
> 
> 1) Are there other services (apart from WAVE) that provide results in JSON format through an API?
> 2) What is the significance of *ARIA* in determining such a score for web accessibility? Does a higher number of ARIA attributes indicate better accessibility, or is the converse true?
> 3) Is there any globally agreed-upon indicator for web accessibility?
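> 
> As a rough sketch of what we have in mind (Python; the endpoint and parameters follow WebAIM's WAVE subscription API as I understand it, and the key and target URL are placeholders, so please verify against the current documentation):
> 
>     import requests
> 
>     # Request a machine-readable WAVE report (a paid API key is required).
>     resp = requests.get(
>         "https://wave.webaim.org/api/request",
>         params={
>             "key": "YOUR_WAVE_API_KEY",     # placeholder credential
>             "url": "https://example.ac.in", # placeholder target site
>             "reporttype": 2,                # JSON report with item details
>         },
>         timeout=60,
>     )
>     report = resp.json()
>     print(list(report.keys()))  # inspect the structure before scoring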
> 
> Best
> 
> -----------------------------------------------------------------------
> Dr. Parthasarathi Mukhopadhyay
> Professor, Department of Library and Information Science, University of Kalyani, Kalyani - 741 235 (WB), India
> -----------------------------------------------------------------------