Hello all! I wasn't really expecting many responses to this, but since I see that someone is curious and wants to know a little about accessibility software, I'll help and share all that I can.
This post is arranged around quotes to make it easier to follow, and it will be quite long. If you're not into that, would rather have the information in voice form, or can't read tone through text that well, I have made a video demoing what I can do using accessibility software and general software, as well as WaveTracker.
https://drive.google.com/file/d/1_qLm0eeV4ro5khf1HDNGi5rwn6BI9AQU/view?usp=sharing
I took care to highlight the screen and keep the computer's speech slow so those who rely on transcription can use it if they wish, but in some parts the mic audio is loud.
Side note: I really need to get mic filters to protect against clipping.
If you watch the video, please listen through speakers to avoid ear damage!
The following quote from user @SRB2er is what brought my attention to this, so some information on how accessible UI is implemented is needed to understand what's actually going on a bit better.
"@Juan Reina
this may sound like me being a dumbass but how can some software not allow screen reading? can't it just... read the text on screen? "
Actually, no, it can't, although I can understand how you came to that conclusion.
Screen readers don't actually read the text on the screen. When a UI is made with accessibility in mind, it has to create what's known as an accessibility tree, which holds all the information the screen reader can see; that information is then reported to the user when they do something that activates a given control. This is called semantic information, and the screen reader only gets it if the application or UI toolkit provides it.
You might not think about it, but I actually get that question a lot: doesn't a screen reader just read the screen?
The name is a bit misleading. A screen reader works in tandem with the operating system to relay information from UI software to the user, and over the years operating systems have built solutions to hand that information over in the form users require. This is called an accessibility API.
So no, it's not a press-play-and-hear-all-your-info kind of thing; that's text to speech, and that's something else.
Screen readers use TTS, but they aren't TTS. :)
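To make that difference concrete, here is a tiny hypothetical sketch in Rust. None of these types or functions come from a real screen reader or accessibility API; they're made up just to show the flow: plain TTS speaks whatever string you hand it, while a screen reader waits for events from the accessibility API, turns the semantic information into an announcement, and only then uses TTS.

```rust
// Hypothetical sketch: how a screen reader differs from plain TTS.
// None of these types come from a real screen reader or accessibility API;
// they only illustrate the flow of information.

#[derive(Debug)]
enum AccessEvent {
    FocusChanged { name: String, role: String },
    TextChanged { new_text: String },
}

// Plain TTS: you hand it a string, it speaks. That's all it does.
fn speak(text: &str) {
    println!("[TTS] {text}");
}

// A screen reader listens for events from the accessibility API,
// turns the semantic information into an announcement, then uses TTS.
fn handle_event(event: AccessEvent) {
    let announcement = match event {
        AccessEvent::FocusChanged { name, role } => format!("{name}, {role}"),
        AccessEvent::TextChanged { new_text } => new_text,
    };
    speak(&announcement);
}

fn main() {
    // Simulated events, as if the user tabbed onto a button and a checkbox.
    handle_event(AccessEvent::FocusChanged {
        name: "Play".into(),
        role: "button".into(),
    });
    handle_event(AccessEvent::FocusChanged {
        name: "Loop song".into(),
        role: "checkbox".into(),
    });
}
```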
Back to the accessibility tree.
Just like a real tree has roots that tell planters what kind of tree it is, accessibility software gets its first information from the root of the tree as well.
We need to know from the get-go what we're dealing with, be it a window or a document, and that's what the root is for.
Next we need the controls that the accessibility software will see.
This can include things like buttons, checkboxes, lists, and forms, for example.
After that we need to know where those items are located on screen, the text assigned to them, and any items within them, if there are any. These items under the root are called nodes, and the items within those items are called child elements, or just children.
For a more in-depth understanding of the accessibility tree, you can see this link by Sophie Beaumont, who describes in a bit more detail how this works.
https://sbeaumontweb.medium.com/the-accessibility-tree-understanding-and-debugging-fab9df75a1d0
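To picture what that tree actually carries, here is a toy sketch in Rust. This isn't any real toolkit's or screen reader's API, just made-up types showing the kind of semantic information involved: a root that says what we're dealing with, nodes with a role, a name, and a position on screen, and children nested inside them.

```rust
// Toy model of an accessibility tree. These types are made up for
// illustration; they are not a real toolkit's API. They only show the
// kind of semantic information a toolkit hands to the screen reader.

struct Bounds {
    x: f32,
    y: f32,
    width: f32,
    height: f32,
}

struct AccessNode {
    role: &'static str,        // what kind of control this is
    name: String,              // the text assigned to it
    bounds: Bounds,            // where it sits on screen
    children: Vec<AccessNode>, // items inside this item
}

// A screen reader walks the tree and can announce each node it reaches.
fn walk(node: &AccessNode, depth: usize) {
    println!(
        "{}{}: {} at ({}, {})",
        "  ".repeat(depth),
        node.role,
        node.name,
        node.bounds.x,
        node.bounds.y
    );
    for child in &node.children {
        walk(child, depth + 1);
    }
}

fn main() {
    // The root tells the screen reader what it's dealing with: here, a window.
    let root = AccessNode {
        role: "window",
        name: "Example tracker".to_string(),
        bounds: Bounds { x: 0.0, y: 0.0, width: 1280.0, height: 720.0 },
        children: vec![
            AccessNode {
                role: "button",
                name: "Play".to_string(),
                bounds: Bounds { x: 10.0, y: 10.0, width: 60.0, height: 24.0 },
                children: vec![],
            },
            AccessNode {
                role: "checkbox",
                name: "Loop song".to_string(),
                bounds: Bounds { x: 80.0, y: 10.0, width: 100.0, height: 24.0 },
                children: vec![],
            },
        ],
    };

    walk(&root, 0);
}
```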
Now, to WaveTracker.
(If you use something like GTK, Qt, Electron, etc., those do communicate properly with the screen reader on each operating system, so that's not a problem.
However, custom UI, such as what can be made with OpenGL directly, SDL2, Tk, etc., lacks any semantic information that could tell the screen reader what type of control something is, where it is located on the screen, and so on.
So an intermediate layer has to be put between the application and the screen reader, which would generally not be cross-platform, would include lots of hacks, and so on. However, there's a new framework that does this generically, called AccessKit.
https://accesskit.dev/
It's designed for immediate-mode UI, so there is less state to keep around specifically for accessibility, and while it doesn't offer a C# API, the C API is relatively straightforward and easy enough to either generate bindings for or bind manually. It does go through Rust and several other abstractions before it reaches the native platform APIs, so it will be a bit slower than code written against the native API, but this way there wouldn't need to be a huge amount of effort when writing accessibility for UI like this, especially if you want to target multiple desktop platforms.)
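As a rough illustration of that intermediate-layer idea (and only that: the types and function names below are invented for this sketch and are not AccessKit's actual API), an immediate-mode app re-describes its controls every frame right where it draws them, and the adapter forwards that description on to the platform's accessibility API, for example UI Automation on Windows or AT-SPI on Linux.

```rust
// Made-up sketch of an intermediate layer for immediate-mode UI.
// This is NOT AccessKit's real API; it only shows the general shape:
// the app re-describes its controls every frame, and the adapter pushes
// that on to the native accessibility API for the current platform.

#[derive(Clone, Debug, PartialEq)]
struct SemanticNode {
    id: u64,
    role: &'static str,
    name: String,
}

// Stand-in for the cross-platform adapter such a layer would provide.
struct AccessAdapter {
    last_frame: Vec<SemanticNode>,
}

impl AccessAdapter {
    fn new() -> Self {
        Self { last_frame: Vec::new() }
    }

    // Called once per frame with everything the app just drew. A real
    // adapter would diff against the previous frame and forward only the
    // changes to the native accessibility API.
    fn update(&mut self, nodes: Vec<SemanticNode>) {
        if nodes != self.last_frame {
            println!("forwarding {} node(s) to the platform API", nodes.len());
            self.last_frame = nodes;
        }
    }
}

fn main() {
    let mut adapter = AccessAdapter::new();

    // Immediate-mode style: each frame the UI code both draws a widget and
    // reports its semantics, so little extra state has to be kept around.
    for _frame in 0..3 {
        let mut nodes = Vec::new();

        // ...draw the Play button with SDL2/OpenGL here...
        nodes.push(SemanticNode { id: 1, role: "button", name: "Play".into() });

        // ...draw the octave selector here...
        nodes.push(SemanticNode { id: 2, role: "spinbutton", name: "Octave 4".into() });

        adapter.update(nodes);
    }
    // Because nothing changed between frames, only the first frame is forwarded.
}
```

As I understand it, the adapter side is roughly the part a framework like AccessKit provides, so the application itself mostly just has to fill in the semantic information for its controls.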
He's not on here, but GitHub user Esoteric Programmer was the one who recommended AccessKit; he also works on the Linux screen reader Odilia and gave very important help for this post.
https://odilia.app/
As for me, I've done a demo with WaveTracker so people can understand what I can do with it; it's in the Drive link at the start of this post.
At the moment next to nothing but the window title is read, and to open files with it you have to use the Open With dialog, look for wavetracker.exe, and then press F5 to play, and that's it.
Now to the next post by user @nitrofurano.
"@Juan Reina please let us know, as far you tried up to now, which trackers, daws and whatever that does and does not support screen readers, and which operating systems are you using - i'm quite curious about - as from what i can understand, screen readers only can read text used from window manager libraries (like gtk, qt, motif, wxwidgets, etc.), and not those rendered as picture (like sdl, opengl, etc.), so i guess there is why you probably can use text readers on some trackers (like openmpt, famitracker, etc.) and not on some other trackers (like furnace, milkytracker, etc.) - actually, i hope such implementation will not affect that future cross platform support, since i only use gnu/linux (mostly) and macos-x (rarely.)"
I've used audio editors like WavePad and GoldWave; I've tried DAWs and found them extremely cumbersome to use, though I love the realtime listening; and I absolutely dislike MML, both for being cumbersome and for lacking the realtime listening that lets me hear my music as I change it.
I like trackers because most of them have a small interface, they're easy to make music in, and some of them let you slowly put your music down, think everything out, and listen to everything as you change it.
I want to work with OpenMPT and help them with accessibility, but because I haven't heard anything from them in a long time about improvements there, I'm unsure, as we would practically need to start from square one. OpenMPT currently doesn't even allow you to set keyboard shortcuts without accessibility software like screen readers getting stuck in the keyboard shortcut dialog, and this is because the main program doesn't realise that accessibility software should have a handle on the keys as well, and that only the first set of key presses should be passed through to the application.
Along with that, there are many more bugs, such as the sample editor not even being usable, the instrument editor and the sample list not being focusable, and the fact that there isn't even a keyboard shortcut to load a sample, among others.
It's this, along with the fact that as far as I know OpenMPT no longer has accessibility talks going on, that has made me not want to engage with it, as I don't think they would like to start from square one to make sure that all accessibility bugs are found and taken care of.
To put it one way: I love you all and I want to be a part of this, but I am not a programmer. I can and will help with accessibility, but I can't code in the way that blind programmers can.
Over the years I have taken time to write requests like this for accessibility in the programs I care about, like Lemuroid, an Android emulator project. When I do work like this I am very serious, and, well, all I'm saying is that I have been told no on things like this before, and I tend not to want to work with projects that might give me the runaround with very little to show for it, as I am not that good of a writer and writing in this manner takes time for me.
UI libraries like ImGui do make it much easier to build applications, sure, but one reason the blind and people with other disabilities fear them is that, through them, more and more programs may end up inaccessible to those who use screen readers. That has been the main reason disabled programmers made AccessKit, but even then it remains to be seen whether it has an impact on the latency of that information reaching the screen reader, possibly making the program slower in the process as a result of the overhead.
Hell, screen readers weren't even made by those with sight in the beginning, so that's also part of why screen readers can't just *read the screen*.
This took me seven days to write and put together, so I hope the info was helpful.
If I can help with anything more, please do ask!
The screen reader with the largest market share is NVDA, a screen reader for Windows.
https://www.nvaccess.org/