There are a number of speech recognition programs, including Microsoft Windows Speech Recognition, Word Dictation, and Voice to Text for Google Chrome, but the gold standard is still Dragon NaturallySpeaking.
The South Carolina Assistive Technology Program is offering a webinar on the basics of using Dragon NaturallySpeaking on May 19, 2020, starting at 10:00 AM Eastern Time.
Microsoft has created free, accessible tools to help support the creation of content for learners of all abilities. Learn how to access these features, built in across platforms, to support remote learning. The presenters will cover the following features built into Word and Word Online: Dictation, Word Prediction, Translation, and Editor, as well as the ability to customize accessibility features such as color filters, mouse pointer, cursor, etc.
As many as two-thirds of students in classrooms today score below
proficiency in reading, writing, and STEM. This includes students who
speak English as a second language, students with disabilities, and many other
students who do not yet possess the skills needed to meet today’s rigorous
standards. While today’s digital environments provide great tools for
presentation of materials, many lack the STEM and literacy supports needed by these
students. In this session, attendees will see a demonstration of
Read&Write for Google Chrome, EquatIO, and WriQ. These are powerful
programs with over 25 million users. These ELA and STEM-focused products
work with both Office 365 and G Suite. Participants will learn
more about common technology supports, such as text-to-speech, word prediction,
dictation, text and picture dictionaries, and annotations, that can help
ALL students, especially those who struggle with reading and writing.
These tools can be used through the Chrome, Edge, and Internet Explorer browsers on PCs,
Macs, iPads, and Chromebooks.
- Learn to accommodate different learning styles using tools from the Read&Write and EquatIO toolbars
- Create customized vocabulary lists and study guides
- Find new resources for accessing digital text
- Discover a great tool for grading writing with customizable criteria
- Learn how to access a “Free for Teachers” account
Helping Students Struggling with Executive Function Build Organization Skills for Transition Webinar
As covered in a previous blog, making sure website accessibility
is included in the design process is probably the simplest way to ensure it.
But there is still a need for testing.
We need to know what to test for. The guidelines issued by the W3C give a starting place.
The standards outlined in the W3C Web Content Accessibility Guidelines (WCAG) are organized into three levels: A, AA, and AAA.
Level A covers the most basic web accessibility features and
is the minimum standard a website should meet. For example, WCAG guideline 1.1.1 states: “All non-text content that is presented to the
user has a text alternative that serves the equivalent purpose, except for the
situations listed below.” A solution at this level would be providing a
transcript for pre-recorded audio.
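As a minimal sketch, Level A fixes like these can be expressed directly in HTML (the image, audio file, and transcript names here are hypothetical):

```html
<!-- Text alternative for non-text content (WCAG 1.1.1, Level A) -->
<img src="campus-map.png" alt="Map of the campus showing accessible entrances">

<!-- Pre-recorded audio paired with a link to a text transcript -->
<audio controls src="lecture.mp3"></audio>
<p><a href="lecture-transcript.html">Read the transcript of this lecture</a></p>
```

A screen reader will announce the `alt` text in place of the image, and the transcript link gives users who cannot hear the audio an equivalent way to get the content.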
Level AA requirements ensure that content achieves a
greater degree of accessibility. People with disabilities will have an
easier time accessing content that meets Level AA criteria than they would with
content that only meets Level A. A solution at this level would be providing
an audio description for pre-recorded video.
Level AAA is the highest and most complex level of web accessibility. This level includes additional requirements, some of which enhance those established in Level AA criteria. For pre-recorded audio, sign language interpretation should be implemented.
Remember, Level A is the minimum a website should meet for
the page to be considered accessible.
Level AA is the level most websites should strive for; it should also ensure
compliance with the Section 508 standards. Level
AAA is the “Holy Grail” of accessibility, and whenever possible the WCAG standards
at this level should be implemented.
The next post will cover methods and tools to test a website
for the WCAG standards.
WordCamp Atlanta 2018 (a weekend for learning about WordPress) was held the weekend of April 14th and 15th. The theme this year was Diversity. As part of that, there were several presentations on accessibility. The presentations were in addition to the keynote from Aimee Copeland.
The three sessions covered reasons for website accessibility, how to evaluate a website for accessibility, and how to build an accessible website.
Previous posts looked at automating Word by using the AutoCorrect option: typing a character string and having it replaced by a longer string of text, then adding a button to the Quick Access Toolbar to simplify the process.
Macros will be the next step to automate Word.
In its simplest form, a macro is simply a series of actions that are recorded and can then be executed again with a single command.
One thing a student might do repeatedly is to highlight information in a Word document, either for reference or later editing. It would be helpful to be able to quickly identify the highlighted sections. If it is a short document, a quick inspection can identify the highlights. A multi-page document will require more time and raises the possibility of missed sections.
By using the Advanced Find feature, all of the highlighted sections can be identified, selected, copied and then made available for pasting.
We’ll break this into two pieces. One, to walk through the steps to select the highlights, copy them and paste them into a new document.
The second part will be to add the Developer tab to the toolbar and then record the macro.
To begin, open your document with the highlighting you want to extract. The document we’ll use is The Taming of the Shrew, downloaded from Gutenberg.org, with selected passages highlighted.
The first step is to click the Find button in the upper right of the Ribbon on the Home Tab.
The Navigation pane will then appear. Click the down arrow next to the Search box and select Advanced Find from the dropdown list.
The Find and Replace Dialog box will come up.
Click the Button labeled More to get all the options.
Click the Format button in the lower left corner and select Highlight. The word Highlight will then appear under the Find What text box next to Formatting:
Click the Find In: button and select Main Document.
Close the dialog box and all of the highlighted sections will be selected.
Press Ctrl+C, then Ctrl+N to open a new Word document and finally Ctrl+V to paste the selections in the new document.
The new document is created with just the highlighted text that was selected.
The video below will also show the steps.
The second part will be creating a macro to do the same thing but with either a keyboard shortcut or a button.
When Microsoft created the Ribbon for Word, it was based on the concept of placing more options in front of the user. Most of the options are now readily available with only one click.
Unfortunately, the AutoCorrect option was buried fairly deep in the menus, requiring several clicks to bring up its dialog box.
What we’d like to do is add a shortcut to the Quick Access Toolbar so AutoCorrect is quickly available with one click.
Start by going to the Quick Access Toolbar and clicking the down arrow at its right edge so the menu appears. On the menu, select More Commands.
When the Word Options dialog box appears, click the Choose commands from box and select Commands Not in the Ribbon.
This will show all the commands that are NOT on the Ribbon, listed alphabetically in the left box, and the commands currently in the Quick Access Toolbar in the right box.
Scroll down to find the AutoCorrect option – it will be the one with the lightning bolt icon.
Click the Add button to place the AutoCorrect command in the Quick Access Toolbar. Then click OK.
The AutoCorrect button with its icon has been added to the Quick Access Toolbar.
AutoCorrect is now only a click away. You can see the steps in the video below.
So far we’ve covered the basics of Balabolka. Let’s look at some of the extra things that can be done with Balabolka.
The speech engine for Balabolka relies on an API (Application Programming Interface) built into the Windows operating system. The most current speech API is SAPI 5. SAPI 4 and its voices can also be installed on the computer, but those voices are not as high quality.
Microsoft David and Microsoft Zira are the default US English voices in Windows 10; Windows 7 had the 64-bit Microsoft Anna. There are other voices that can be installed, some free and some purchased. The voice’s architecture, either 32-bit or 64-bit, needs to match the operating system’s; otherwise, 64-bit programs might not be able to access a 32-bit voice.
As we’ve seen, the voice in Balabolka can be changed by going to the Voice menu and selecting from the list of recognized voices. This changes the speaking voice for the whole document. There is also a way to change the voice and its properties for selected lines, so you can, for example, alternate between male and female voices.
Because SAPI 5 allows the use of XML tags, there are a number of things we can change, including the voice, Volume, Rate, Pitch, Emph, and Spell.
For example, the XML tag to change the voice is <voice required="Name = voice_name">. The placeholder, voice_name, is replaced by the full name of the voice, such as Microsoft David Desktop. The completed tag is placed before the line you want the voice to speak (note that the quotation marks must be straight quotes, not curly ones, or the tag will not be recognized). The full tag will look like this:
<voice required="Name = Microsoft David Desktop">
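As a sketch of how these tags can be combined, the snippet below alternates voices and adjusts the rate, volume, and pitch line by line. It assumes both default Windows 10 voices, Microsoft David Desktop and Microsoft Zira Desktop, are installed; the numeric values are arbitrary examples.

```xml
<voice required="Name = Microsoft David Desktop">
<rate absspeed="-3">This line is read by David at a slower rate.</rate>
<voice required="Name = Microsoft Zira Desktop">
<volume level="60">This line is spoken by Zira at 60 percent volume.</volume>
<pitch absmiddle="4">And this line is spoken at a higher pitch.</pitch>
```

If the tags are read aloud as text instead of being applied, check Balabolka’s settings for the option that processes SAPI XML tags.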
“The question won’t be what devices are connected – it will be what devices are not connected.” Tony Fadell, CEO of Nest
I recently attended a symposium on the Internet of Things (IoT). Depending on how you count, this is either the Second or Third Industrial Revolution or the second part of the information age. Most likely, it is the merger of the industrial and information ages. It will also be disruptive.
The major theme for IoT is Connectivity.
Everything communicates with everything else.
When your thermostat talks with your alarm clock, and your alarm clock talks with your coffee pot, and the candlestick dances with… oops, that was another movie. But everything is connected, so when you set your alarm clock, the thermostat automatically sets the temperature, the coffee pot sets its brew time, and the driverless car is ordered to arrive at the appropriate time. All you need to do is say to Alexa/Siri/Cortana, “Get me up at 7:00 AM.”
When this technology becomes widely available and mainstream, it will be less expensive than devices built just for AT.
What does this mean?
It’s a given that the US population is aging. From 2000 to 2010, the population grew at a faster rate in the older ages than in the younger ages. Those over 62 accounted for over 16% of the population.
With more AT available at a lower cost, the older population will be able to stay in their own homes longer with less physical monitoring needed.
Worried that your aging parents didn’t take their meds today? Their medicine bottle will tattle on them – not only will it be able to tell if the cap has been opened, but it can count the number of pills as well.
Worried about them falling at home and not being able to summon help? Their floors and walls will be able to alert emergency services if they’re sensed not moving or in a strange position. They won’t even have to tell Alexa/Siri/Cortana to call for help.
They’ll also be more mobile: with driverless cars, they won’t have to worry about keeping their driver’s licenses.
There will also be a reduced need for doctor’s visits since wearable technology will be monitoring their vital signs.
It’s going to be a connected world whether we want it or not. For those with special needs, it will be a boon.