Courtesy of Dr. Frank Bowe, I am distributing the report of a conference on "Access to the Information Superhighway," held at Hofstra University earlier this year. To help one browse or print the report, I've inserted a line of 10 dashes, a page break, and the appropriate file name between the disk files I combined.

Jamal Mazrui
National Council on Disability
Email: 74444.1076@compuserve.com

----------
File: README

This disk contains the entire contents of the Hofstra University publication, "Access to the Information Superhighway" -- a report on the 1996 conference of the same name. If you wish to read the disk in the order of publication, proceed as follows:

1. Inside. The file "Inside" contains some quotes from people including then-Senator Bob Dole.

2. Contents. This is the table of contents. No page numbers appear here, as the page numbers were added later.

3. Intro. The file "Intro" contains the Introduction by conference chairman Frank Bowe. This outlines the conference and updates information, making the material current as of September 1996.

4. Krolak. The file "Krolak" contains the keynote address by Maureen Kaine-Krolak, then with the Trace R&D Center at the University of Wisconsin in Madison. The speech was based on a paper, then in draft form, that she and Gregg C. Vanderheiden of the Center prepared at the request of the National Council on Disability (NCD), a small independent federal agency. That paper was far longer than her speech, so the file is an abbreviated version of the paper. For the full document, contact the Council or the Trace Center. Contact information appears in the file "Where".

5. Larry. The file "Larry" contains Larry Goldberg's keynote presentation, which followed Krolak's on the morning of the conference.

6. Caption. The file "Caption" contains the captioning excerpts from the Federal Communications Commission's (FCC) late-summer 1996 report on captioning and video description. The video description material appears in the file "DVS". Mr. Goldberg discussed these issues in his keynote address.

7. DVS. The file "DVS" contains the video description excerpts from the Federal Communications Commission's (FCC) late-summer 1996 report on captioning and video description. The captioning material appears in the file "Caption". Mr. Goldberg discussed these issues in his keynote address.

8. Act96. The file "Act96" contains a summary of the luncheon address by the Hon. Daniel Frisa (Republican - Hempstead, NY) and excerpts from the Telecommunications Act of 1996, which he discussed.

9. Panels. The file "Panels" contains summaries of the three afternoon panels. One, presented by NYNEX, explained its universal design principles and other services; one, presented by two New York State government employees, explained state rules and policies; and the third, presented by educators, outlined local training programs for people with disabilities who are interested in learning how to surf the Internet.

10. NYNEX. The file "NYNEX" contains the NYNEX Accessibility and Universal Design Principles, as of August 1996. These principles were discussed in the NYNEX panel at the conference.

11. Where. The file "Where" contains contact information for sources such as the Trace Center, the National Council on Disability, and the National Center for Disability Services, as well as some World Wide Web and email addresses for more information about access to the information superhighway.

----------
File: INSIDE

[Inside Cover]

"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us." -- Western Union internal memo, 1876

"I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943

"640K ought to be enough for anybody." -- Bill Gates, 1981

"Mr. President, one can hardly open a newspaper or turn on the TV these days without hearing about the Internet--the worldwide hookup of thousands of computers.
For the price of a local phone call, an individual can retrieve information from almost anywhere on the planet.

"But for Holly Haines, the Internet is about a job. Holly lives in rural Pennsylvania. The nearest traffic light is 8 miles away--a lot like western Kansas where I grew up. Because of muscular dystrophy, Holly rarely leaves home.

"Several years ago Holly called my office, asking for some help in getting access to the Internet through a local university. She had a job offer at a national database company, but to call the company's computer directly every day would have meant huge, unaffordable long-distance phone bills.

"Well, Holly got on the Internet and went to work. And about a year ago the Microsoft Network called to offer her a job as supervisor of Chat World.

"Every day hundreds of network subscribers talk on-line in the virtual town square of Chat World. Life in the virtual world can get pretty wild, and Holly is Chat World's mayor and Miss Manners rolled into one. She oversees a staff of 75 people.

"By the way, Microsoft never had a clue that Holly was disabled when they hired her. And here's the important lesson. For Holly, and for millions of Americans with disabilities, the Internet is both a great equalizer and a great opportunity.

"In the future, we can expect even more astounding devices--such as systems that will allow blind people to freely navigate city streets using signals beamed from global positioning satellites overhead. And sophisticated voice recognition systems that will automatically closed-caption videophones of the future.

"The bottom line here is simple. For people with every kind of disability--whether sensory, cognitive, motor, or communication--technology can provide tools to speak, hear, see, learn, write, be mobile, work, and play--in short, to live as fully and independently as possible.
Technology increasingly allows people with disabilities to make the same choices about their lives--good and bad--that other Americans often take for granted."

-- Bob Dole, April 19, 1996

Each year, on the anniversary of his initial speech on the Senate floor, Dole gave a disability-related talk. The 1996 edition was on the information superhighway. The text was taken from the April 19 Congressional Record.

----------
File: CONTENTS

TABLE OF CONTENTS

Introduction by Conference Chairman Frank Bowe
Keynote Address by Maureen Kaine-Krolak
Keynote Address by Larry Goldberg
Excerpts from the Federal Communications Commission's (FCC) July 29, 1996 Report on Video Programming Accessibility
Luncheon Speech by Rep. Daniel Frisa
Excerpts from P.L. 104-104, the Telecommunications Act of 1996
Panels
NYNEX Universal Design Principles
Where to Learn More

----------
File: INTRO

INTRODUCTION

On January 11, 1996, Hofstra University hosted a conference, with generous financial support from NYNEX Corporation and technical assistance from the National Center for Disability Services. NYNEX has adopted principles of "Universal Design," by which it means the Corporation's services and products will be as accessible to and usable by people with disabilities as reasonably possible. The National Center for Disability Services has on its premises the Nathaniel H. Kornreich Technology Center, which showcases products and software that make information accessible for people with disabilities.

The name of our conference was "Access to the Information Superhighway." What do we mean by this title? The answer has two parts.

First, let us define "information superhighway." Maureen Kaine-Krolak, basing her Hofstra conference keynote address on a paper she wrote with Trace R&D Center director Gregg C.
Vanderheiden, explained that the information superhighway is a system of information networks and information services that connect people -- across the country and internationally -- and provide them with a variety of information resources and services.

Connect them how? Ms. Kaine-Krolak said that a full range of connectivity tools -- including terrestrial, satellite and wireless networks -- is in use. Information transmitted over those means includes print materials, sound recordings, graphics/pictures, movies, databases and software. People can get these media in the home, school, workplace, library or other community center. Other services available include telemedicine, distance learning, publishing and telecommuting. Soon, federal, state and local government services will be provided via these networks. Much of this, especially distance learning, is of great interest to universities such as Hofstra.

All this raises an important question. With what machines will people get this information? Ms. Kaine-Krolak reported that people will use the computer, the telephone, the television (via set-top boxes) and public information systems such as touchscreen kiosks. Many companies, universities, libraries and other organizations place kiosks in building lobbies. These are easy-to-use computers that offer directory assistance, maps, and other important information. At Hofstra, for example, students can use kiosks located throughout the campus to identify courses they want to take and to learn when those courses are offered.

This introduces the second part of the answer to what we mean by "access to the information superhighway." By the word "access," we mean that people with communication-related disabilities can use these machines -- and can understand the information they convey.

In her keynote address, Ms. Kaine-Krolak outlined how people will be using ever-more-simple devices to surf the 'Net. One example, already on sale in Europe and expected in the U.S.
early in 1997, is the Nokia 9000 terminal. Made by the Finnish telecommunications giant, the Nokia 9000 is a small, handheld digital phone that lets people send and receive alphanumeric pages, faxes, and email -- as well as voice communication. Users can also connect to the Internet, including data services ranging from sports scores to stock quotes. The challenge for those of us concerned with disability is to make sure this machine, and the others like it that follow, are or can be made accessible.

Access to the information superhighway also means captioning of videos that people get while they are surfing the 'Net. It also means video description of those video clips, for the benefit of blind 'Net surfers.

Today, assistive technology devices let people with a wide range of disabilities use computers. Thousands of special-purpose peripherals let people type by puffing-and-sipping, even by moving an eyebrow or other small muscle, or by talking to the computer, or by any of a dizzying variety of other means. Similarly, people who cannot read the screen can use speech synthesis (computer talk) to listen to the screen's display or convert the information to Braille and print it out. But all of this is with a personal computer. What if World Wide Web (WWW) home pages are received via a TV set, as with the Philips Consumer Electronics product, the Magnavox WebTV? On a kiosk? We are going to need adaptive capabilities (hardware, software, etc.) that assist with input and output on this wide range of machines.

And it means taking full advantage of the video capabilities of the information superhighway. For example, on July 26, 1996, Olympic gold medal swimmer Tom Dolan (who was in the Olympic Village in Atlanta) talked to Tommy Parker, a deaf Gallaudet University student. They connected via a new technology called Video Relay Interpreting (VRI). Everything Dolan said was translated by a sign-language interpreter, who happened to be in Houston, Texas.
Parker, sitting in the offices of the Federal Communications Commission (FCC) in Washington, DC, saw the interpreted words. His responses, in American Sign Language, then were translated by the Houston-based interpreter into speech so that Dolan could hear the answers ("Gallaudet Student and Olympic Gold Medal Swimmer Tom Dolan Demonstrate New High Tech Phone Service for People Who Are Deaf," FCC News Release, July 26, 1996). After the demonstration, others used the VRI capability with equal success.

Someday fairly soon, Tommy Parker and others who are deaf will be able to sign to each other, directly, over phone lines -- signing and watching just as hearing people speak and hear. We will need Integrated Services Digital Network (ISDN) and fiber-optic cable to make real-time signing on the phone a reality. The conference-closing address was delivered using ISDN and fiber: the conference chair, sitting in his office a half mile from the conference center, spoke and signed the address.

ISDN and fiber are not yet widely available. Over today's copper wires, though, deaf people can use a feature of many Internet services called, variously, "chat" or "phone". With it, users can have a real-time conversation with other people. The PC screen divides horizontally, with the top half reserved for one person and the bottom half for the other. It is as if they were having a TTY conversation: both type, and both read. No hearing is required.

However, when video or graphics are used, Ms. Kaine-Krolak said in her keynote address, access means that the information is readable by people who use speech synthesizers, whether because they are blind or have low vision or because they have a learning disability or other limitation that interferes with reading. Graphics are appearing everywhere now -- in public information kiosks, on cellular phones, on cash machines, and on everyday consumer electronics products.
Perhaps nowhere is this more irritating for speech synthesizer users than on the Internet, precisely because it used to be completely textual -- and very, very easy to read via a synthesizer. The recent influx of WWW style sheets, icons, pictures, videos and complex graphics has rendered much of the 'Net unreadable by synthesizers.

Fortunately, there is some good news. Graphical user interfaces (GUI) made more sense when PC's had limited memory than they do with today's highly visual WWW pages, which can overwhelm the viewer with page after page of graphics. Accordingly, some programmers are reconsidering GUI as a "friendly" user interface.

In addition, Microsoft's Internet Explorer 3.0 for Windows 95 and Windows NT 4.0, released in August 1996, is a web browser that has accessibility features built in. The browser is free (to download it, open http://www.microsoft.com/ie/download/ in your current web browser). Among the features are ALT text (text descriptions of graphics that speech synthesizers can read), variable font sizes, and the ability to disable style sheets. For more information, see http://www.microsoft.com/windows/enable/

Access to the information superhighway also need not require typing. Brooks Davies, of Santa Maria, CA, uses DragonDictate for Windows, with a sound card, to "type" by voice. With DDW, she navigates on the Internet and uses Prodigy and CompuServe, all by talking to her computer (see "Taming the Dragon," New Mobility, vol.1, no. 26, November 1995, pp. 32-33, 55).

In the future, access to the 'Net may also mean such now-exotic things as brain emission control. Researchers at Wadsworth Center for Laboratories and Research, in Albany, New York, are experimenting with human trials in which people control a cursor's movement with their minds.
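[As an editorial aside, not part of the original report: the ALT-text mechanism described above can be sketched in a few lines of present-day Python. A text-only or speech-synthesizer client renders each graphic by its author-supplied ALT text when one exists, and falls back to a generic placeholder otherwise. The sample page, file names, and placeholder string are illustrative assumptions.]

```python
# Sketch: how a text-only or speech-synthesizer client might render a page,
# substituting each image's ALT text (when provided) for the graphic itself.
from html.parser import HTMLParser

class AltTextRenderer(HTMLParser):
    """Collects visible text, replacing <img> tags with their ALT text."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            # An author-supplied ALT description is readable by synthesizers;
            # without one, all the listener gets is a generic placeholder.
            self.parts.append(alt if alt else "[image]")

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append(text)

def render_text(html):
    renderer = AltTextRenderer()
    renderer.feed(html)
    return " ".join(renderer.parts)

# Hypothetical page: one described graphic, one undescribed.
page = ('<p>Welcome. <img src="logo.gif" alt="BigYellow logo"> '
        '<img src="deco.gif"> Search below.</p>')
print(render_text(page))  # Welcome. BigYellow logo [image] Search below.
```

The design point is the one the report makes: the page author, not the reader's software, decides whether a graphic is accessible, by supplying or omitting the description.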
According to Jonathan Wolpaw and Dennis McFarland at the Wadsworth Center, the technology might help people who have had strokes or who have amyotrophic lateral sclerosis (ALS; Lou Gehrig's disease), as does theoretical physicist Stephen Hawking.

TELECOMMUNICATIONS ACT OF 1996

The Hofstra conference occurred just a few days before President Clinton signed into law the Telecommunications Act of 1996 (PL 104-104). U.S. Representative Daniel Frisa (R-NY), who served on the House-Senate Conference Committee that wrote the final version of the law, discussed the legislation at the Hofstra conference luncheon.

As Rep. Frisa told us, the Act has a number of remarkable provisions (these are presented, verbatim, in this report). One such feature, section 713, requires that most video programming be captioned. The only previous mandates for captioning were the Americans with Disabilities Act of 1990's (PL 101-336) call for all federally funded public service announcements (PSA's) to be captioned and section 504 of the Rehabilitation Act of 1973's (PL 93-112) requirement that video programming produced with federal grant funding be made accessible to people with hearing loss, via captioning if that is reasonable. Not many people realize that virtually all other captioning that is now performed is not due to any federal mandate. Thus, the new Telecommunications Act is a landmark law on the road to an accessible information superhighway.

Section 713 of the Act also directs the FCC to complete an inquiry within 180 days of enactment and to submit a report on that inquiry to Congress; the resulting report, dated July 29, 1996, is excerpted in this volume. The FCC must also publish regulations and implementation schedules to ensure that video programming is fully accessible through closed captioning within 18 months of enactment, that is, by August 8, 1997. Among other things, the Commission then will announce who bears the financial responsibility to pay for the captioning.
It seems likely, at this writing, that the owners and distributors of the video programming will have that burden.

The Television Decoder Circuitry Act of 1990 (PL 101-431) requires that all television broadcast receivers with screen sizes 13 inches or larger that were manufactured or imported on or after July 1, 1993 be capable of receiving and displaying closed captions. (However, the TDCA says nothing about captioning of broadcast programming.) As a result of that Act, between 50 and 60 million U.S. homes can currently receive closed captioning, according to the FCC. Each year, another 20 million to 25 million caption-ready TV's are sold in the United States. Most owners of such TV sets, however, probably don't know it -- and don't know how to use and benefit from captioning. In particular, captioning can help children and adults learning to read, and people learning English as a second language.

Section 713(f) requires the Commission to begin an inquiry no later than August 8, 1996 "to examine the use of video descriptions of video programming in order to ensure the accessibility of video programming to persons with visual impairments." The FCC has announced that it believes the pattern followed by closed captioning -- a pattern in which voluntary efforts paved the way for later mandates -- appears reasonable as a general guide for stimulating video description. Larry Goldberg focused his keynote address at the Hofstra conference on captioning and video description.

Section 255 of the Act requires that new telecommunications equipment and new telecommunications services be accessible to and usable by Americans with disabilities, if readily achievable. Section 256 permits the FCC to work with standards-setting bodies and others toward ensuring that new telecommunications interconnectivity standards also support accessibility.
(Interconnectivity refers to how local, long distance, and other telephone companies connect to each other so that customers experience a "seamless" service.) And section 251 forbids telecommunications companies to install network features, functions, or capabilities that do not comply with the guidelines and standards established pursuant to section 255 or 256, including accessibility guidelines. All of these parts of the Act will help us to create an accessible information superhighway.

A WORLD OF POSSIBILITIES

Many of us, when we think of the information superhighway, immediately think of the WWW. A good example of a WWW home page that is accessible to people with disabilities who need captioning or video description is "BigYellow," the Internet shopping directory service sponsored by NYNEX Information Resources Co. This home page offers access to more than 16 million phone numbers and addresses of businesses in 300 U.S. and foreign directories. It is advertiser-supported, as are the printed Yellow Pages, so it is free to Internet users.

Working with Larry Goldberg, who was a keynote speaker at this conference, NYNEX has designed the site for accessibility. For example, graphics have speech-synthesizer-readable text descriptions. In addition, a text-only version of the service is offered. NYNEX is also urging its advertisers to make their information accessible, including captions for any video materials. The BigYellow home page receives more than 300,000 "hits" daily, including many from people with disabilities. The address is: http://www.bigyellow.com Users can also connect via NYNEX Interactive Yellow Pages at http://www.niyp.com

The National Center for Accessible Media (NCAM), headed by Hofstra conference keynote speaker Larry Goldberg, maintains a list of web sites that use a symbol for accessible WWW pages. The Web Access Symbol (with its description) appears on the cover of this report.
The symbol is offered free of charge by NCAM to web sites that follow guidelines developed by individuals and organizations interested in accessibility for people with disabilities. Help on designing accessible WWW pages is offered by NCAM's Model Accessible World Wide Web Site project [http://www.wgbh.org/ncam]. Another excellent source for guidelines and information on access is the Trace R&D Center, at http://www.trace.wisc.edu

Another kind of accessibility is that of NYNEX's VoiceDialing(SM) service. This feature, which was demonstrated at the Hofstra conference by Dr. Sara Basson of NYNEX Science and Technology, allows people to dial the phone simply by speaking into the receiver. For individuals who have had a stroke, or who have arthritis or other conditions severely limiting fine-motor control in their fingers, or for people who are mentally retarded and have difficulty remembering phone numbers, VoiceDialing makes everyday use of the phone both possible and convenient.

Emergency calls ("911") are well-known to most Americans. If they place a 911 call from a regular telephone ("wireline," because the signals travel over phone lines), the local emergency service personnel can, in most instances, use "Enhanced 911" (E911). About 85% of communities with 911 services have E911, according to the FCC. E911 allows the caller's telephone number automatically to be identified (called "Automatic Number Identification," or ANI) and the phone's location to be known (called "Automatic Location Identification," or ALI). This information can speed assistance to the caller. Of course, it is a major help for people who cannot speak or who (perhaps because of mental retardation) cannot explain their location. However, if the person places the 911 call from a cellular ("wireless") phone, only the basic 911 services are available, not the E911 enhancements such as ANI and ALI.
In addition, wireless phones often are incompatible with Telecommunications Devices for the Deaf (TTY's or TDD's). The FCC recently ordered that the wireless industry, standards-setting bodies, and consumers get together to resolve these problems, so that TTY users will be able to gain access to 911 services via wireless phones. [Readers interested in more information about E911 and TTY's may request "CC Docket No. 94-102; RM-8143; Report and Order 96-264" from the FCC, 1919 M Street NW, Washington, DC 20554. The "Report and Order" also is available at: http://www.fcc.gov/Bureaus/Wireless/Orders/FCC96264.wp]

OUTLINE OF THIS REPORT

This report follows the conference schedule. We begin with the two keynote speakers, Maureen Kaine-Krolak and Larry Goldberg. That is followed by excerpts from the Federal Communications Commission's (FCC) July 29, 1996 report on captioning and video description, about which Mr. Goldberg spoke. Then the luncheon address, by Rep. Frisa, and the legislation about which he spoke are presented. Summaries of the afternoon panel discussions about telecommunications services, NYNEX offerings, and training programs for people with disabilities who are interested in learning more about the information superhighway follow. The NYNEX panel discussed the Company's universal design principles; these appear following the panel discussion summaries. At the end is "Where to Learn More," a compilation of resources.

The speeches have been edited. Where material has been deleted, usually because it became dated, this is indicated by a line having two periods: ..

Where material has been added, usually to clarify or elaborate upon something that was understandable in person but might not be in print, this is indicated by the use of brackets: [insert here]

The panel discussions are summarized, drawing from contemporaneous notes prepared by the panel coordinators.

Disks (3.5") with IBM-readable ASCII versions of this report are available upon request.
Send a blank disk to: Professor Frank Bowe, CRSR, 124 Hofstra University, Hempstead, NY 11550-1090. You may also send questions you have after reading the report or after listening to the disk. The editor may be reached via email at serfgb@hofstra.edu and by fax at 1-516-463-6503.

----------
File: KROLAK

KEYNOTE ADDRESS: MAUREEN KAINE-KROLAK

Mrs. Kaine-Krolak's speech was based on a much longer paper she and Gregg C. Vanderheiden of the Trace Center prepared. The full report to the National Council on Disability is available in electronic format via ftp, gopher, or WWW at: trace.wisc.edu and via mail from the Trace Center or from the Council, under the title "Access to the Information Superhighway and Other Emerging Technologies" (see "Where to Learn More").

I. Briefly, what is the NII, and what can I do with it?

What is the NII (National Information Infrastructure) or "information superhighway"? When people speak of the "NII" or "information superhighway," they often have different ideas of what the terms refer to. In general, the NII refers to a system of information networks and information services which will connect people across the country and internationally, and provide them with a variety of information resources and services. Connectivity will be possible in the home, school, workplace, community and beyond, and will be accomplished via terrestrial, satellite and wireless networks. Information that will be available includes print materials, sound recordings, graphics/pictures, movies, databases and software. Services such as telemedicine, distance learning, publishing and telecommuting will be possible, and federal, state and local government services will be provided via these networks.

Although many people equate the NII with the Internet (our current electronic information highway system), the envisioned and developing NII system is much broader than this.
The information superhighway will allow information to be accessed in a variety of environments and via a variety of technologies, including the computer, the telephone, the television (via set-top boxes) and public information systems such as touchscreen kiosks.

As NII development progresses, a diverse range of industries will be involved in development efforts. A sampling of industries represented includes:

* broadcast television;
* cable television;
* cellular and personal communications;
* computer hardware and software;
* consumer electronics;
* infotainment and advertising;
* publishing;
* local and long distance telephone companies;
* electric utilities;
* satellite providers;
* wireless cable.

There is an extremely wide variety of things that can be done on the current NII (let's call it a limited access highway), and even more that will be possible on the NII that is emerging (let's call that the information superhighway, plus all of its feeder roads and the driveways leading up to your house, school, company, etc.).

..

II. Components of the NII

In order to make the NII accessible and to better recognize existing and potential access issues, it's important to understand the different components. Although there are numerous ways of classifying the NII components, for this discussion the NII is basically divided into four categories:

1) Sources of information;
2) Transmission mechanisms (pipeline);
3) Translation and other services during the transmission process;
4) Viewer/Controllers.

1) Sources of Information

The first component of the NII is basically the information provider. These are the people who create the information or data which is sent over the NII to others. Information must either be produced in accessible formats, or in formats which can be easily translated into accessible formats.
Examples of information sources include:

* Publishers (books, magazines, newspapers, special newsletters);
* Libraries;
* Government services (information on employment, financial aid, taxes, hours of service, services available, etc.);
* Most commercial companies (information on products, prices, deliveries, stock, hours, etc.);
* Companies whose products can be sent "over the wire" (movies, advice, newsletters, product or topic information);
* Local schools (homework assignments, homework aids, schedules, meetings, school lunch menus, etc.);
* Universities (course schedules, financial aid, program descriptions, research opportunities, jobs);
* Clubs (announcements, newsletters, meetings);
* On-line information services (e.g., CompuServe, Prodigy, Genie, eWorld, etc.);
* Your family (plans, schedules, coordination of emergencies, group letters/updates, gift lists at holidays);
* You (things you want to sell, your resume, services you can provide to others, personal newsletters, advice or information on a variety of topics).

2) Transmission Mechanisms

Once you are connected to the information highway, you will have no idea exactly what channels the information will take, either coming to or going from you. In most cases, the information will travel over many different transmission mechanisms along the way. Some examples of different transmission mechanisms include:

* Telephone line;
* The Internet;
* Cable television wiring;
* Special fiber optic links;
* Microwave;
* High-speed telephone/data lines (ISDN);
* Satellite;
* Cellular telephone;
* Radio carrier or subcarrier.

3) In-Transmission Services

As more general NII services unfold, there may be many different ways that information is translated between the sender and the receiver. In many cases, these mechanisms will increase accessibility options.
Some examples of translations include:

* Translation of fax or e-mail to voice;
* Translation of voice to e-mail or fax;
* Translation of fax into e-mail;
* Translation of e-mail into fax;
* Translation from one language to another;
* Translation of TDD to voice, or voice to TDD (providing more direct, secure, and confidential communication);
* Frequency shifting (to better match the hearing profile of the receiver);
* Speech filtering (to increase the intelligibility of some types of speech).

With these translators, information can be made available in the form most convenient at any particular time (e.g., via voice for someone who is driving a car, but who might want the information in printed form if they were at home or at the office). It is also possible to convert information from a form which is inaccessible to some people into other forms which are accessible (e.g., converting a fax into electronic mail or voice for someone who is blind).

4) Viewer/Controller

This category includes all systems or devices used to receive and display information. (If you are sending information, you would be a source, as described above.) In order to be accessible, the viewer/controller must both be able to display the information in a form compatible with the person receiving it and have controls which are compatible with the individual's physical, sensory and cognitive capabilities. Viewer/controllers can take a wide variety of forms, including:

* Computers;
* Television sets (with special "set-top" adaptor boxes);
* Standard telephones;
* Telephones with video or touchscreens;
* Kiosks (public information systems which look like a touch-sensitive television screen mounted in a cabinet of some type);
* Ordinary fax machines;
* Cellular telephones with built-in display screens;
* Text telephones (TDDs);
* Special information appliances.

..

III. Where the NII is going

No one knows exactly where the NII is going, although a lot of money is changing hands as people try to find out.
There are, however, a number of trends which are clear, most of which have rather significant implications for individuals with disabilities, as will be covered in a later section. In this section, we will briefly introduce some terms and directions which will be used in later discussions.

Convergence

One definite trend is toward a converging of the different telecommunication and computing fields. While once we might be carrying a cellular phone, a daily organizer, and a pocket or notebook computer, and using fax machines or desk computers for e-mail, printing, or writing, we are now seeing these technologies merge. Small, portable devices will have the ability to connect you with others via voice, picture, transmitted documents (fax and e-mail), etc. In some cases, the functions will be carried out by small multifunctional portable computers. In other cases, the devices will look more like touchscreen telephones with a touch-sensitive display screen. Movies, telephone calls, documents, television shows, and more will all be going over the same channels, which may be run by the cable company, the phone company, via satellite, or combinations of these. The user, however, will not be aware which mechanism is being used, and most won't care, as long as they can maximize quality and minimize price. As this convergence occurs, we will begin thinking of fax machines, phones, e-mail, computers, etc., less as devices and more as functions. We may use multiple devices or the same device to carry out these functions at different times and in different environments.

..

IV. Implications for People with Disabilities

The NII has the potential to "level the playing field" in many areas of life for people with disabilities. Because of this high potential, the consequences of not making next-generation information technologies and systems accessible are very serious.
This section will consider the benefits and advantages which will be available to people with disabilities in a highly accessible NII. It also looks at what might be some of the disadvantages for individuals with disabilities should the NII not be accessible to them, as well as the potential barriers to providing access.

Advantages posed by the developments in the NII

General Advantages: First and foremost, the advances in the NII have the potential for providing a vast number of benefits to everyone, including people with disabilities. If the systems are designed in a way to make them accessible, they will yield the same myriad of benefits discussed in the previous sections of the report.

Disability-Related Advantages: In addition to advantages for the general population, the next-generation and emerging technologies will provide additional benefit for people with disabilities. These technologies (if accessible and usable) will be able to address some of the barriers and problems currently faced by individuals with disabilities, and afford them special advantages. These advantages include:
* Drastically increasing the ability for individuals with some types of disabilities to access and use information.
* Decreasing the personal isolation that some individuals experience because of restrictions in their ability to move about, to communicate, or to easily congregate with others sharing their interests and situation.
* Improving self-image, by allowing individuals to interact with others in a way which makes their disability invisible or irrelevant.
* Providing opportunities to participate in distance learning programs or receive medical services from a remote location when travel is difficult.

Individuals with mobility or travel impairments will be able to do their shopping, learning, travel, medical services, and work from their homes or other facilities.
In some cases, NII developments may simply allow individuals with mobility impairments to go to their local office and carry out business which normally would have required them and their colleagues to travel around the country, which would be more difficult for them. This new "mobility" can open new horizons for learning on all levels, and allow individuals to "travel," tour caverns, and explore other environments that they might not otherwise be physically able to explore. Individuals with physical manipulation difficulties can use the simulations or virtual environments to participate in activities they wouldn't otherwise be physically able to do. For example, an individual with severe athetoid cerebral palsy would not be able to easily construct mechanisms, operate delicate instruments, and carry out chemical experiments in the laboratory using fragile glassware. However, if the mechanisms, instruments, and glassware were all simulations on the screen (or in a virtual environment), these individuals would be able to participate in such activities using keyboard control or whatever other interface worked best for them. With today's technologies, it is already possible to create new circuits, designs, and experiments using only simulators, and then replicate them in real life with the same results. Thus, these strategies can be used not only in learning environments, but also in professional activities. Individuals with sensory impairments can access information which was previously unavailable to them. For example, the vast libraries of books which exist in printed form but which are inaccessible to people who are blind (except for the very small portion which is available in braille or on audio tape) will all be available and accessible when the primary mode for their distribution is electronic.
Individuals with cognitive impairments can request that information be presented at different levels of complexity, or in different primary formats, techniques which will find increasing use as we try to create systems which are appropriate for individuals with a very wide range of cognitive and language skills. The new systems also present the opportunity to have on-line help available at any time while a user is operating any of these technologies. This on-line help can take the form of computer-based help files, artificial intelligence assistance, or live contact with an expert or resource person (for an extra charge). In addition to all of the standard uses that NII technologies are designed for, it is possible to combine these new technical capabilities in ways that can provide even more powerful new capabilities and opportunities for individuals with disabilities. Three potential examples are provided in the sidebars labeled "Lean Cuisine," "Listening Pen," and "The Companion."

Lean Cuisine

As an example, an individual who was blind might sign up for a service offered by their phone company which would automatically convert any fax sent to them into electronic text, which is then sent to their e-mail; they might sign up for another service that provides voice access to their e-mail. Although such features might be used primarily by businesses wanting extremely high-quality OCR translation and access to e-mail by phone, the individual who was blind could also use these features to get access to the cooking instructions on the back of his Lean Cuisine (tm) frozen dinner. He would simply take the Lean Cuisine (tm) dinner and fax an image of its back (where the directions are) to himself. The fax would automatically be routed through the fax-to-email converter (since that is the way he has it set up) and would be converted to e-mail. He would then dial up his e-mail and have the fax read to him.
In this case, he would hear the directions from the back of his frozen dinner package read to him.

Listening Pen

An individual who is deaf may carry a small directional microphone which looks like a pen or is worn as part of their eyeglasses. When talking with someone, they would point the "pen" toward the individual's mouth. The speech would be picked up and sent out digitally over the net to powerful filter and speech recognition software running on a large computer. The result could be sent back and displayed on a small virtual display mounted to the deaf individual's glasses, but which projects the image so that it appears in focus in front of them. In this fashion, the person who was speaking would have their words literally "written all over their face." Using voice print technology, it would even be possible for the speaker to be identified, if for example the person was sitting at a meeting where different people around the room were speaking in a mixed fashion. By using remote computing connected via wireless network, the individual who was deaf could have much more powerful speech recognition algorithms working on the problem than they would be able to or care to carry with them all day. In fact, they may be paying a small service charge to use speech recognition algorithms that are owned and maintained by a network service bureau and which are much too expensive for the individual to afford, and change too rapidly for the individual to keep up with.

The Companion

One hypothetical device, called the "Companion," brings together many of these concepts to show how a personal assistive technology in the future could assist people with cognitive impairments. The device provides the following functions:
Calendar reminder system which can wake an individual up, remind the individual of appointments and schedule for the day, alert them to items on the schedule that are different from their routine (a doctor appointment, or a regularly scheduled event that doesn't occur on this day).
Cueing system which can help sequence the individual through their morning routine -- dressing, simple meal preparation, etc.
Artificial intelligence to adapt the above functions to what is actually going on, to help detect when the individual appears to be having a problem, and to help the individual problem-solve when it detects a problem (or when the individual indicates they have a problem by pressing a "Help" button).
Global Positioning System which uses satellite information to pinpoint the individual's location at any time, so that it can answer questions from the individual and better understand where it and the individual are.
Access to the city and major building maps so that it can help provide directions on request.
Camera and optical character recognition system which the individual can point at any sign or text, press a button, and have the sign or text read to them.
Infrared link for communicating with similarly equipped computers, kiosks, information systems, ATMs, etc.
Electronic "smart card/debit card" for cashless money transactions.
Communication link to a central resource service which has complete information about the user and can link the user to a live human resource person for more serious problem-solving and for all of the situations where the limited artificial intelligence of the current device is not able to help.

The use of such a system could best be exemplified via a short scenario: Tim is awakened in the morning by his Companion, which reminds him what day it is, and what the first thing he needs to do is. It also reminds him that he has a meeting tonight with his counselor, and that he is supposed to show up at the alternate worksite this morning. Tim has worked out a routine with his Companion where he sort of mumbles what he's doing as he's going through his morning routine, and the Companion notes where any important activity seems to be missing or out of order, and asks him simple questions that also act as reminders.
Tim walks out to the bus stop. As the buses pull up, he aims the Companion at the name on the bus windshield display and pushes the trigger; the Companion reads the name of the bus to Tim, and also notes the bus and can give Tim some cues about whether the buses seem to be ahead of or behind schedule. Tim's Companion also knows exactly which bus stop they're standing at (from the satellite Global Positioning System), and what time it is, so that it can be sure Tim is where he should be, and also give Tim some idea of when to expect the bus. When the proper bus arrives, Tim gets on board, voice-authorizes his smart card to transfer the proper fare to the bus, and takes his seat. On his way home from the meeting with his counselor, Tim is very tired, falls asleep on the bus, and rides past his normal transfer stop. The Companion detects this, and tries to wake him, but it is tucked between Tim and the wall of the bus, where it is muffled, and Tim doesn't hear the signal over the noise of street construction. When Tim wakes up, he is confused in an unfamiliar neighborhood. He panics and gets off the bus, which drives away. He further panics and presses the Help button on his Companion. The Companion runs through a standard set of questions and comments to calm Tim and help him apply his own problem-solving skills. Tim aims the Companion at a number of street signs, pushing the button to have them read to him. The Companion also knows where they are, but it is very late, and the Companion does not have any information for this neighborhood with regard to the safety or potential resources for Tim. It advises Tim to call in, so Tim pushes the button to contact the central resource point. A specially trained resource person appears on the Companion's screen; by using the Companion's camera, Tim is also visible to the resource person.
All of Tim's information is also displayed directly on the screen in front of the resource person, along with whatever information the Companion can provide on the situation, including Tim's exact location. The resource person directs Tim to a local building that will be safe, and calls a cab, since there are no buses which will easily get him back home from that location at this time of night. Such a system might enable independent living for a large number of young or old individuals who are essentially able to live on their own if they have some mechanism for helping them over rough spots, helping them with specific activities they may have difficulty with (such as reading), and helping them get out of situations when things go wrong. Great care must be taken, however, in designing these systems, to ensure that they function in the form of a benevolent companion who facilitates and amplifies the natural decision-making skills of the individual, and who operates in either a facilitative or suggestive mode rather than directing the individual. While many disabilities can be addressed through the use of a prosthetic device which replaces the lost function with an artificial version (e.g., an artificial arm, an artificial ear, artificial vision), trying to replace an individual's cognitive abilities with an artificial brain risks a situation where we are providing an artificial intelligence with a body, rather than providing an individual with intelligence. However, a device which helps to strengthen or maximize the abilities of the individual while minimizing the impact on free will and decision-making (or perhaps enhancing it) could significantly facilitate their functioning and enhance their opportunities in life. We don't have devices such as the Companion today, but we do have many of the components.
Also, the same principles apply in terms of how we apply the assistive technologies, and how we set up the daily routines and support structures of individuals with cognitive impairments. Further, with the rate at which technology, miniaturization, and artificial intelligence are progressing, it is likely that we will have all of the capabilities described here early in the next century (which is just a few years off) -- and long before we are ready to program and effectively apply them.

Disadvantages posed by the developments in the NII

General Disadvantages: Whether it is because they cannot afford access, or because they do not physically have access, anyone who is unable to access and use these new communication, information and transaction systems while their colleagues (and competitors) can will be at a very severe disadvantage. As noted above, however, this loss generally has greater impact on people with disabilities, due to their inability to effectively use many of the alternate strategies available today.

Disability-Related Disadvantages: In addition to this large potential disadvantage, there are a number of other disadvantages that arise which are unique to people with disabilities. One problem is that the extremely rapid rate of development in this area is making it difficult or impossible for third-party vendors to create access technologies to keep up with the new information technologies. For example, individuals who make screen readers for people who are blind have had extreme difficulty just keeping up with the different computer operating systems as they have been released. On the NII, we see even more rapid development, with Java, PDF, and Macromedia Director all being released as new presentation technologies on the Internet over a span of six months, with no access solutions, or even clear definitions of the potential access approaches, existing for any of them.
Part of this problem can be addressed by having accessibility built in rather than relying on third parties to try to catch up and add it later. However, much work needs to be done in this area, as discussed below. Another problem is that some of the multimedia technologies are being developed in a manner which makes them extremely difficult to access for people with any type of disability. Strategies are being developed to address this, but awareness of the problem by those who are working on these new multimedia technologies is still extremely limited. If accessibility is not built in, the cost faced by individuals with disabilities in trying to secure the various third-party access hardware or software adaptations can be significant, and often exceeds the price of the software or hardware for which access is sought.

Barriers and Potential Barriers to Access and Use by People with Disabilities

Standard socioeconomic status barriers: Again, one of the common barriers faced by many individuals with disabilities results from the fact that many of these individuals are unemployed or under-employed, making it difficult for them to access and use some of these new technologies. However, this is not a problem unique to people with disabilities, and is therefore not explored in any greater depth here, since it is treated in many other studies.

Complexity: A current problem with these systems is the level of complexity which is represented in their designs. Although some progress is being made in this area, the introduction of these electronic systems is currently increasing the complexity of carrying out many tasks. For example, individuals who were previously able to operate the simple fare machines are now finding the more modern and flexible, multi-purpose machines much more complicated and difficult (and in some cases impossible) to operate.
The same thing is happening with phone systems; individuals who used to be able to call up and make a doctor's appointment or ask a question are finding themselves lost in touchtone-based phone routing systems which they do not understand.

Graphic user interfaces: Graphic user interfaces represent a two-edged sword. On the one hand, they can greatly decrease the complexity and increase the familiarity of new systems for individuals who can see. At the same time, they can pose substantial barriers to people who are blind and must use screen reading technologies to access the systems. Interestingly, graphic user interfaces on public systems do not present the same barriers to people who are blind. On these systems, the individual is unable to install their own screen reading software in any event; as a result, voice-based access must be built directly into these systems. When the access is built in, the problems faced by screen reading software do not exist, since the individuals who program the display of graphics on the screen are also the ones programming the voice access. As a result, they are completely aware of what is presented on the screen (while screen reading software packages have to try to figure it out on their own) and can create systems which present the visual information auditorially. Still, care needs to be taken if information is presented in pictographic form which cannot be easily expressed in words. For example, if a kiosk presents the temperatures across the country only as a color on a map of the United States, with different colors representing different temperatures, it can be difficult to figure out how to easily allow the information to be accessed by someone who is blind. For most applications today, however, access can be provided to even heavily graphic-based user interface systems if the accessibility is built directly in.
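The built-in approach just described can be sketched in a few lines of code. This is a hypothetical illustration, not any real kiosk's software: the class, file names, and labels are invented. The point is simply that the programmer supplies a spoken label at the same moment as each graphic, so the built-in voice mode reads exactly what was intended instead of guessing from the screen contents.

```python
# Hypothetical sketch of built-in voice access for a graphic kiosk.
# Every element the programmer puts on screen is registered with BOTH
# its graphic and a spoken label, so the speech layer never has to
# infer meaning from pixels the way add-on screen readers must.

class KioskScreen:
    def __init__(self):
        self.elements = []  # ordered (graphic_id, spoken_label) pairs

    def add_element(self, graphic_id, spoken_label):
        """Access is built in: the spoken text arrives with the graphic."""
        self.elements.append((graphic_id, spoken_label))

    def spoken_walkthrough(self):
        """What the kiosk's voice mode would read aloud, in order."""
        return [label for _, label in self.elements]

screen = KioskScreen()
screen.add_element("btn_fare.png", "Buy a fare card. Press to begin.")
screen.add_element("map_temps.png",
                   "Temperature map. Press a region to hear its forecast.")

for line in screen.spoken_walkthrough():
    print(line)
```

Note that even the color-coded temperature map, the hard case mentioned above, is manageable here, because the same person who chose the colors also wrote the spoken label.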
Touchscreen kiosks and products: Another potential barrier to access is the use of touchscreens on kiosks, portable electronic devices, cellular phones, etc. This type of interface has been particularly difficult in the past for individuals who are blind, since the number and location of the "keys" or hot spots on the screen continually change during use, and there are no tactile indicators of their number or location. Recently, however, strategies have been developed to allow individuals with reading difficulties, low vision, and even complete blindness to directly and efficiently access and use touchscreen-based kiosks. Techniques such as the Talking Fingertip from the Trace Center at the University of Wisconsin-Madison provide a means to access virtually all of the information and interface types on present-day touchscreen-based kiosks. Industry-standard infrared data links are also now common on most new computers, and can allow individuals to easily link their assistive technologies to kiosks and operate them.

..

Sound: One potential barrier is the increased use of sound on systems that used to be silent. Signs, building directories, and computers used to be completely silent or involve fairly simple alerting sounds. As text-to-speech, digitized speech, and other sounds are increasingly incorporated into these systems, individuals who have hearing impairments or are deaf are finding it increasingly difficult to use these systems successfully. Trying to use add-on technologies is usually of limited value. Unless access is built into these systems directly, it is unlikely that most of them will be accessible or usable by the very large and growing number of individuals with hearing impairments. Fortunately, strategies exist to provide access to almost all types of auditory information used in information and transaction systems.
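One such strategy, making every auditory cue redundant with a visual one, can be sketched as follows. The event names and cue text here are invented for illustration; what matters is that the visual equivalent is defined alongside the sound at design time, rather than reconstructed later by an add-on device.

```python
# Hypothetical sketch: each event a system can emit is defined with BOTH
# an auditory cue and an equivalent visual cue, so no information is
# carried by sound alone. All names here are invented for illustration.

EVENT_CUES = {
    "card_accepted": ("chime.wav", "Card accepted."),
    "error":         ("buzz.wav", "Error: please try again."),
}

def notify(event, sound_on=True, visual_on=True):
    """Return the cues the system would emit for one event, honoring
    the user's preference settings."""
    sound, text = EVENT_CUES[event]
    cues = []
    if sound_on:
        cues.append(("audio", sound))
    if visual_on:
        cues.append(("visual", text))
    return cues

# A user who cannot hear still receives the full message visually:
print(notify("error", sound_on=False))
# [('visual', 'Error: please try again.')]
```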
Advances in speech recognition also hold promise for providing increased access to communication systems. The use of these strategies, however, is spotty at present, except where there is some type of legislative mandate.

Animation and interactive systems: As discussed earlier, animation is increasingly being used to create more interesting and attractive multimedia systems. Although the gratuitous use of animation is usually viewed as being of dubious lasting value (particularly if it slows down the use of the system), the use of animation and graphics within interactive sites and situations is growing. While some situations can be addressed using strategies similar to those discussed above under "Kiosks and Virtual Reality," there are other interactive environments where effective strategies have not yet been identified. This is a serious concern, and one that needs to be addressed if individuals with low vision or who are blind are not to face significant barriers in education, training, and employment environments.

Sealed nature of public systems (making them difficult to adapt): As mentioned above, public systems present a unique access problem. Unlike personal systems, which can often be adapted to meet the individual needs of the owner, public systems must be directly usable by individuals with a wide range of abilities or disabilities, without requiring modification. For example, it would not be possible for individuals with a physical or sensory disability to open an ATM and reprogram the computer inside to install a screen reader or other specialized software to provide access. Similarly, those in charge of electronic building directories, fare machines, or even computers in libraries generally do not want individual users to be opening or otherwise modifying their systems. Even the cable companies who install set-top boxes on individual television sets in homes generally do not want their users to open or physically modify or reprogram those boxes.
Where these boxes will be used to buy and sell products over the air or to provide information and services, this concern becomes even greater. Finally, for many small hand-held or mass-produced products (such as the communication tablets discussed previously), it may not be easy or even possible to open or modify the system.

Two forms of accessibility

In all of these cases, accessibility needs to be built into the product. This accessibility/usability usually takes two forms:
a) A set of features or optional settings which allow the product to be directly used by individuals with a wide range of abilities. For example, the set might include a volume control and headphone jack; a feature to cause all speech to also be displayed on screen; a mechanism to allow any words that appear on screen to be spoken for those with reading difficulties; or the Talking Fingertip speed list with voice output to allow access by individuals with visual impairments or blindness.
b) A mechanism to allow individuals with severe or multiple disabilities (e.g., deaf-blindness, very high-level spinal cord injury, etc.) to easily connect special assistive input or display devices without having to open or modify the product (e.g., using a very low-cost infrared link).

Recent work has shown that building cross-disability access features into a product can involve very little or no increased hardware cost, and very low software costs, if considered from the beginning of the design process.

..

General Accessibility Guidelines

Listed below are general access strategies which can be applied across all information systems, along with the major disability groups which would be affected by the use of these strategies.

1) Visual Information
For all information which is presented visually (or stored as an image), have an alternate or supplemental presentation (or storage format) of the information which does not require vision (e.g., auditory format or ASCII text).
* Blindness
* Cognitive/language impairment

2) Auditory Information
For all information which is presented auditorially (or stored as a sound file), have an alternate or supplemental mode of presentation (or storage format) which does not rely on hearing (e.g., visual mode or ASCII text file). (Auditory information includes beeps or any other sounds that convey information.)
* Hearing impairment, deafness
* Cognitive/language impairment

3) Eye-Hand Coordination Controls
For all controls which require eye-hand coordination (mice, trackballs, ordinary touchscreens), provide an alternate or supplemental mode which does not require eye-hand coordination (e.g., keyboard, talking fingertip touchscreen).
* Blindness
* Physical impairment

4) Physical Requirements
For any input or control mechanisms which require fine movement control, physical dexterity, reach or strength, provide an alternate mechanism which does not. Avoid mechanisms which require simultaneous activation of two buttons, latches, etc. Avoid timed responses, or provide a mechanism for making the times very long.
* Physical impairment
* Cognitive/language impairment

5) Operation
Keep the operation of the device/system as simple, predictable and error tolerant as possible.
* Cognitive/language
* Low vision
* Physical disability
* Blindness

6) Connectivity
Wherever possible, provide an external standard connection point which can be used to connect alternate displays and/or alternate input/control mechanisms (e.g., infrared link or RS232 port with alternate display and control capability).
* Blindness
* Physical impairment

..

----------
File: LARRY

KEYNOTE ADDRESS - LARRY GOLDBERG

Before we get too carried away into the FUTURE, let's remember a little about where we have come from and how ACCESS issues were handled in the past. I define the past as everything up until [check watch] [9:20am today]. Remember TELEVISION?
In so many of the discussions about the future of our Information Superhighway, it's almost as if this all-pervasive medium has already gone the way of the dinosaurs. And there is some truth there. This year, the three major commercial broadcast networks will command only 54% of the prime-time audience, down from almost 90% 20 years ago. That audience is being stolen not only by Fox, CNN, and home video, but increasingly by America On Line, CD-ROMs and the World Wide Web. When we only had CBS, NBC, and ABC to worry about, access issues (at least for deaf and hard-of-hearing viewers) were much simpler. If those guys didn't close-caption their programs, you knew who to complain to. If they did, all your worries were taken care of. Starting in 1980, they began captioning their daily schedules and today they caption a large majority of their programs (large at least in comparison to some of the cable networks). Speaking of cable, I think it's time to stop a minute and look at some captioning on cable - in this case - MTV. [SHOW PAULA ABDUL VIDEO] [This video shows her singing, accompanied by animated Disney characters - ed.] Well, that's not exactly what captioning on MTV looks like. It's more what it would look like if we had a whole week to caption every music video that comes through the offices of WGBH's Caption Center. But that video does demonstrate what can be done with today's closed captioning technology. TV can also serve blind and visually impaired people. WGBH started a service called Descriptive Video in 1990. DVS adds narrations to broadcast programs and home videos so that blind people can follow and enjoy TV too. DVS adds these descriptions on the Second Audio Program [SAP] channel of stereo TV broadcasts of some programs on PBS (like NATURE, MYSTERY, MASTERPIECE THEATER, AMERICAN PLAYHOUSE, and MISTERROGERS). We'll be describing programs on some cable channels as well this year. You need a stereo TV or VCR to tune in these added descriptions.
And the home videos are available by mail order and at some Blockbuster stores, and no special SAP equipment is needed for those tapes - they are "open" described.

..

Okay, back to the future. You've heard from Maureen about what the Internet is and can be. For the moment, I will be referring to the World Wide Web when I say the Internet, since that's what's hot these days. How many of you have surfed the Web yet? [About one-third of the audience raises a hand.] Many of the access issues I'll talk about are the same for the commercial online services like America On Line, CompuServe and Prodigy. The access issues are clear: for people who are blind or visually impaired, navigation through graphic user interfaces is problematic, and anything that is not text-based must have alternate presentations. That includes still graphics, animations, and digital movies. For people who are deaf or hard-of-hearing, the problems are just now beginning: on the Web there is a growing use of audio clips and digital movies with sound. Those applications must have text transcriptions or captions as alternative outputs or displays. For people who have speech impairments, I haven't seen a web site yet that requires speech input, but I have no doubt that we will see that within the next year as well. I'm going to be showing some things on the World Wide Web using a relatively slow Macintosh PowerBook with a 14.4 modem, so bear with me. I'm using Netscape 1.1 as my browser and I have "auto load images" turned off. Here's a great web site with fantastic content - from the Exploratorium science museum in San Francisco. They recently asked for my help in making their web site accessible, and here are some of the problems I encountered. [EXPLORATORIUM WEB SITE] When you first log onto a web site, if you don't want to have to wait for the data-intensive graphics to download, you can turn off the graphics and see the "alt = text" descriptions of the graphics.
These text tags can be read by a blind person's screen reader and so are essential for making graphics accessible. A small icon which means "there is a graphic here if you want to see it" acts as a placeholder. If you want to check out the graphic, you can always click on the icon to see what's there. Notice these graphics are just labeled "Graphic" - that's not very descriptive. They aren't hyperlinks to anything, so I guess the designer of this Web site decided these graphics are just decorative and don't need descriptive tags.

By the way, I hope you don't get the impression that I am singling out this web site - I'm not, and I know they want to be more accessible. This just happens to be a site so rich in content that it makes for a good demonstration. It's still very early in the game of making the Web accessible.

Let's see what's under this graphic... [CLICK ON WHAT'S NEW IN THE WORLD OF SCIENCE] Aha, "What's New in the World of Science?" Good question - let's click on it and find out... Well, immediately we run into a few significant problems that should be obvious. First, a black background might be dramatic, but it can be very hard on anyone's eyes, let alone a person with low vision. Second, the text is awfully small and uses a serif font. You can set Netscape's defaults to use backgrounds and font types and sizes of your own choosing, but with so many new users getting online these days, it would be nice if their first encounters didn't require the setting of special user parameters.

Then look at the columns - a screen reader usually doesn't know from columns and will read straight across the page. For example, to a screen reader this says: "In the November edition of "What's New" we ran an article on the confirmation of the existence of a planet The Exploratorium's Paul Doherty is interviewed by KRON, San outside of our solar system.
The article Francisco's NBC, affiliate during the Exploratorium's Galileo made references to two independent teams Jupiter event." Got it? Didn't think so.

Notice this graphic in the left-hand column. No alt text tag - what could it be? Let's click and find out. Aha - a headline in the form of a graphic: "Exploratorium Audio Interview with Astronomer Geoffry Marcy." Why was this headline an inaccessible graphic instead of accessible text? You would have to ask the webmaster, but I suspect they wanted a fancier-looking font. Here are two more graphics - let's see what we have here. Another headline and a photograph. Well, at least the photo has a very descriptive caption: "The Exploratorium's Paul Doherty is interviewed by KRON, San Francisco's NBC affiliate, during the Exploratorium's Galileo Jupiter event." The headline reads: "NASA's Galileo Mission Successful."

Finally, we have one of the most interesting innovations on the Web - RealAudio. Some of you may be aware of the fact that clips of speeches, sounds, and music can be downloaded and played back from a Web site. Depending on the length of the clip, you may have to wait many minutes for the entire clip to be downloaded to your computer. RealAudio (and a similar technology called TrueSpeech) will make audio available to you virtually as soon as you click the icon. It uses a data-streaming technique which feeds you the audio data a little bit at a time and starts playing it immediately. Let's click on this interview with the astronomer to see how it works - or if it works.

That is a great way to get a feeling of immediacy - a live interview with an astronomer! Of course, if you're deaf, it's pretty useless. This interview is 9 minutes long, and it wouldn't be too much of a burden to post the text transcription of the interview and make it available with the click of a mouse.
Of course this would serve the needs of deaf people, but many students, teachers and researchers would probably love to have that transcript available for pasting into reports and other documents and for circulating by e-mail to friends with similar interests. The text file would probably be about 20K, while the sound file, if you wanted to send it to someone, would probably be about 20 megabytes and would require special software to listen to it.

Okay, let's look at a Web site that has tried to make things more accessible. [GO TO WGBH] Whaddya know - it's WGBH! At the top of the first page is a line - "Access for users with Disabilities" - which gives some background and information on the access features. This graphic here is an image map, meaning that you click on various parts of it to go elsewhere in this site. Completely inaccessible to blind users. But as you can see down here, one of the first rules for any web site - offer a text version of a graphic page. Voila! Accessible to screen readers.

Let's look at "Programs we produce." The graphic has a proper tag. And here's a list of programs from WGBH. Let's see what NOVA has to show us. Here's the Nova logo, tagged as "Nova Logo." But let me download the logo to show you what an understatement that is. Oooh, aaaah. Quite a picture. Probably cost us a mint for the rights to use this picture. It would be a shame if blind people didn't get a chance to appreciate it as well.

In the upper left corner, next to the graphic, you'll find the letter "D". If you click on the D, you get a description. It says: Image Description: In star-filled outer space, a title reads, "NOVA, The Star of Science Television." A glowing supernova gleams through a crack between the large black letters. So, you have a choice of a short description built right into the graphic as an alt text tag, and a longer description you can hyperlink to for a more expansive description.
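The alt text tags and "D"-link descriptions discussed above can also be checked mechanically. As a minimal sketch (not part of the conference materials - the class name and sample page are hypothetical), here is the kind of script a webmaster could run today to flag images with missing or unhelpful alt text:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a descriptive alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []   # images with no alt attribute at all
        self.vague = []     # images whose alt text is an unhelpful placeholder

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        src = attrs.get("src", "(no src)")
        if alt is None:
            self.missing.append(src)
        elif alt.strip().lower() in ("", "graphic", "image", "picture"):
            self.vague.append(src)

# A hypothetical sample page illustrating the problems described above.
sample = """
<html><body>
<img src="nova_logo.gif" alt="Nova Logo">
<img src="headline.gif">
<img src="divider.gif" alt="Graphic">
</body></html>
"""

checker = AltTextChecker()
checker.feed(sample)
print("Missing alt text:", checker.missing)   # headline.gif
print("Vague alt text:", checker.vague)       # divider.gif
```

A report like this catches both failure modes seen at the Exploratorium site: images with no alt text at all, and images tagged only "Graphic."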
Now, going back to the Home Page, I'll click on "(access for users with disabilities)" and go down to the section called "Instructions for Deaf and Hard-of-Hearing Users." Here is one of those video clips I mentioned that you can download. It's 2.7 megabytes and would probably take about 10 minutes to download by 14.4 modem. But notice the CC here. If I click on it, I get another page that again gives me the option to download the clip (which, by the way, has a soundtrack as well). Or, I can scroll down and read the TRANSCRIPT of the audio from the clip. So with this technique we serve three groups of people: deaf people who need the transcript for access reasons, people who don't have audio-playback capability in their computers, and impatient people who don't want to wait to find out the content of the clip. Universal design at its best.

There are more elegant ways to caption and describe these sorts of clips, which I will show you in a minute. But first, let me point you to three Web sites that can give you detailed information on how to make a web site accessible:

First is the Trace Center's document on Web access. It can be found at: http://www.trace.wisc.edu

Second is the Center on Information Technology Accommodation, a division of the federal government's General Services Administration. They have an excellent document on web access, and it can be found at: http://www.gsa.gov/coca

And a recent discovery for me is the Adaptive Computer Technology Centre in Canada, which has set up an entire web site that demonstrates accessible web techniques: http://www.doe.ca

For a good place to go for links to disability sites all over the world, try WebABLE! at: http://www.webable.com

Now, I want to show you some good news. A new service about to be launched by Intel is called InterCast. It is a marriage of TV and the World Wide Web. Here's how it will work: Intel has developed a computer board which will enable you to watch TV on your computer screen.
They've also established partnerships with some of the major TV networks and producers in the country (WGBH is one of them). These producers will encode web page data in their TV signals, so that as you watch the TV program in a window on your computer, web pages which relate to the program will pop up right next to the video window. WGBH [began] providing this service for Nova and This Old House [in April 1996]. Let me show you a screen shot which demonstrates what InterCast will look like. Over on the left is the video window and on the right is the web browser. On the bottom of the screen is additional information that has been downloaded. But I especially want to point you to this little button here on the left, part of the TV controls. It says CC and it turns on the closed captions which will also be downloadable to your hard disk. We're also working with Intel to be sure any existing descriptions are available as well. Okay, I mentioned that there are more elegant ways to caption and describe things on the Internet. Here are two clips which show you where some solutions will come from. We have been experimenting with them over the past few months and will be stepping up our efforts because of a grant we just received from the Telecommunications Funding Partnership for People with Disabilities. It will enable us to make GBH Online a "Model Accessible World Wide Web Site" and to then spread the word to other sites around the web. This clip, provided by the Berkeley Mac Users Group, shows two top computer programmers who are the victims of some sort of strange stress syndrome. [WUFF] Now, notice how I can pull down this menu and turn on or off the video, sound, and text tracks. I will turn on text and now you will see closed captioning of a QuickTime movie. 
With this technology, multiple text tracks in multiple languages can be prepared and chosen according to user preferences, and additional audio tracks (such as a description track) can be made available as well. You can even use the text track as a search engine. Here I will ask it to find the word "Trout" and it takes me right to that frame. And recently posted on GBH Online was this demonstration of description - you may recognize the movie. [Lion King] So, there are some technologies that can be used to make the Internet more accessible. We'll be spreading the word about them and I hope you will, too.

----------
File: CAPTION

Before the
FEDERAL COMMUNICATIONS COMMISSION
Washington, D.C. 20554

In the Matter of                          )
                                          )
Closed Captioning and Video Description   )   MM Docket No. 95-176
of Video Programming                      )
                                          )
Implementation of Section 305 of the      )
Telecommunications Act of 1996            )
                                          )
Video Programming Accessibility           )

REPORT

Adopted: July 25, 1996    Released: July 29, 1996

..

I. INTRODUCTION

1. Section 713 of the Communications Act of 1934 ("Act"), as amended by the Telecommunications Act of 1996 ("1996 Act"), directs the Commission to conduct inquiries into the accessibility of video programming to individuals with hearing and visual disabilities. This report is issued in compliance with this statutory requirement. It is based on information submitted by commenters in response to a Notice of Inquiry ("Notice") in this docket and publicly available information.

A. Statutory Requirements

2. Section 713(a) requires the Commission to complete an inquiry within 180 days of enactment of the 1996 Act to ascertain the level at which video programming is closed captioned. A report on the results of this inquiry shall be submitted to Congress.
Specifically, Section 713(a) directs the Commission to examine the extent to which existing or previously published programming is closed captioned, the size of the video programming provider or programming owner providing closed captioning, the size of the market served, the relative audience shares achieved and any other related factors. The Commission also is required to establish regulations and implementation schedules to ensure that video programming is fully accessible through closed captioning within 18 months of the enactment of the section. The Commission will initiate the rulemaking required by the Act with the issuance of a notice of proposed rulemaking in the next several months. .. 4. Section 713 is "designed to ensure that video services are accessible to hearing impaired and visually impaired individuals." The legislative history of this section states that it is Congress' goal "to ensure that all Americans ultimately have access to video services and programs particularly as video programming becomes an increasingly important part of the home, school and workplace." The House Committee recognized that there has been a significant increase in the amount of video programming that includes closed captioning since the passage of the Television Decoder Circuitry Act of 1990 ("TDCA"). Nevertheless, the House Committee expressed a concern that video programming through all delivery systems should be accessible to persons with disabilities. .. In this report, we do not address issues ... regarding proposals for specific rules, standards and implementation schedules for closed captioning, as they go beyond the scope of the inquiry requirements of Section 713(a). These matters will be considered in the context of a subsequent notice of proposed rulemaking that we will issue to consider proposed rules to fulfill the Congressional mandate that the Commission adopt rules to implement closed captioning requirements by August 8, 1997. .. 10. 
This report encompasses all types of available video programming with closed captioning and video description delivered to consumers through existing distribution technology. We report on the availability of broadcast commercial and noncommercial networks, basic and premium cable networks, syndicated and locally produced broadcast and cable programming with closed captions and video description. In addition to over-the-air broadcast television service and cable television service, we examine the availability of the delivery of closed captions and video descriptions to consumers by other multichannel video programming distributors ("MVPDs"). Among these distributors are direct-to-home ("DTH") satellite services, including direct broadcast satellite ("DBS") services and home satellite dishes ("HSD"), wireless cable systems using the multichannel multipoint distribution service ("MMDS"), instructional television fixed service ("ITFS") or local multipoint distribution ("LMDS"), satellite master antenna television ("SMATV") and local exchange carrier ("LEC") video services. B. Summary of Findings 1. Closed Captioning 11. Captioning of video programming has existed since the early 1970s. Through the efforts of Congress, government agencies and a variety of private parties, captioned video programming has grown over the past 25 years so that it is now a common feature associated with the vast majority of popular prime time broadcast television programming. Congress' passage of the Americans with Disabilities Act of 1990 ("ADA") requiring the closed captioning of federally funded public service announcements, the Television Decoder Circuitry Act of 1990 ("TDCA") and the 1996 Act reflect a continuing national commitment to ensuring "that all Americans ultimately have access to video services and programs particularly as video programming becomes an increasingly important part of the home, school and workplace." 12. 
Beneficiaries of Closed Captioning: The principal beneficiaries of closed captioning are the approximately 22.4 million persons who are hearing disabled. In 1995, 25 million decoder-equipped television sets were sold in the U.S. It is estimated that between 50 and 60 million U.S. homes can currently receive closed captioning. 13. Technology: Closed captioning is distributed on line 21 of the vertical blanking interval ("VBI") of broadcast and other analog television signals. Commission rules reserve line 21 for this service. Pursuant to the TDCA, since July 1, 1993, all television receivers with screen sizes 13 inches or larger must be capable of receiving and displaying closed captions. Cable television systems retransmitting broadcast signals must pass through closed captioning to the receivers of all subscribers. For those whose television receivers are not capable of decoding and displaying closed captioning, separate decoders may be purchased. Existing technology, however, can only decode Latin based alphabets and symbols, so captioning of some non-English language programming (Chinese, Japanese, Russian, Arabic, etc.) is not possible using this system. This transmission and display system is generally well established and functions effectively. Digital transmission systems under development are being designed to include closed caption capabilities. 14. Notwithstanding the capabilities of this transmission system, a variety of problems can occur in the captioning process. Captioning of prerecorded programming involves adding a written transcription or description of the spoken words and sounds which is generally carefully prepared and checked for accuracy. In the case of live programming, however, the real time stenographic process of adding the captions increases the number of mistakes. 15. 
In addition, as programming is duplicated or prepared for transmission, improperly adjusted signal processing equipment can delete line 21, introduce errors or result in captions not being synchronized with the video portion of the program. Time compression of programming to fit it into specific time blocks may destroy captions. Finally, interference and poor quality reception may impair caption quality, sometimes causing individual letters to appear as square white blocks. Closed captions may also cover other written information on the screen, such as emergency weather or school closing announcements. 16. Cost: There is a wide range in the cost of closed captioning that reflects the method of adding the captions, the quality of those captions and the entity providing the captions. Organizations and suppliers that charge the most for their services are reported to provide the highest quality and most accurate captioning. For prerecorded programming, the captions are developed off-line using a script of the actual program. Estimates of the cost of this type of captioning range from $800 to $2500 per hour of programming. Captions for live programs can be created by specially trained stenotypists. Live captioning costs are estimated to be between $150 and $1200 an hour. Off-line captioning is typically more expensive than live captioning because additional resources are expended to edit and proofread the captions. Another method of captioning live programming uses computer software that converts a script into closed captioning. This method, known as electronic newsroom captioning, is virtually cost free once the equipment and software are purchased at a cost generally estimated to be between $2500 and $5000. 
For high budget programming that is distributed nationally and reused many times, such as theatrical films that may receive network broadcast, subscription, syndication, cable television and video tape distribution over a period of years, the costs involved represent only a minor portion of the total production expense and revenue flow. For less expensive programming, such as local cable originations, the cost of captioning could be a significant proportion of total expenditures. 17. Amount of captioning: There has been significant progress in the delivery of closed captioning of video programming, but the goal of making video programming through all delivery systems accessible to persons with disabilities is not yet realized. Virtually all nationally broadcast prime time television programming and nationally broadcast children's programming, news, daytime programming and some sports programming, both commercial and noncommercial, is captioned. New feature films produced in the U.S. that will be distributed by broadcast networks, cable networks, syndicators and local stations following their theatrical release are now captioned at the production stage. Many local stations caption their newscasts, at least the portion that is scripted. Many of the national satellite cable programming networks distribute programming containing closed captions. Cable operators also appear to provide some limited captioning of their local and regional programming. Other MVPDs essentially distribute programming that is produced for broadcast and cable use, and they generally deliver the programming with the existing captions intact. 18. Certain types of programming, however, are unlikely to be captioned, including non-English language programming, home shopping programming, weather programming that includes a large amount of visual and graphic information, live sports and music programming. 
Captions are less likely to be included in programming intended to serve smaller or specialized audience markets. Programming (e.g., sports), which is considered perishable because it may only be aired one time, is less likely to contain captions than programming that can be rerun by the original distributor or redistributed by others (e.g., in the syndication market).

19. Economic Support: There are four principal sources of economic support for closed captioning. Financial assistance provided by the Department of Education ("DOE") represents approximately 40% of the cost of all captioned video programming. This funding is available only for programming that reaches the largest audiences -- national news, public affairs, children's programming, movies and prime time specials. The remaining support comes from a combination of directly credited corporate advertising support, charitable and foundation support and producers and distributors of programming. Public service announcements produced or funded by the Federal government must be captioned, pursuant to Title IV of the ADA.

20. Little information appears to be collected in any systematic fashion about the size of the audience for closed captioned programming or about the economic demand for captioned programming when programming is distributed on a subscription basis. Not all advertisers caption their own advertisements even when the advertisements appear in conjunction with programming that is captioned. Some distributors, such as those offering subscription-based services (e.g., HBO, Cinemax), appear to believe that the inclusion of captions is rewarded by the marketplace, as they are able to attract additional subscribers. It also is likely that all programmers and program providers could increase their audience shares if their video programming is accessible to the deaf and hard of hearing community and therefore benefit economically through the inclusion of captions.

.. III.
CLOSED CAPTIONING OF VIDEO PROGRAMMING A. Introduction 25. Closed captioning is an assistive technology designed to provide access to television for persons with hearing disabilities. Captioning is similar to subtitles in that it displays the audio portion of a television signal as printed words on the television screen. To assist viewers with hearing disabilities, captions also identify speakers, sound effects, music and laughter. Captions were first used in the early 1970s in an "open" format, transmitted with the video so that they were visible to all viewers. PBS developed closed captioning in the 1970s. Closed captioning is hidden as encoded data transmitted within the VBI of the television signal, which, "when decoded, provides a visual depiction of information simultaneously being presented in the aural channel (captions)." A viewer wishing to see the closed captioning must use a set-top decoder or a television receiver with built-in decoder circuitry. 26. The Commission has long sought to promote closed captioning technology. In the 1970s, the Commission granted PBS a number of authorizations to conduct experimental transmissions using closed captioning, and in 1976, adopted rules that provide that line 21 of the VBI is to be primarily used for the transmission of closed captioning. The Commission's rules specify technical standards for the reception and display of such captioning. The Commission has also adopted technical standards for the cable carriage of closed captioning data that accompanies programming carried on cable systems. In addition, cable operators are required to carry the closed captioning data contained in line 21 of the vertical blanking interval as part of their must-carry obligations. 27. 
To implement the TDCA, the Commission adopted regulations requiring all television broadcast receivers with screen sizes 13 inches or larger that were manufactured or imported on or after July 1, 1993, to be capable of receiving and displaying closed captions. By mid-1994, decoder-equipped television sets were in nearly 20 million American homes. In 1995, 25 million decoder-equipped television sets were sold in the U.S. It is estimated that between 50 and 60 million U.S. homes can currently receive closed captioning. 28. In addition to these efforts to promote closed captioning technology, the Commission, in 1976, adopted a rule requiring television licensees to transmit emergency messages in a visual format. In 1990, Congress passed the ADA which requires all federally funded public service announcements to be closed captioned. Aside from these requirements, however, neither Congress nor the Commission has mandated captioning of television programming. Instead, Congress and the Commission have relied on the voluntary efforts of program producers and providers to make television programming accessible to persons with hearing disabilities. As far back as 1970, the Commission has urged broadcast television licensees to undertake these voluntary efforts. We have also "strongly encourage[d] cable operators to carry more closed-captioned video programming." B. Audiences that Benefit from Closed Captioning 29. Providing persons with disabilities access to the "tremendously powerful television medium" serves an important public interest. A recent study attests to the dominant role television plays in our society. It reports that nine out of ten Americans watch television on a regular basis. U.S. households spend an average of over seven hours every day watching television as a means of entertainment and relaxation and as a source of news and information. Most Americans depend on television to get their news, with 72% of Americans listing it as their primary news source. 
30. Closed captioning makes television more accessible to persons with hearing disabilities. Indeed, the Commission on the Education of the Deaf has stated that "captioning of TV . . . is the most significant technological development for persons who are deaf." In enacting the TDCA, Congress found that "closed-captioned television transmissions have made it possible for thousands of deaf and hearing-impaired people to gain access to the television medium, thus significantly improving the quality of their lives." Closed captioning can thus offer great benefits to Americans with hearing disabilities. In addition, many other people, including children and adults learning to read, and people learning English as a second language, can also benefit from watching captioned programming. 1. Persons with Hearing Disabilities 31. The National Center for Health Statistics estimates that there are 22.4 million persons with hearing disabilities. According to the National Association of the Deaf ("NAD"), 80% of these individuals have irreversible and permanent damage to their hearing. People with varying degrees of hearing loss comprise 8.6% of the U.S. population. Closed captioned programming provides individuals who are deaf and hard of hearing access to information regarding national and worldwide current events, local and community affairs and entertainment. Without captions, this critical link is often lost, making it more difficult for these individuals to have basic access to the information and knowledge which the rest of society takes for granted. Many in the deaf and hard of hearing community view the issue of closed captioning in terms of basic civil rights and rights to equal access that should not be subject to a cost benefit analysis. 32. Of the persons with hearing disabilities, 3.7 million are children. Approximately 15 out of every 1000 people under the age of 18 have some type of hearing disability. 
When programs are captioned, children who are deaf and hard of hearing, as well as adults, do not have to depend on family members to interpret the soundtracks of such programming. Captioning may thus help facilitate healthy family interaction and provide greater independence to children and adults with hearing disabilities. Similarly, the ability to enjoy watching or discussing television shows with peers may advance greater acceptance of a child or adult with a hearing disability into his or her own community. 33. Senior citizens comprise approximately 29% of the total population. It is well established that the U.S. population as a whole is aging due to advances in health care and the aging of the "baby boom" generation, the first members of whom are turning 50 in 1996. As the average age of the total population increases, the number of elderly people with hearing loss is expected to grow as well. According to NAD, 415 of every 1000 people over the age of 75 have some type of hearing disability. Similarly, it is estimated that currently 22 million adults over the age of 65 have a hearing loss and that this number will nearly double to over 40 million within the next ten years as the baby boom generation ages. 2. Children Learning to Read and Persons Learning English as a Second Language 34. For both children with hearing disabilities and non-hearing disabled children learning to read, captioning can become an educational tool, turning the many hours of television they watch each week into a learning opportunity. Captioning is useful in exposing children to patterns of spoken English, such as slang and idioms used in everyday dialogue, that are not always found in literature. Studies have also demonstrated that captions can improve a student's reading comprehension and spelling, augment vocabulary and word recognition and increase overall motivation to read. 
Not only does captioned television capture students' attention, but its multi-sensory presentation of information makes learning new words and concepts easier. 35. Captioning can be useful as a key learning tool for the 30 million Americans for whom English is a second language ("ESL"). ESL students have two related needs that are addressed through closed captioned television. First, they need to increase basic vocabulary. Vocabulary researchers agree that the overwhelming percentage of words a person knows are acquired through the contexts in which they are used. Through captioning situational uses of words and idioms, and shades of meaning and nuance, can be conveyed visually as well as verbally. Furthermore, ESL students benefit from seeing an immediate spelling of words just uttered. 3. Illiterate Adults 36. There are 26 to 27 million illiterate adults in the United States. In addition there are 72 million adults who lack the basic skills to fill out employment applications or to follow written job directions. Only 2% to 4% of American adults requiring literacy services are reached by the present public and private literacy programs. Captioning can provide opportunities for the illiterate to increase their reading fluency, to participate in the workforce and to enjoy literature, magazines, and newspapers for both knowledge and recreation. 4. Others Who Benefit from Closed Captioning 37. Captioning also can help non-hearing disabled viewers understand the audio portion of television programs in noisy locations such as airports, hotel lobbies, waiting rooms, public exercise facilities, restaurants and bars. Additionally, captioning can help people understand dialogue in quiet areas where they may need to lower or to turn off the volume on the television set. 
For any reader, captioning can also be used to improve vocabulary skills and to help clarify dialogue that uses difficult vocabulary or dialogue in programming in which the speakers have accents that may be difficult to understand. C. Methods of Closed Captioning 1. Technical Issues 38. Closed captioning is transmitted on line 21 of the VBI along with the video and audio portions of a program. The VBI is the unused lines in each field of a television signal, seen as a thick band when the television picture rolls over usually at the beginning of each field. The VBI is an integral part of the television signal that usually includes information to instruct the television receiver to prepare to receive the next field and may be used to transmit other information, including closed captioning. A consumer with a television set that has a built-in closed captioning feature or a set top decoder can receive closed captioning information by activating this feature. 39. The introduction of advanced digital television ("ATV") may affect closed captioning in terms of both transporting and displaying relatively error-free closed captioning data. ATV could greatly improve the overall quality of closed captioning because it may permit more rapid transmission of data. With regard to ATV technology for transporting closed captioning data, the Commission has a pending proceeding soliciting public comments concerning the ability of ATV to include captioning and how the Commission should implement captioning requirements for ATV in the event it does not adopt a mandatory ATV standard. A draft standard for advanced television closed captioning ("ATV-CC") has been prepared by the Television Data Systems Subcommittee ("TDSS") of the Consumer Electronics Manufacturers Association ("CEMA"). This ATV-CC standards setting effort is being carried out in cooperation with the Grand Alliance (a group of electronics industry representatives) and the Advanced Television Systems Committee ("ATSC"). 
Provisions have been made in the ATSC standard to transport closed captioning information in the form defined by the TDSS at a fixed data rate of 9600 bits per second. This proposed transporting standard would significantly increase the data transmission rate from the current 480 bits per second, thereby facilitating faster transmission of both more and better-quality closed captioning data.

40. In terms of the quality of closed captioning displayed, ATV could significantly increase user control over such display. Currently, the only control the user has over this display is whether to activate the closed captioning feature on his television set. The user has virtually no ability to customize the closed captioning display to his individual needs or preferences. The advent of ATV could permit major closed captioning enhancements, such as user-selected caption sizes (i.e., caption "volume control"), a broader selection of typefaces, fonts, character sets and symbols that could convey a wider range of meanings, and a wide array of presentation options, including different colors and backgrounds. In addition, ATV, through its enhanced ability to transport more closed captioning data at a faster rate, could allow a user to select captioning from a variety of languages on a menu displayed on the television screen.

41. Despite its technological potential, ATV would not automatically resolve all technical or logistical problems with closed captioning. For example, current television receivers, which are based on analog technology, cannot receive the digitized ATV signal with all of its potential closed captioning enhancements. Therefore, the ATV technology would be of no practical use until television sets capable of receiving and displaying ATV signals have become available. It should be noted that such television sets may be available in the near future, even though the widespread market penetration of such technology may not occur for many years.
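To put the data rates cited in paragraph 39 in concrete terms, here is a minimal sketch. The frame rate and caption payload per frame are assumptions drawn from the conventional NTSC line 21 scheme (two caption bytes per frame at a nominal 30 frames per second), not figures stated in this report; they do, however, reproduce the 480 bits-per-second figure quoted above.

```python
# Caption data throughput: current line 21 service vs. the proposed
# ATV-CC fixed rate. The frame rate and bytes-per-frame values are
# assumptions based on the NTSC line 21 scheme, not from this report.

FRAMES_PER_SECOND = 30       # nominal NTSC frame rate (assumption)
CAPTION_BYTES_PER_FRAME = 2  # line 21 caption payload (assumption)

line21_bps = FRAMES_PER_SECOND * CAPTION_BYTES_PER_FRAME * 8
atv_cc_bps = 9600            # fixed ATV-CC rate proposed by the TDSS

print(line21_bps)                # 480
print(atv_cc_bps // line21_bps)  # 20 -- a twentyfold increase
```

At roughly one byte per caption character, the jump from about 60 to about 1200 bytes per second is what would make features such as multi-language caption menus and richer typefaces feasible.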
Advocates of improved closed captioning emphasize that the initial limited availability of ATV should not overshadow its potentially significant enhancement of closed captioning. However, it should also be noted that, even when digital receivers become available, the procedures for captioning programming will probably be the same in terms of time, cost and labor intensiveness. Therefore, the development of digital television technology may not make closed captioning any less expensive or time consuming.

2. Types of Closed Captioning

42. There are essentially four major types of closed captioning. The first type is "off-line captioning." Under this method, the captioning service gets an advance copy of the script, tape or film before the program is aired. The audio portion of the program, including sound effects as well as dialogue, is transcribed and added in synchronization with the video content. After the program is captioned, it is sent to a post-production company or to the program producer on a computer disk or via modem. The captioning is encoded by the post-production company or the producer onto line 21 of the VBI of the master tape to be telecast. This method of captioning entails a labor-intensive process to ensure that the captions are placed precisely where the corresponding audio appears and then locked into the proper position on the program tape. The captioners must ensure that the captions will appear at precisely the right moment in a precise location on the screen. This type of captioning is used for feature films and much prerecorded entertainment programming, including prime time series and children's programs.

43. A second type of captioning is live encoded captioning. This type of captioning is also created off-line for prerecorded programming, such as daytime dramas and late night entertainment shows, in advance of the time the program is aired.
Despite the name of this form of captioning, these captions are not encoded onto the program tape, but rather are transmitted with the program at the time it is aired. These captions are less precisely synchronized than off-line captions and are rolled from the bottom of the screen rather than appearing at precise locations on the screen. Live encoded captioning is often used where there are only a few hours between taping and airing and the final edits for the program are not completed until close to air time. An example of a program that uses this type of captioning is the Late Show With David Letterman, where the broadcast occurs only a few hours after the show is taped.

44. A third type of captioning is automatic live-encoded captioning. Like live encoded captions, these off-line captions are not encoded onto the prerecorded program prior to airing, but are transmitted at the time of airing. However, these captions are encoded onto the program after the original airing so that the captions will be automatically transmitted when the program is rebroadcast. A variant of this type of captioning is called "electronic newsroom captioning," in which the captions come from the text in the station's news script computers. Only text transmitted from the scripting computers onto the teleprompters is captioned. Therefore, unscripted material that does not appear on the teleprompters is not captioned. The electronic newsroom captioning method is commonly used for local broadcast station newscasts.

45. The fourth type of captioning is "real time" or "live captioning." Live programming, such as news, sports and awards shows, is typically "stenocaptioned." This method of captioning is used for breaking news and other types of live programming that are unscripted. Under this method, the captioner's computer is linked to the telecast operation center and the captioning material is created for telecast in "real time."
A specially trained "stenocaptioner" transcribes the audio portion of the live program as it airs. Because of the transcription and computer processing required, real time captioning appears on the screen about three seconds after the corresponding audio content.

D. Cost of Closed Captioning

46. The cost of captioning video programming is a related factor that affects the extent to which programming is currently accessible with closed captioning. The cost of closed captioning depends on the method used and a variety of other factors, including the format, the length of the program, the required turnaround time, the payment schedule and the volume of captioning, with discounts often given when contracts include multiple programs and hours. Off-line captioning of prerecorded programs is typically more expensive than captioning of live shows because it requires additional staff for editing and proofreading the captions. There are more than 100 suppliers of closed captioning services. According to several commenters, since 1990, the costs of captioning have declined due to increased competition among service providers. The larger, more experienced captioning agencies still charge relatively high rates, but are known for their level of quality.

47. A considerable amount of closed captioning is done under contract with outside vendors. Estimates of the cost of off-line captioning range from around $800 an hour to $2500 an hour. In addition, the encoding of the captions onto the program tape entails an additional expense of approximately $200 for a half-hour program to $650 for a two-hour program. For example, NBC states that it costs between $900 and $1800 to caption its prime time series, $1800 for a made-for-television movie or an episode of a miniseries and $1200 for a Saturday morning live-action children's show. ABC indicates that it pays approximately $790 to $1200 per hour for off-line captioning.
The magnitude of these costs is explained in part by the ratio of the time needed to create the captions to the length of the program, which can be as much as 20 or 30 hours for a one-hour program. In addition, the cost of captioning a commercial is estimated at about $250 per minute. It also is reported that the off-line captioning of music videos costs about $275 to $400 for a short-form video or $2500 for a long-form video of 60 minutes in length.

48. The estimated cost of contracting for the services needed to caption live programming ranges between $300 and $1200 per hour. For example, the National Captioning Institute ("NCI") states that this would cost $300 to $750 per program hour for a national program and $125 to $300 for a local program hour. VITAC, another vendor, states that its rate card indicates that real time captioning costs $810 for a one-hour program. Caption Colorado states that it has been able to reduce the cost of real time captioning from between $600 and $700 per hour to $120 per hour by obtaining television audio programming and delivering encoded captions through telephone lines. Others estimate the average cost of live captioning to be between $150 and $800 per hour.

49. Captions often must be reformatted when programming is rebroadcast or distributed by a secondary video provider. For a secondary use, a program may be edited to fit a time period that is different from the original one and commercials may need to be inserted. This editing can ruin the timing of the captions and therefore reformatting is required. In cases where parts of the program are removed or rearranged, the captions must be removed or rearranged accordingly. The cost of reformatting is approximately one fourth that of the original captioning, or approximately $400 to $800 for a full-length movie.
Estimates of reformatting costs generally range between $350 and $450 per hour, depending on the amount of editing, although it is reported that the cost of reformatting can be as high as $750. 50. A program producer or provider also can do its own captioning in-house. An entity that does its own captioning must acquire equipment to add captions. For a station that does a significant amount of its own programming, it may be more effective over time to do the captioning in-house using stenocaptioners. A one time equipment expenditure would be between $50,000 and $75,000, although it would also require significant staff time to operate this equipment over the course of a year. For a local public broadcasting station, specialized captioning equipment to provide a work station and encoding equipment for one staff person costs between $12,000 and $22,000, in addition to a cost of approximately $2500 to train a person to caption. A station that distributes three and one half hours per week of locally produced taped programming, and captions 95% of that programming, may have to spend $40,000 on equipment, $5000 on training and $31,000 per year plus benefits for each of two stenocaptioners. After initial equipment and training costs, on-going captioning can represent between 5% and 8% of the local production budget, compared with outside contracting which can reach as high as 16% of a station's local production budget. 51. Depending on capabilities, the cost of the equipment and software needed for a local station to provide electronic newsroom captioning generally ranges between $2500 and $5000, but some estimates are as high as $10,000. The National Association of Broadcasters ("NAB") reports that the average cost of captioning for local stations responding to its survey is $514 per week, primarily for local newscasts. 
Since this figure includes stations that report no costs (which NAB assumes use only electronic newsroom capability), NAB asserts that the average cost is more likely to be $1007, exclusive of no-cost stations. NAB concludes that this represents stations that use stenographic captioning or a combination of stenographic and electronic newsroom captioning. 52. A primary concern for those not currently captioning their programming, especially local broadcast stations, cable networks and local cable systems, is the relatively high cost of captioning when compared to their total budgets. Commenters state that the cost of captioning local programming is likely to be a significant cost for local stations, even for major station groups and larger market stations. The Association of Local Television Stations ("ALTV") claims that it would cost an individual television station approximately $100,000 a year to caption one hour per day of its local programming. For many affiliated and independent stations, the costs of even limited amounts of captioning would exceed their annual pre-tax profits. Television station WSST-TV estimates that to close caption its daily six hours of local programming would cost approximately $7500 a day, added to the present daily operating cost of approximately $1650. 53. The National Cable Television Association ("NCTA") estimates that it would cost the cable industry between $500 and $900 million per year to caption all basic cable network programming that is not currently captioned. This cost would represent nearly one third of the basic cable programmers' current total annual programming expenditures. NCTA further claims that the cost of captioning just prime time basic cable programming would range from $58 to $116 million a year. Liberty Sports ("Liberty") states that closed captioning would add approximately 10% to the full production budgets of national sports events, which are generally in the $15,000 to $25,000 range. 
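The in-house figures quoted in paragraph 50 above can be combined into a rough first-year estimate. This is only an illustrative sketch using the numbers in the report; benefits on the stenocaptioners' salaries are left out, so the true total would run somewhat higher.

```python
# Illustrative first-year cost of in-house captioning for the local
# station described in paragraph 50 of the report. Benefits on the
# stenocaptioners' salaries are excluded (an assumption for simplicity).

equipment = 40_000        # one-time equipment expenditure
training = 5_000          # one-time training cost
stenocaptioners = 2
salary = 31_000           # per stenocaptioner, per year, before benefits

hours_per_week = 3.5      # locally produced taped programming
captioned_share = 0.95    # share of that programming captioned

first_year = equipment + training + stenocaptioners * salary
captioned_hours = hours_per_week * 52 * captioned_share

print(first_year)                           # 107000
print(round(first_year / captioned_hours))  # 619 (approx. cost per hour)
```

On these assumptions, roughly 173 captioned hours a year at a bit over $600 per hour in year one, with the per-hour figure falling in later years once the equipment and training costs have been absorbed.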
The F&V Channel ("F&V") estimates that it would cost approximately $4.5 million to caption programming for a year, an amount that exceeds its entire programming budget. The Weather Channel estimates that in order to caption its own live, often ad-libbed, programming, it would need to have 12 real-time stenocaptioners on staff and acquire the equipment needed for two captioning work stations at an estimated total cost of $33,000.

54. Local cable programming is often transmitted on public, educational or government ("PEG") access channels. The Alliance of Community Media estimates that the average annual budget of a full-service access center is $227,147. However, a typical access center, such as the one in Riverside, California, operates with a budget of $50,000 and serves a population of more than 350,000. At an estimated cost of $2500 per program hour, this center could caption only 20 hours of programming per year and have no funds left over for salaries, equipment and expenses. The Fairfax Cable Access Corporation states that it produces between 80 and 100 hours of programming a month. It estimates that it would cost $160,000 per month to add captions to all of its programming, assuming a closed captioning cost of $2000 per hour.

55. The City of St. Louis estimates that the cost of closed captioning its Board of Aldermen's meetings, which are carried by the local cable system, for one year, would exceed $20,000 if an outside vendor were used. Alternatively, the City states that if it were to develop its own captioning, the equipment needed would cost more than $9000, with software alone costing $3995. In addition, encoding equipment would cost about $6300 and captionwriters would need to be hired at salaries of $30,000 a year plus benefits of an additional 26%.

E. Current Availability of Programming with Closed Captioning

56.
As indicated earlier, Section 713 of the Act directs the Commission to ascertain the level at which video programming is currently closed captioned. Specifically, we are required to examine the extent to which existing or previously published programming is closed captioned, the size of the video programming provider or programming owner providing closed captioning, the size of the market served and the relative audience shares achieved. The information provided in this section concerning the current availability of programming with closed captioning is responsive to these issues. 57. Programming is most likely to be closed captioned when it is distributed nationally and available to a significant portion of all U.S. television households. In addition to reaching a substantial number of homes, such programming is available during the times of day with the highest viewing levels. The most popular programs as determined by audience ratings also are the ones most likely to contain captions. Accordingly, we find that the market served by programming with closed captioning is potentially large in size. However, there is no information available from audience ratings services or elsewhere regarding how many individuals currently use closed captioning when watching television programming. Thus, we are unable to assess the relative audience shares achieved by programs that are closed captioned as a result of such programming being accessible to individuals who are hearing disabled. 58. We find that in recent years programming distributed by the national broadcast networks, both commercial and noncommercial, has generally been captioned. For example, virtually all prime time programs, children's programming, news, daytime programming and some sports distributed by the networks contain closed captions. Programming widely distributed by broadcast syndication is captioned. 
Local television stations in larger television markets are more likely to caption programming than other stations, especially local news broadcasts. Many of the national satellite cable programming networks include closed captions as do some local and regional cable programming services. In recent years, feature films produced in the U.S. that will be distributed by broadcast networks, cable networks, syndicators and local stations following their theatrical release are closed captioned at the production stage. In many cases, the cost of captioning these types of programming represents only a small portion of the total production budget. 1. National Broadcast Television Networks 59. Broadcast television networks produce or acquire programming for distribution by their local affiliates. Until now, all closed captioning has been done on a voluntary basis, with the exception of emergency broadcast information and government funded public service announcements. 60. PBS has been at the forefront in the development of captioning technology and services. PBS is a non-profit membership organization whose members are the licensees of public television stations. PBS has approximately 340 affiliates that reach almost all television households. PBS began distributing closed captioned programs to its member stations in 1980. PBS has voluntarily adopted the practice of requiring producers to provide closed captioning in all programming funded by PBS's National Program Service. All children's programs and prime time programming on PBS are closed captioned. In addition, the Newshour with Jim Lehrer is closed captioned each evening. The few PBS programs that are not closed captioned are visually oriented (e.g., ballet or other dance performances), or are non-verbal in nature (e.g., a symphony concert). Non-English language operas are not closed captioned since they already contain open English subtitles. 61. 
PBS Learning Media distributes videocassettes and video laser discs of PBS programs to educational users and the general public through PBS Home Video and PBS Video. Whenever a program is licensed to PBS for home and audio-visual distribution and is available with captioning, PBS Learning Media tries to include the captioning in the version it distributes. The PBS video educational collection has over 1200 titles in distribution, over 80% of which are closed captioned.

62. Each of the three oldest commercial broadcast networks -- ABC, CBS and NBC -- reaches virtually all households through its approximately 210 affiliated local stations. The majority of programming on these three networks, including virtually all prime time programming, is closed captioned. NBC provides an average of 94 hours of programming per week to its affiliates and captions a minimum of 72 to 80 hours of such programming, with an average of 83 hours per week, or 88.3%. This weekly total comes to about 3750 to 4150 hours per year of captioned programming. NBC has provided this level of captioning for approximately three years. ABC offers on average about 90 hours of programming each week to its affiliates. All ABC-produced shows, including news, sports, children's and entertainment programming, with very limited exceptions, are captioned. In 1991, CBS captioned four hours of network programming per day. By the end of 1995, CBS captioned a daily average of 13.5 hours of programming provided to affiliates, or between 85 and 95 hours per week, depending on weekend sports programming schedules. With the exception of its overnight news service, all of CBS's network programming is closed captioned.

63. ABC, CBS and NBC, however, do not caption their overnight news programs broadcast between 2:00 a.m. and 6:00 a.m., such as World News Now on ABC, NBC NewsChannel and Up to the Minute on CBS.
These overnight news programs are not captioned because their late night time slots provide relatively low ratings and limited advertising revenues which the networks feel do not justify the cost of captioning. They also are often a compilation of reports sent to the networks by their affiliates without captioning. Furthermore, even if some of the reports were initially captioned, the affiliates may retransmit only portions of these programs to the network for their use, thereby adversely affecting the flow of the captioning. 64. There are three newer commercial national networks -- Fox, United Paramount Network ("UPN") and WB Television Network ("WB"). The Fox television network has approximately 140 affiliates, and reaches almost all homes. It distributes 16 hours of prime time, late night and early Sunday morning programming. In addition, it distributes 19 hours of children's programming throughout the week. All of this programming is closed captioned. UPN has 156 affiliates, and covers 92% of the country. UPN distributes six hours of prime time programming, one hour of children's weekend programming and a movie on Saturday afternoons. All of this programming is closed captioned. Closed captioning is one of the network's "delivery requirements" for its programming. Accordingly, the captioning is done by the program producers. Commercials on UPN are generally not closed captioned. WB reaches 84% of the country. It distributes five hours of prime time programming and five hours of children's programming each week. All of WB's prime time programming is closed captioned. The children's programming also is captioned, except for some older cartoons. 65. While nationally broadcast sports programming generally includes captions, none of the three established networks regularly captions regional sports programming. 
One exception has been the regional games of the 1995 and 1996 NCAA Men's Basketball tournament, which were captioned by CBS through joint efforts with funding and captioning agency partners. The broadcast networks assert that there are several reasons why networks generally do not caption regional sports programming. First, there are technical and logistical problems associated with delivering different games to the affiliates in various parts of the country at the same time. Second, captioning services may not exist in the regions where particular games will be televised, so it is not possible for a stenocaptioner to "see" the game to caption it in real time. Third, there may not be encoding equipment at the game site from which the programming is transmitted by uplink. In addition, broadcast networks state that a sporting event is essentially visual, and statistical information and the progress of the game are often indicated by graphics, thereby reducing or eliminating the need for captioning. Finally, much sports programming is by its very nature perishable; sports events have substantial entertainment value only at the time of their occurrence. Since there is no residual market for such programming, commenters argue that production costs, including captioning, cannot be spread over multiple showings. Therefore, the networks claim that they have no real financial incentive to caption most sports programming. 66. Many commercials scheduled during and adjacent to network programs are captioned by the advertising agencies which produce them. These advertisers recognize that without closed captioning they may fail to reach potential consumers who are deaf or hard of hearing. Network promotions of upcoming network programs are generally not captioned. For example, NBC produces approximately 75 to 100 promotional spots a day which are 10 to 20 seconds in length and usually broadcast within 24 hours after being produced. 
In some cases, especially for news magazines with topical subjects, the promotional spots are produced just a few hours before being aired. These time frames may make captioning such spots logistically difficult or impossible. Networks such as NBC state that even for uncaptioned promotional spots, information about the name of the program and the time of the upcoming broadcast is often displayed visually by graphics contained in the spot.

2. Local Broadcast Television Stations

67. Local television stations distribute programming they receive from a network, if they are affiliates, purchase programming in the syndication market and produce or acquire programming locally. As discussed above, stations affiliated with a network carry captioned programming during a significant portion of the broadcast week. First-run syndicated programming is not produced by or for any particular network and is distributed to stations irrespective of network affiliation. Off-network syndicated programming is programming that originally aired on a particular network and is now available in reruns to stations that wish to purchase it. Examples of such programming are I Love Lucy and M*A*S*H. The amount of captioned first-run syndicated programming varies depending on who produces and who airs the programming. Certain first-run syndicated programming, such as Jeopardy!, Wheel of Fortune and Oprah, is closed captioned by the program producers and/or distributors. Newer off-network syndicated programming, especially that produced after the mid-1980s, is often closed captioned. Most off-network programming produced before the mid-1980s, such as Bewitched and Jackie Gleason, was not captioned when produced and remains uncaptioned.

68. According to a study conducted by NAB, 70% of the stations responding to its survey provide closed captioning for some of their non-network programming.
This study further divides stations according to market size and indicates that market size plays some role in determining how much non-network programming stations caption. The study suggests that the highest percentage of stations that caption programming is found in the mid-sized markets (Nielsen designated market area or DMA market ranks 26 to 50 and 51 to 100), where over 75% of the stations reported that they provide captioning. The actual amount of captioned programming also varies according to the NAB study, with the stations in the largest markets (Nielsen DMA market ranks 1 to 25) airing an average of approximately 158 hours of captioned non-network programming over the last year.

69. Most commercial stations that caption local news use electronic newsroom ("ENR") captioning. Because ENR captioning is created from the text of the newsroom's teleprompter, the quality of ENR captioning depends on the amount, completeness, and accuracy of the information entered into the system. Live reports from the field or reports of breaking stories, much sports and weather reporting, and ad-libs and banter by the anchors will not be captioned unless a verbatim script is added to the computer running the text from the teleprompter. According to NAB, 81.5% of stations caption their local news. All ten of ABC's owned and operated stations caption their local news. Eight of NBC's nine network-owned television stations caption their local news programs. However, any unscripted remarks by anchors are not captioned. Some stations sell captioning sponsorships that give the sponsors commercial mention as a means of defraying the cost of captioning.

3. Cable Television Systems

70. Cable television systems distribute the programming of local broadcast stations and cable programming networks, and their own locally produced programming to subscribers. To the extent that the broadcast programming they carry is captioned, they are required by Commission rule to retain the captioning.
There are more than 100 satellite delivered cable programming networks. In addition to carriage by cable systems, these programming services also are distributed to subscribers by other MVPDs. These networks range from those, such as CNN and USA, that are available to almost all cable subscribers, to many with more limited distribution, either because they are new or they offer programming aimed at more limited niche audiences. 71. According to NCTA, the overall percentage of captioned programming (for the top 20 basic and expanded basic cable services and the most widely distributed six premium networks) is nearly 24%. For premium services alone, NCTA asserts that the number is over 54% with individual premium services ranging as high as 80% of the entire weekly schedule. These percentages translate to over 30,000 hours per year of closed captioned programming provided by the top 20 basic networks and the top six premium networks. According to NCTA, nearly 30% of prime time programming on the top 20 basic cable networks and over 60% on the top six premium networks is closed captioned. 72. A number of cable programming networks are available on a per channel or per program basis. These premium services generally provide movies and special events. Home Box Office ("HBO") and Cinemax, two of the most widely available of these services, provide a variety of programming, much of which is captioned. In 1995, HBO and Cinemax had captioning on 76% of their theatrical motion pictures, 83% of their musical programming, 94% of their documentaries, 72% of their family programming, 82% of their series, 100% of their comedy programs, and 100% of other categories of programming. 73. According to NCTA, there are several reasons why the percentage of closed captioning on cable television is lower than that of closed captioning on broadcast television. First, there are over 100 national cable programming networks, most of which operate 24 hours a day, seven days a week. 
Furthermore, there are more than 40 regional and local cable programming networks. All of these networks combined represent thousands of hours of television programming daily. In contrast, there are only four major commercial broadcast networks, which combined present only 40 hours of network television programming daily. In addition, NCTA emphasizes that most government funding that has enabled programmers to close caption programming has historically been directed to the broadcast networks, both commercial and noncommercial, rather than the cable networks.

74. Cable networks also differ significantly from broadcast networks in their audience reach. Unlike the four major broadcast networks, which reach nearly 100% of the television households in the U.S., even the most widely available cable network reaches only the 65% of the nation's television households that choose to subscribe to cable and DBS, and the approximately 5% of homes subscribing to other MVPDs. Thus, even though cable networks may be available nationwide, they only obtain carriage on a limited number of systems. Even when they obtain carriage, they gain only a limited number of viewers. Some cable networks are also limited to certain regions, which further reduces their audience reach. In addition, many cable networks target niche markets, some are quite new compared to the established broadcast networks, and others do not have the audience viewership or the money to support captioning. NCTA points out that the costs of captioning are fixed and do not hinge on the number of subscribers reached or the production budget for a program. Some cable networks operate with proportionately smaller programming budgets than large broadcast networks or the producers of shows for premium cable channels.
For example, Arts & Entertainment Television Networks ("A&E") states that the four major broadcast networks spend more on prime time programming in two weeks than a cable network the size of A&E or The History Channel spends in the course of a year. Given these financial realities, NCTA asserts that many cable networks may find that the costs of captioning exceed their programming budgets for the entire year. Therefore, in the cable context, NCTA believes that the size of the audience viewership and advertising base, rather than the size of the market reached by the programming service, should be a key factor in determining a cable network's economic ability to afford captioning.

75. Furthermore, much of the programming aired by many cable networks is significantly different from that of broadcast networks in terms of scheduling, format and content. The nature of cable programming varies significantly from network to network, and this may affect the logistics and costs of closed captioning. According to NCTA and many cable networks, these qualitative differences in cable programming account in large part for the quantitative differences in the percentage of closed captioning on cable networks. For example, many cable networks regularly show a substantial number of older films and television series, none of which were captioned when produced. NCTA asserts that some cable networks present topical or perishable programming with a short shelf life, such as music videos and sports programming. Numerous cable networks present live programming on a continuous basis, 24 hours a day, which would require real-time captioning. Other cable programming, such as home shopping channels or weather reports, often contains textual material or other visual depictions of the information being described verbally, which according to the cable networks reduces the need for captioning.

76. Some national news on cable is closed captioned.
For example, CNN captions approximately 50% of its day, and CNN Headline News captions approximately 25%. CNN Headline News also provides on-screen financial and sports information in textual form 24 hours a day. CNBC, a 24-hour consumer news and business programming service on basic cable that is owned and operated by NBC, currently stenocaptions 47 1/2 hours of programming per week. America's Talking, another basic cable network owned and operated by NBC which focuses on news and information, does not currently caption any of its programming. The Cable Satellite Public Affairs Network ("C-SPAN" and "C-SPAN 2") captions the proceedings of the U.S. House of Representatives and the U.S. Senate. Pursuant to a grant, C-SPAN also captions the one-hour program Booknotes, which airs on Sunday evenings, but the continuation of this grant is uncertain. Furthermore, cable local news channels generally do not caption live programming.

77. Kaleidoscope, a 24-hour a day cable programming network started in September 1990, was established for the purpose of serving persons with disabilities. This network is distributed by 201 cable systems and now reaches approximately 15 million subscribers. Kaleidoscope provides both general interest programming and programming specifically addressing topics relevant to persons with disabilities. All of Kaleidoscope's programming is "open captioned" so that the captioning is visible to all viewers. Kaleidoscope does its own captioning and also "open captions" programming it receives that is already closed captioned.

78. In addition to national cable networks, cable operators provide regional and local programming. The regional programming consists primarily of news and sports channels. Much locally originated programming carried by cable operators is on their PEG channels. Programming over PEG channels is usually produced by individuals, schools, local governments or small non-profit organizations working with volunteer personnel.
Most of these program producers operate with very limited funding, which results in a low level of captioning of PEG programming.

4. Other Types of Programming

79. Broadcast and cable programming include movies. Nearly all widely distributed motion pictures currently produced and distributed by member companies of the Motion Picture Association of America ("MPAA") are closed captioned for distribution over broadcast television, home video and cable television following their theatrical release. Following first-run release, a "submaster" of each motion picture is created, which is then closed captioned by NCI or another captioning service. All prints of the motion picture distributed for broadcast television, cable television or home video exhibition are manufactured from the initial captioned submaster prepared for home video release, or from a subsequent submaster edited for broadcast television, and are therefore captioned themselves. More than 6000 closed captioned titles have been distributed. According to MPAA, there are approximately 24,000 previously released films that have not been closed captioned and which would cost $38.4 million to caption. MPAA and other commenters believe that, because of the need to pass through these costs, broadcasters and other video programming providers would simply not purchase older programs and films, which would then sit on the shelf unviewed. This situation would result in reduced diversity of programming products available to the public.

80. Closed captioning of programming for non-English speakers on both broadcast and cable channels is quite limited because captioning, particularly in multiple languages, can pose various logistical problems.
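The MPAA library figures above imply a rough per-title cost for captioning previously released films. A back-of-the-envelope check (the dollar total and title count come from the text; the division itself is only illustrative):

```python
# Back-of-the-envelope check of the MPAA library-captioning figures cited above.
uncaptioned_titles = 24_000   # previously released, uncaptioned films (per MPAA)
total_cost = 38_400_000       # MPAA's estimated cost, in dollars, to caption them all

cost_per_title = total_cost / uncaptioned_titles
print(f"Implied cost per title: ${cost_per_title:,.0f}")  # prints "Implied cost per title: $1,600"
```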
Because such programming is targeted to a narrow niche market -- minority and ethnic viewers -- and is programmed in non-English languages, it has much more limited distribution, as well as more limited advertising and subscriber revenues, than most English language programming. These factors can make the cost of captioning programming for non-English speakers significantly higher than English language captioning. Furthermore, expertise in non-English language captioning may be scarce and, for some languages, virtually unavailable. In addition, the alphabets and characters used in certain non-English languages cannot be processed with standard computerized word processing and closed captioning equipment. Even if such languages can be captioned with special equipment, the captioning decoders currently available in television sets used in the U.S. can only decode Latin-based alphabets and symbols. Accordingly, captioning that uses non-Latin characters, such as Chinese, Russian and Hebrew, cannot be decoded on the television sets used by U.S. viewers. Another logistical factor is that closed captioning such programming in English would require a staff of translators covering numerous languages. Whether captioned in English or a particular non-English language, it can be extremely difficult to assure the accuracy and quality of such multi-lingual captioning.

5. Other Multichannel Video Providers

81. Television programming is also delivered to consumers through several other MVPDs. These video distribution technologies retransmit programming also delivered over broadcast and cable delivery systems. One such provider is the direct-to-home ("DTH") satellite system. Approximately 2.2 million homes subscribe through direct broadcast satellite ("DBS") service and 2.3 million homes subscribe via home satellite dishes ("HSDs"). The total 4.5 million DTH subscribers represent approximately 5% of U.S. television households. DTH is purely a program delivery system.
Until now, it has not participated (other than through program licensing) in the creation of closed captioned programming, except for retransmitting intact the closed captioning already encoded in the programming it delivers to subscribers. All closed captioned pay-per-view, off-air broadcast signals carried on satellite, satellite-delivered programming and PBS broadcasts carrying closed captioning are included in satellite transmissions.

82. Another multichannel video provider is the wireless cable industry, which includes licensees of multipoint distribution service ("MDS") stations and ITFS stations that lease transmission capacity to wireless cable operators. Currently, wireless cable operators rely heavily on program suppliers such as broadcast networks and cable networks for their commercial programming. Most wireless cable systems voluntarily retransmit to their subscribers intact any closed captioning provided with that programming. The only exception to this general rule is when the scrambling system employed by some wireless cable systems does not allow line 21 of the VBI to be passed through to the subscriber's television set. Much of the educational programming carried on ITFS channels and retransmitted on wireless cable systems is not closed captioned.

83. Local exchange carriers ("LECs") also can provide video programming service through telephone lines. For example, Bell Atlantic is currently delivering video programming that has previously been captioned by the programming provider over its digital video system in Dover Township, New Jersey. Many of the hardware and software components of the advanced digital systems that Bell Atlantic will deploy, however, are in the prototype stage or not yet engineered to accommodate captioning.
Bell Atlantic states that it cannot ensure compliance with any captioning requirements for any future systems it will deploy until it has had the opportunity to develop and test all system components required to support such requirements. Pursuant to Section 653 of the Communications Act, LECs operating open video systems ("OVS") will be subject to the must-carry requirements applicable to cable systems. Accordingly, under the must-carry requirements, OVS providers will be required to transmit intact any captioning contained in the must-carry signals they retransmit.

F. Funding of Closed Captioning

84. Currently, closed captioning is funded by a variety of sources. The Federal government is a major source of funding which is administered by the DOE. Last year, DOE provided $7.9 million for closed captioning, which represents roughly 40% of the total amount spent on captioning. Once Congress has made an annual appropriation to DOE, the Department allocates some of that funding to captioning, establishes priorities for programs and awards grants to captioning providers that have applied for Federal funding. Winning applicants supply proposed budgets and program selections for approval by DOE. Among the categories of programming receiving DOE funding for closed captioning are national news, public information, children's and sports programs, movies, mini-series, and special programs broadcast during prime time, syndicated programming and daytime programming. The national broadcast networks rely heavily on DOE grants to fund captioning of network programming. For example, approximately 45% of ABC's 1996 closed captioning costs are funded by DOE grants. Historically, most DOE funding has been provided to broadcast television rather than cable networks. However, the future of Federal funding for closed captioning is uncertain.
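The DOE figures above imply an approximate total for annual U.S. spending on closed captioning. A quick illustrative calculation (the 40% share is the report's rough figure, so the result is only an estimate):

```python
# Implied total U.S. captioning spending from the DOE figures cited above.
doe_funding = 7_900_000   # DOE closed-captioning funding last year, in dollars
doe_share = 0.40          # DOE's roughly 40% share of total captioning spending

implied_total = doe_funding / doe_share
print(f"Implied total annual captioning spending: ${implied_total:,.0f}")  # → $19,750,000
```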
Several commenters note that this possible defunding scenario appears inconsistent with the 1996 Act, which requires that the Commission adopt rules to implement captioning. These commenters also voice concerns about the Federal government issuing an unfunded captioning mandate.

85. Programmers and program providers also receive funding for captioning from private sources. For example, Capital Cities/ABC states that it will pay for about 46% of the $2,840,000 cost of closed captioning its own programming in 1996, with DOE funding about 45% and private sources contributing about 9% of that cost. For some of its news and public affairs programming, CBS has obtained support from advertisers who subsidize captioning as a public service. CBS also has been able to defray a portion of the costs of captioning its national and regional sports programming by providing open video credits to advertisers in return for financial support of the closed captioning for this type of programming. For its entertainment programming, CBS states that it funds closed captioning in partnership with program producers and advertisers, along with financial support from the government.

86. Local broadcast stations also use private funding sources for captioning, and these sponsors are then acknowledged during the broadcast. NAB reports that 67.9% of the stations in its survey that carry captioned news programs have sponsors for the closed captioning. This sponsorship by private companies and nonprofit organizations is appreciated by some members of the deaf and hard of hearing community and is credited for the increase in the amount of captioned programming in recent years. Some representatives of the deaf and hard of hearing community, however, find it troubling that the closed captioning is sponsored by organizations separate from those sponsoring the programming itself.
They argue that since the audio portion of a program does not include similar statements of sponsorship, there is an appearance that captions are a "charity provided by the goodness of a benefactor, and not as it should be: sound business sense, good education strategy and equal access to information."

G. The Quality and Accuracy of Closed Captioning

87. The quality, accuracy and completeness of closed captioning is a relevant factor in examining the accessibility of video programming for persons with hearing disabilities. Unless closed captions accurately reflect the audio portion of the video programming to which they are attached, they may be of limited use to the viewer. Captions, unlike words in books or periodicals, are impermanent. When there are typographical errors or incorrect word usage, the reader does not have time to look over the previous words to deduce the intended meaning. Part of the art of captioning is the presentation, including the manner of captioning, its placement and timing.

88. Currently, there is no standardization of captioning styles or presentation. Captions can be displayed in pop-up or roll-up form. Pop-up captions are displayed and then erased entirely. They are used most often for off-line captioning. Roll-up captions, which are mostly used for real-time captioning, scroll onto and off the screen in a continuous motion. Some captioning is verbatim, following exactly what the speakers are saying, while other captioning is not and reflects some editing on the part of the captioners. Other differences among captioning styles include the manner in which speakers are identified and how voice inflections, background noise, audience reaction and sound effects are indicated. For example, some entities identify speakers using parentheses and others provide the speaker's name followed by a colon. In addition, some captions are centered and others are left-justified. Expert captioners do not appear to agree on the best presentation style.
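The roll-up behavior described above can be sketched in a few lines: each new caption line pushes earlier lines up through a fixed window of visible rows, and the oldest line scrolls off. The three-row window and the sample text below are illustrative choices, not a broadcast standard:

```python
from collections import deque

# Minimal sketch of roll-up captioning: a fixed window of visible rows
# (three here, an illustrative choice) through which lines scroll upward.
window = deque(maxlen=3)
for line in ["FIRST CAPTION LINE,", "THEN A SECOND,", "THEN A THIRD,", "AND A FOURTH."]:
    window.append(line)  # the oldest visible line is dropped automatically

print("\n".join(window))
# THEN A SECOND,
# THEN A THIRD,
# AND A FOURTH.
```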
89. A number of problems have been observed with closed captioning. Commenters report that captions are often omitted from any review of a prior week's program at the beginning of a show or any preview of a coming episode of a program. They state that it is not uncommon for the commercials or station breaks during a program that is otherwise captioned to lack captions. It is also reported that the closed captions are sometimes turned off five to eight minutes before the end of national network programming. Open, character-generated announcements, such as emergency messages, election results, weather advisories and school closing information, which crawl across the bottom of the screen, are obscured by captions. The closed captions also tend to disappear when the picture is reduced to a small size in order to show other information (e.g., school closings), and they do not return until the picture returns to its normal size.

90. In addition, commenters observe that the closed captions may not remain with a program throughout the distribution chain, as would be expected. It is reported that, sometimes, a prime time program broadcast on network television may not have the captions when it is rerun in syndication or redistributed by a cable network. When a prime time program goes into syndication it may be edited to fit a shorter time frame. While the video and audio portions remain intact, the captioning may be removed. For example, some PBS programming originally broadcast with closed captions has been redistributed on cable by A&E without the captions included. It is also reported that a program may be captioned in one place and not another. For example, one commenter claims that Jeopardy! is captioned in Washington, D.C. and Nashville, Tennessee, but not in Atlanta, Georgia. Further, commenters state that movies on HBO can appear one day with clear, error-free captions and be repeated on another day with captions that are scrambled and unreadable.
Additionally, programs may carry the "CC" logo indicating that they are closed captioned when they do not actually have the captions.

91. Moreover, there are often errors in captions, including misspelled words, incorrect grammar, poor timing, inaccuracies and poor placement. Captions do not always match what the speaker is saying. Sometimes they are out of synchronization with the audio portion of the program. Accuracy is a problem, particularly with real-time captioning. When the ENR type of captioning is used, it is common for abbreviations, camera cues and anchor cues that appear on the teleprompter to be included in the closed captions. The result of such errors is garbled captions, which one commenter points out are "a nuisance and sometimes funny."

92. Some of the errors in captions noted above are likely due to captionwriter errors. It has been noted that even highly skilled captionwriters, with up to 99% accuracy rates, often make up to two mistakes per minute. These mistakes occur either because of captionwriter error or because of software mistranslation of the operator's keystrokes. Software mistranslation occurs when the software does not recognize the machine shorthand, and the mistakes appear as a phonetic rendering of the word.

93. Problems also occur because of inadvertent errors in the transmission of captions by the broadcaster, distributor, cable network, local station or cable system operator. In many cases, the captions have been stripped, moved to the wrong line of the VBI or flipped onto the wrong field of line 21 by maladjusted signal processing equipment. The critical technical steps of a quality captioning service are accurate encoding, transmission, reception and decoding of the signal.
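The 99% accuracy rate and the two-mistakes-per-minute figure cited above are consistent with each other at typical broadcast speaking rates. In the illustrative check below, the 200 words-per-minute pace is our assumption, not a figure from the record:

```python
# Why a 99%-accurate captionwriter can still make about two mistakes per minute.
# NOTE: the speaking rate is an assumed value, not a figure from the report.
words_per_minute = 200   # assumed pace of typical broadcast speech
accuracy = 0.99          # per-word accuracy rate cited in the text

errors_per_minute = words_per_minute * (1 - accuracy)
print(f"Expected errors per minute: {errors_per_minute:.0f}")  # → 2
```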
To avoid such errors, it is important that the captioned signal be monitored as it is fed, monitored during the duplication process and checked to ensure that the equipment used is not inadvertently stripping the captions, moving them onto the wrong line or placing them in the wrong field.

----------

File: DVS

Before the
FEDERAL COMMUNICATIONS COMMISSION
Washington, D.C. 20554

In the Matter of                          )
                                          )
Closed Captioning and Video Description   )    MM Docket No. 95-176
of Video Programming                      )
                                          )
Implementation of Section 305 of the      )
Telecommunications Act of 1996            )
                                          )
Video Programming Accessibility           )

REPORT

Adopted: July 25, 1996    Released: July 29, 1996

I. INTRODUCTION

3. Section 713 of the Communications Act of 1934 ("Act"), as amended by the Telecommunications Act of 1996 ("1996 Act"), [in] ... section 713(f) requires the Commission to commence an inquiry within six months after the date of enactment of the 1996 Act "to examine the use of video descriptions of video programming in order to ensure the accessibility of video programming to persons with visual impairments."

4. Section 713 is "designed to ensure that video services are accessible to hearing impaired and visually impaired individuals." The legislative history of this section states that it is Congress' goal "to ensure that all Americans ultimately have access to video services and programs particularly as video programming becomes an increasingly important part of the home, school and workplace."

...

9. Section 713(f) focuses the Commission's inquiry on the appropriate methods and schedules for phasing video description into the marketplace and standards for this technology, including technical and quality standards for video descriptions.
In Section IV we provide a general discussion of the availability of video description and general information regarding the population groups that can benefit from its availability, the methods and costs of adding descriptions to video programming, the amount of programming now available with description and the current funding of this technology. As directed by the statute, we then address methods and schedules for phasing video description into the marketplace, including appropriate regulatory and technical requirements.

10. This report encompasses all types of available video programming with closed captioning and video description delivered to consumers through existing distribution technology. We report on the availability of broadcast commercial and noncommercial networks, basic and premium cable networks, and syndicated and locally produced broadcast and cable programming with closed captions and video description. In addition to over-the-air broadcast television service and cable television service, we examine the availability of the delivery of closed captions and video descriptions to consumers by other multichannel video programming distributors ("MVPDs"). Among these distributors are direct-to-home ("DTH") satellite services, including direct broadcast satellite ("DBS") services and home satellite dishes ("HSD"), wireless cable systems using the multichannel multipoint distribution service ("MMDS"), instructional television fixed service ("ITFS") or local multipoint distribution ("LMDS"), satellite master antenna television ("SMATV") and local exchange carrier ("LEC") video services.

B. Summary of Findings

..

2. Video Description

21. Current Status: Video description consists of a narration of the actions taking place in the video programming that are not reflected in the existing dialogue. It requires the development of a second script and uses the second audio programming ("SAP") channel. Video description has not had as wide a reach as closed captioning.
Video description is currently included only on some programs distributed by the Public Broadcasting Service ("PBS") and a few other programs distributed by cable systems. Not all broadcast stations or other video distributors are able to transmit the SAP channel, and only about half of the nation's homes have a television with the capability to receive the SAP channel. Unlike line 21 of the vertical blanking interval, which is reserved only for captioning, there is no dedicated or reserved transmission capacity for video descriptions. As a consequence, video description competes with second language transmissions, including Spanish language programming, for use of the SAP channel. According to the National Center for Health Statistics, there are approximately 8.6 million individuals who are blind or visually disabled who might benefit from video description.

22. Because video description is a newer service, there is a lack of experience with developing and assessing the best means for promoting its use. In addition, costs for video description are approximately one and a half times the costs associated with closed captioning similar programming. Video description also receives substantially less government funding, which has been a significant factor in promoting the development of closed captioning. Additional legal and technical issues exist. For example, video description requires the development of a second script, which raises creativity and copyright issues, and must use the second audio programming channel, and thus must compete for that channel with other audio services, particularly bilingual audio service.
While it is expected that the implementation of digital technology may be more conducive to video description than the current technology because it will permit the transmission of multiple audio channels, given the high costs, lack of funding and unresolved copyright issues, video description is presently a developing service that faces many obstacles before it can become more accessible.

23. Recommendation: In enacting this section of the Act, Congress intended to ensure video accessibility to all Americans, including persons with visual disabilities. The general accessibility of video description is dependent on the resolution of certain technical, legal, funding and cost issues. Any schedule for expanding the use of video description would depend, in part, on implementation of advanced digital television. Implementation of advanced digital television can make the distribution of additional audio channels feasible and facilitate the implementation of video description. In addition to these technical problems, funding remains a fundamental issue that will affect any schedule for the widespread use of video description, since it appears that advertising support alone is unlikely to be sufficient to fund this service, given the costs involved.

24. Congress has directed the Commission to assess the appropriate methods and schedules for phasing video description into the marketplace and to address certain technical and quality standards issues. The present record on which to assess video description, however, is limited, and the emerging nature of the service renders definitive conclusions difficult. Thus, we believe that, at this time, the best course is for the Commission to continue to monitor the deployment of video description and the development of standards for new video technologies that will afford greater accessibility of video description.
Specifically, we will seek additional information that will permit a better assessment of video description in conjunction with our [August 8] 1997 report to Congress assessing competition in the video marketplace. This annual report is submitted in compliance with Section 628(g) of the Act, 47 U.S.C. 548(g). In the context of this report, the Commission will be able to gather and evaluate information regarding the deployment of SAP channels and digital technology that will enable video providers and programmers to include video description. In seeking more information, we intend to focus on the specific methods and schedules for ensuring that video programming includes descriptions, technical and quality standards, and other relevant legal and policy issues.

..

IV. Video Description of Video Programming

A. Introduction

94. Video description is a more recent innovation than closed captioning. It provides aural descriptions of a program's key visual elements that are inserted during the natural pauses in the program's dialogue. For example, it describes an action that is otherwise not reflected in the dialogue, such as the movement of a person in a scene. It was first used in theatrical performances in the early 1980s, and since that time has been developed for television programming primarily by WGBH and other PBS affiliates. PBS first tested broadcast video description in 1988. The video description of a television program is most often transmitted through the SAP channel. The SAP channel is a subcarrier that allows each distributor of video to transmit an additional soundtrack. Essentially, video distributors that utilize a SAP channel allow the viewer to choose between the primary soundtrack and an additional, or secondary, soundtrack transmitted on the SAP channel for the program. In addition to video description, the SAP channel is also frequently used for alternative language programming.

95.
This ancillary service is permitted under the Commission's rules so long as it causes no observable degradation to any portion of the visual or aural broadcast signal. To receive the service, the audience member must have a stereo television or a videocassette recorder ("VCR") that is capable of receiving the SAP channel, or a television adapter for this channel. There are presently no regulatory requirements regarding video description.

B. Audiences that Benefit from Video Description

96. The precise number of persons with visual disabilities likely to benefit from video description is difficult to estimate. This is, in part, due to the wide differences in the degree of visual disability. Indeed, many persons with sufficient vision to watch normal television programming may still benefit from video description. According to the National Center for Health Statistics, there are 8.6 million persons who are visually disabled. However, other estimates of the population of persons with visual disabilities who would benefit most from video description range between eight and 12 million persons.[235] Beyond the direct benefit to such persons, video description can relieve family and friends of persons with visual disabilities of the task of serving as ad hoc describers who provide on-the-spot descriptions while viewing programming.

97. Many of these individuals are children, for whom educational programming with video description would offer significant benefits. Estimates suggest that up to 500,000 persons under the age of 18 can be classified as visually disabled. Video description would allow these children to enjoy the same educational experience as their sighted peers. Finally, video description may allow parents with visual disabilities to participate more fully in their children's educational experience.

98. As the population ages, an increasing number of people will become visually disabled as part of the aging process.
These people may also become increasingly dependent upon television for information, entertainment and companionship.

99. Some sources have suggested that video description services can also offer ancillary benefits to persons without visual disabilities. Video description may also benefit persons with cognitive or learning disabilities. Furthermore, video description may offer an educational opportunity for the sighted to improve their vocabulary and even writing skills by suggesting more creative and informative ways of describing a scene. Persons without visual disabilities may sometimes choose to passively "watch" television while engaged in other activities. These persons, like those in the visually disabled community, are already partially served by conventional television and television-band radio receivers. However, their experience, like that of people with visual disabilities, might be enriched through video description. The widespread availability of video description might increase this type of use.

C. Methods of Distribution of Video Description

100. Generally, video description service is provided using the SAP channel. The SAP channel allows for the delivery of a third audio track for a program, in addition to the monaural and stereophonic audio tracks. The transmission of the SAP channel is accomplished with the use of a secondary carrier called a subcarrier. The ancillary audio (in this instance, video description) is transferred onto the SAP subcarrier through the use of a modulator. Therefore, any program distributor wishing to deliver SAP would need to install an additional modulator at the transmission facility. In comparison, closed captioning information is carried on the VBI and does not require the use of additional equipment at the transmission facility.
The VBI is available as an inherent feature of the Broadcast Television System Committee ("BTSC") video signal standard and is part of the transmission of a television signal, whereas the SAP channel requires the video distributor to generate a separate subcarrier containing the additional audio track.

101. To access the SAP channel, a viewer must have a television or VCR equipped to receive it. Approximately 52% of American households own SAP-compatible televisions, and 20% own VCRs capable of receiving the SAP channel. A consumer who has a television or VCR with SAP capability can activate this feature to receive the video description or other audio, if available, in lieu of the primary soundtrack.

102. When the SAP channel is employed, the program can be transmitted with two separate audio tracks. The additional track "follows" the main program signal through the distribution process. For example, the SAP channel as currently used by PBS for its video description follows the main program signal from the network's master control facility and satellite distribution system to the local station's broadcast facility and through the local transmitter. The accommodation of this additional soundtrack typically requires changes to the network and local station plant wiring and equipment. At the local transmitter, the distributor must have the technical facilities to remodulate the subcarrier signal to include the SAP channel information.

103. Video description may also be provided as an "open" service, with the descriptive narrative incorporated as part of the regular soundtrack. Narrative Television Network ("NTN") is currently providing nearly 20 hours per week of such programming over more than 1000 cable systems. NTN states that this method has the advantage of being available without the special equipment required to access the SAP channel.
One potential disadvantage to this method is that the additional narrative may act as a distraction to the wider sighted audience, who wish to watch programming in a conventional manner. 104. In Canada, video description has been provided using a Radio Reading Service. AudioVision Canada transmits descriptive audio separately from regular audio over a radio reading service available on most Canadian cable television FM systems. This allows the consumer to receive either the video signal with the primary soundtrack or the video description soundtrack alone, but not both. For this reason, this technique works best for those not interested in, or not able to see, the video portion of the program, since only one television channel can be accessed at a time. This would partially undermine the value of video description by not allowing persons with visual disabilities to enjoy television programming with their friends and family. However, this technique, as with open video description, allows the audience access to the descriptive narrative without special equipment. 105. Finally, video description may also benefit from digital television technology. This technology may allow operators to provide the viewer with a choice between video description and alternative language programming because it may permit the transmission of multiple audio tracks. According to NAB, digital television may also allow a viewer to listen to more than one audio channel at the same time. This feature may lower the cost of providing video description by allowing the consumer to select simultaneously both the main audio program with the conventional soundtrack and a video description audio track synchronized with the natural pauses in that soundtrack. This would allow the producer to eliminate the costly process of mixing the main soundtrack with the descriptive narrative. D. Cost of Video Description 106. 
Estimates for the cost of providing video description vary widely. The service is labor-intensive and the actual costs seem to vary considerably depending on the particular project. NCTA estimates that the cost of providing descriptive video service for a full-length feature film can range up to $10,000. However, NTN estimates the cost of high-quality narrative programming, when included as part of the primary audio track, to be between $1000 and $1200 per program hour. In addition to NCTA and NTN, other commenters address the issue of cost. PBS estimates the cost at one and one half times the cost of closed captioning, or $3000 per program hour. Audio Optics estimates that the cost alone for adding video description to a one-and-a-half-hour feature film would be about $4000, exclusive of profit or overhead. This would equal about $2667 per program hour. 107. Video description also entails increased distribution costs. Currently, the commercial broadcast networks do not have the facilities to distribute the SAP channel to affiliated stations for retransmission. In order to distribute programming with SAP channel audio, the network must encode the SAP signal into the transmission to the satellite using a costly digital encryption system. The encrypted signal must be decrypted when received by the ground station. ABC, while unable to provide precise estimates, states that the required upgrades at the network production facilities and the over 200 affiliated stations could cost "many hundreds of thousands of dollars." NBC and CBS estimate the total cost of retrofitting their network facilities and infrastructure with equipment to provide video description using a SAP channel to be at least several million dollars. 108. After receiving the decrypted signal from the networks, the ground station must encode the SAP signal into its signal using a SAP generator. 
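The production cost estimates in paragraph 106 above can be placed on a common per-program-hour basis with a short calculation. This is illustrative arithmetic only; the dollar amounts are the commenters' own estimates, and the 1.5-hour running time used to normalize the NCTA feature-film figure is an assumption made here, not a figure from the record:

```python
# Per-program-hour comparison of the video description production cost
# estimates quoted in paragraph 106. The dollar amounts come from the
# commenters' filings; the 1.5-hour feature-film running time used for
# the NCTA figure is an assumption made here for comparison only.

estimates = {
    # source: (quoted cost in dollars, program hours covered)
    "NCTA (upper bound, feature film)": (10000, 1.5),
    "NTN (open description, high end)": (1200, 1.0),
    "PBS": (3000, 1.0),
    "Audio Optics (1.5-hour film)": (4000, 1.5),
}

for source, (cost, hours) in estimates.items():
    print(f"{source}: about ${cost / hours:,.0f} per program hour")
```

On these assumptions the Audio Optics figure works out to the $2667 per program hour stated above, while the NCTA upper bound corresponds to roughly $6667 per program hour.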
The commercial networks estimate that individual stations that do not have SAP reception and decoding capability would have to spend between $30,000 and $1 million for each local station to obtain it. According to the commercial broadcast networks, upgrades to current facilities necessary to provide video description would be wasted after conversion to digital television. 109. Cable systems are technically able to transmit information on the SAP channel. However, cable operators face the same problems as broadcast stations regarding the reception and retransmission of SAP signals. In the case of cable the problems are somewhat compounded because the cable system requires a separate SAP generator for each channel it wishes to distribute with the SAP channel. E. Funding for Video Description 110. To date, the primary source of funding for video description has been through government grants administered by the PBS, National Endowment for the Arts, National Science Foundation and especially the DOE. The DOE currently allocates $1.5 million for video description or about $0.19 per American with a visual disability. 111. In addition to public funding, private sources have begun to support video description. WGBH's Video Description Service receives 35% of its funding from corporations and foundations, home video revenues and individual viewer donations. Video description also has begun to enjoy some success as a commercially viable product as witnessed by the success of NTN, Kaleidoscope and the recent introduction of described programming on Turner Classic Movies. However, even these commercial projects benefit from public funding. For instance, NTN has received government grants. Turner Classic Movies has developed its video description programming in partnership with WGBH, which as noted receives 65% of its funding from government grants. F. Current Availability of Video Description 112. 
Public broadcasting has contributed substantially to the development and availability of video description. PBS currently distributes video description on 22 programs, including Mister Rogers' Neighborhood, Masterpiece Theatre and Mystery. PBS's video description programming is currently being broadcast by 130 PBS stations, reaching 71% of the U.S. population. PBS also provided video description for the 1993 presidential inauguration, the only example of live video description to date. 113. There is no video description on the commercial broadcast networks. According to the networks, providing video description would be prohibitively expensive and logistically onerous. For instance, NBC observes that PBS is able to describe some of its programming because it receives the master tape two to three weeks in advance. In contrast, commercial networks state that they receive their master tapes two to three days in advance. 114. Kaleidoscope, the cable programming network devoted to the needs of persons with disabilities, provides movies that include video description. Kaleidoscope's programming schedule includes between two and two-and-one-half hours of such movies each week. 115. In addition to Kaleidoscope, NTN also provides video description. NTN does not use the SAP channel but rather uses "open video description," incorporating the descriptive narrative into the regular soundtrack. NTN programming is distributed by satellite, cable and broadcast. The majority of its audience consists of cable subscribers who receive NTN's programming as part of their basic service. NTN maintains that there is some evidence that the availability of NTN programming acts as an inducement to persons with visual disabilities to subscribe to cable. NTN also cites its experience in Canada, where it is usually distributed as part of a premium channel. 
It asserts that its experience there indicates that the availability of such programming may induce persons with visual disabilities to take premium services. 116. Turner Classic Movies began airing movies with video description narrative as its "DVS Showcase" series. This series is aired weekly and runs for about two hours every Sunday afternoon. Turner Classic Movies' efforts are a joint project with WGBH and currently include 12 titles, such as Casablanca and The Maltese Falcon. Turner Classic Movies plans to add 15 more video description titles this fall. 117. Video description poses varying degrees of additional difficulty for other MVPDs. DTH satellite systems face the same problems as other distributors. For example, these providers express general concerns regarding the availability of described programming, a conflicting demand for bilingual programming on the SAP channel and the possible expense of creating and adding descriptive narrative. HSD is not capable of passing through the SAP channel. Using current technology, many MMDS operators are unable to decode SAP programming without upgrading a significant portion of their equipment. While MMDS systems are generally capable of passing the SAP channel through, many of the current set-top boxes are not capable of decoding the signal. Similarly, SMATVs are able to transmit and receive the SAP channel but face the same limitations of current SAP channel technology as other MVPD operators. 118. With the exception of the service provided by PBS, Kaleidoscope, NTN and Turner Classic Movies noted above, video description, as such, is unavailable on local, regional or syndicated broadcast television and local or regional cable services. Thus, persons with visual disabilities must rely on these limited video description services or the information that can be gleaned from the conventional television soundtrack. G. Obstacles to Video Description 119. 
Barriers to video description can be divided into two broad categories: technical issues and obstructions inherent to the service. Technical concerns include the unavailability of the SAP channel or the inability of some broadcast and cable networks to distribute programming with the SAP channel. 120. Other barriers to more widespread use of video description are inherent to the service. For instance, the service requires development of a second script. The development and production of this second script can add considerably to both the production time and the budget required to produce a program. 121. In addition to the increased costs, some commenters suggest that there may be significant copyright issues associated with the addition of descriptive narration to video programming. Whereas closed captioning is essentially a verbatim transcript of the original script, video description necessarily involves creative decisions and thus may create a distinct derivative work. A derivative work is an addition to a pre-existing work which transforms or otherwise modifies the original work. To the extent that video description is subject to copyright laws, an unauthorized video description of an underlying work might constitute a copyright infringement. As a consequence, commenters assert that, absent a statutory exception, mandatory video description regulations may conflict with the copyright holders' exclusive rights to create derivative works from their copyrighted works. 122. Advocates for persons with visual disabilities argue that copyright issues can and will be resolved by the marketplace if video description requirements are put into place. According to this line of reasoning, video description will simply become a routine part of licensing agreements if the service is required. 123. Furthermore, because video description requires breaks in the dialogue to permit the insertion of the description, some programming may simply not be amenable to video description. 
For instance, programming with a great deal of dialogue may not permit the additional description while a classical music concert or popular music video might not be appropriate for video description because the descriptive narrative would interfere with the primary substance of the programming. In other cases, programming such as an action adventure movie may contain so much action that an ongoing video description could not keep up with the action even if gaps in the dialogue existed. 124. Similarly, other forms of programming already contain considerable narrative and, therefore, video description may be unnecessary. Play-by-play sports programming and talk shows are often cited by programmers as examples of programming which do not warrant video description. However, several commenters on behalf of the visually disabled community argue that play-by-play does not sufficiently address the needs of people with visual disabilities. For instance, a play-by-play announcer excitedly interjecting "Wow did you see that?" does not provide information to a viewer with visual disabilities. Other commenters suggest that video description is not necessary for sports if a comparable radio broadcast is available. Still other commenters respond that a radio broadcast is only a substitute for video description if one assumes persons with visual disabilities were watching sports in isolation. These commenters argue that a significant benefit of video description is that it allows people with visual disabilities to enjoy television programming in social situations and to interact with their sighted friends and family members. Moreover, WGBH notes that even radio commentary is developed primarily with sighted people in mind and may omit information useful to people with visual disabilities. 125. Finally, many stations already use the SAP channel for other purposes. 
The most common purpose cited is bilingual programming, with 4.7% of local stations reported to be using the SAP channel to provide second language programming to reach 28% of television households. Other uses include local stations using the SAP channel to provide weather bulletins, news or the local farm report. A number of stations carry another feed of their main audio channel on the SAP channel to avoid consumer confusion if the SAP channel were inadvertently selected. Such uses usually serve larger communities and necessarily compete with video description. Commenters indicate that to the extent that stations believe that the demand for such uses of SAP capabilities is greater than the demand for video description, they can be expected to preempt video description at least as long as SAP remains a comparatively limited resource and is not mandated by law or regulation. 126. It appears that digital television may represent a solution to the problem of limited SAP capacity. Digital television allows video distributors to compress considerably more information within a given amount of bandwidth. Digital television may allow broadcasters to transmit several SAP like signals in conjunction with a program thereby permitting the consumer to choose between the conventional soundtrack, non-English language soundtracks or video description. However, this would necessitate the consumer having a digital set-top box or digital television capable of accessing the digital video description. H. Statutory Considerations 127. Under Section 713(f), the Commission is required to assess appropriate methods and possible schedules for phasing video description into the marketplace. We also are required to assess technical and quality standards for video descriptions, a definition of programming for which video descriptions would apply and other relevant technical and legal issues. In this section, we examine each of these matters. 128. 
Due to their limited experience with video description and the technical difficulties today in providing video description over the SAP channel, industry commenters generally assert that it is premature to consider implementation of video description requirements. Several commenters suggest that video description should be left to marketplace demands. Some commenters suggest that as the population ages, market demand will ensure that video description will become more widely available. Other commenters assert that as household penetration of SAP-compatible televisions and VCRs increases, the marketplace can be expected to respond with increased product for the larger number of viewers with visual disabilities capable of receiving video described programs. 129. Still other commenters, while recognizing a need for video description, urge various exemptions, such as certain kinds of programming where video description would be redundant or overly burdensome, and certain kinds of programmers or video distributors that might face undue hardship if required to provide video description service. Among these suggested exemptions are sports programming, local access programming and programming that already consists primarily of a discussion or narrative. 130. In marked contrast to industry commenters, persons who would substantially benefit from the availability of video description and organizations that serve people who are visually disabled urge that the service be broadened and made more generally available. These commenters advocate a broad range of strategies, from mandatory requirements[74] to strong economic incentives, as well as various combinations of mandates and incentives. While several commenters offered these suggestions, few offered any specifics regarding the implementation of such incentive programs. 131. The American Council of the Blind ("ACB") urges that an increase in Federal funding is necessary to further the development of video description. 
At the same time, ACB contends that strict video description requirements should be applied across the industry, to producers, distributors and program providers. According to the American Foundation for the Blind ("AFB"), there is no justification for any blanket exemption for any class of programmer or distributor. Rather, AFB suggests that the Commission adopt an undue burden standard similar to the standard used for closed captioning. Under such a standard, the Commission would be required to consider the nature and cost of adding video description, the impact on the provider or program owner, the financial resources of the program owner and the type of operations of the provider or program owner. ACB recommends that in establishing standards, priorities and schedules for implementing video description requirements, the Commission should consult with an advisory board composed of consumers with visual disabilities, industry representatives and individuals with video programming experience. 132. Metropolitan Washington Ear suggests that while the marketplace may ultimately provide widespread use of video description, a government mandate is necessary in order to develop the market for this service. Metropolitan Washington Ear proposes that all program carriers be required to have the capability of relaying video description. Noting that the library of video described programming currently available is limited, Metropolitan Washington Ear also proposes a five-year phase-in period before video description becomes a required part of most programming. 133. In addition to addressing potential regulatory requirements, commenters propose various alternative means of expanding the availability of video description services. These proposals range from increased government funding to tax incentives. In some cases, the positions of these commenters were somewhat contradictory. 
For instance, NTN argues that video description is economically viable in the marketplace, while maintaining that increased government funding will be necessary to increase the availability of video description. US West proposes that private sources and the marketplace should be the primary funding vehicles for video description. To the extent that public funding is necessary, US West proposes that the money should come from a percentage of locally collected fees, such as cable franchise fees charged by local governments. US West further proposes that the government should provide additional resources to video production companies that insert video description into their programming, and also to those companies and individuals that provide private support, through the use of tax credits or deductions as applicable. 134. Some industry commenters express concern that any video description requirement to be recommended or ultimately imposed should require the producer of the programming rather than the video distributor to include the descriptive narrative. These commenters argue that such a requirement is more efficient than requiring individual video distributors to provide the descriptive narratives. Similarly, industry commenters urge that any requirements mandating that programming include video description be imposed only on a prospective basis. These commenters argue that requiring video description of the enormous libraries of existing programming would be unduly onerous and impose an impossible burden on the industry. 135. Several commenters address the issue of quality standards. These commenters believe that video description has an inherently subjective aspect and that the issue of quality is not as easily measured as in the case of closed captioning. 
Whereas the quality of closed captioning can be described, at least in part, in terms of errors per hour of programming, the quality of video description is, in large measure, a matter of the artistic choices made in developing a descriptive narrative, such as what is described and how accurately the narrative conveys the experience enjoyed by a sighted viewer. Nevertheless, these commenters are adamant that video description address the actual needs of persons with visual disabilities rather than the needs perceived by the sighted community. In order to ensure this, these commenters urge that audience testing be required or that a standards board composed of persons with visual disabilities be created. 136. Some commenters suggest that any regulatory action addressing video description should be on a parity with closed captioning. AFB proposes that the standards for video description and closed captioning be the same, including appropriate undue burden tests. Bell Atlantic suggests that the same considerations that are of concern in developing closed captioning standards must be addressed in recommending any regulations for video description. WGBH suggests that video description in its present state should be treated in much the same way as closed captioning is currently treated on cable systems, that is, if it is part of the original program source, it must be included if technically feasible. 137. Several commenters suggest that emergency information provided using captioning across the bottom of the screen without audio is of special concern. These commenters cite the public safety need to provide both sighted people and persons with visual disabilities with important information. AFB proposes that such information be given priority in any requirement implementation schedule that the Commission adopts. I. Conclusion 138. 
In enacting Section 713 of the Act, Congress intended to ensure video accessibility to all Americans, including individuals with visual disabilities. Video description is an emerging service that currently enjoys only limited availability. Congress has directed the Commission to assess the appropriate methods and schedules for phasing video description into the marketplace and to address certain technical and quality standards issues. The present record on which to assess video description, however, is limited, and the emerging nature of the service renders definitive conclusions difficult. Moreover, with the exception of the Metropolitan Washington Ear's proposal to phase in video description within five years, commenters did not provide any guidance regarding the implementation of video description of video programming in terms of time frames, methods or standards. Nevertheless we believe that the development of rules for closed captioning, which is more widely available, can provide a useful model for the process of phasing in broadened use of video description. The nature and speed of the process for video description remains dependent on the resolution of certain technical, funding, legal and cost issues, as described below. 139. Many broadcast television stations are not yet equipped to transmit a SAP signal. These stations tend to be in smaller markets with a smaller economic base to support increased costs. Other MVPDs also currently do not transmit or decode a SAP signal. Advanced digital technologies, including specifically those used in broadcasting, direct broadcast satellites, MMDS ("wireless cable"), cable and wireline "open video systems" appear capable, when joined with digital receivers, of transmitting a separate channel. In particular, advanced digital television could make the distribution of additional audio channels feasible and thereby eliminate the conflict currently existing with other audio channel uses (e.g., second language). 
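The receiver-side combination that such separate digital audio channels would permit (described in paragraphs 105 and 126 above) can be sketched in miniature. This is a hypothetical model of our own, not an actual digital television audio pipeline; it illustrates only why no studio mixing step is needed when a digital receiver can sum two synchronized audio streams:

```python
# Toy model of receiver-side mixing of a main soundtrack and a video
# description track delivered as separate digital audio channels.
# Each list is a stream of audio samples; zeros in the description
# track correspond to stretches where dialogue is active. This is an
# illustration only, not an actual DTV audio implementation.

main_track        = [5, 7, 0, 0, 0, 6, 8, 0, 0]   # dialogue with natural pauses
description_track = [0, 0, 3, 4, 2, 0, 0, 5, 1]   # narration timed to the pauses

def receiver_mix(main, description):
    """Sum the two streams sample by sample, as a receiver carrying
    both audio channels could do, removing the need to pre-mix."""
    return [m + d for m, d in zip(main, description)]

print(receiver_mix(main_track, description_track))
```

Because the summation happens in the consumer's equipment, the producer supplies only the description track timed to the pauses; the "mixed" program never has to be created and stored as a separate master.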
Any schedule for the full deployment of video description is dependent, in part, on the implementation of advanced digital technologies. 140. In addition to these technical problems, funding remains a fundamental issue that will affect any schedule for the widespread use of video description. Currently, given the costs involved, it appears unlikely that advertising support alone will be sufficient to fund this service. Irrespective of the level and source of funding, it appears desirable to phase in service over a period of years. We believe that initial requirements for video description should be applied to new programming that is widely available through national distribution services and attracts the largest audiences, such as prime-time entertainment series. Over a period of several years, video description should be phased in for programming with more limited availability, including services distributed in limited areas, and programming that attracts smaller audiences, such as daytime shows. Lower priority for video description should be given to programming that is primarily aural in nature, including newscasts and sports events. Phasing in video description in this manner would follow the model of the development of closed captioning. A more specific schedule for increasing the availability of video description is dependent on the nature of the support mechanism selected. In this regard, Congress could consider increasing funding mechanisms for pilot programming and seed money for joint government/industry projects and could encourage the incorporation of video description in program production. Congress could use the development of closed captioning as a model for broadening video accessibility. 141. Additionally, there are certain legal issues, such as copyright matters, that remain unresolved and are likely to require a Federal reassessment of the applicability of existing laws. 
The copyright issue might be resolved through private negotiation with respect to newly produced material as part of the initial production process. The law, however, may need to be clarified to permit the addition of descriptions without copyright owner approval to older, previously published programming by parties down the distribution chain from the original production process. 142. Therefore, we believe that the best course is for the Commission to continue to collect information and monitor the deployment of video description and the development of standards for new video technologies that are likely to affect the availability of video description. We intend to seek additional information and data that will permit a better assessment of video description in conjunction with our 1997 report to Congress assessing competition in the video marketplace. This annual report is submitted to Congress in compliance with Section 628(g) of the Act, 47 U.S.C. 548(g). In the context of this report, the Commission will be able to gather and evaluate information regarding the deployment of SAP channels and digital technology that will enable video providers and programmers to include video description. Persons with disabilities and the video programming industries will be able to report to the Commission on any developments to coordinate efforts in new technology standard setting and funding mechanisms. In seeking more information, we intend to continue to focus on the specific methods and schedules for ensuring that video programming includes descriptions, technical and quality standards and other relevant legal and policy issues. Simultaneously, we will monitor the deployment of video description through voluntary efforts and the development of standards for new video technologies that will afford greater accessibility of video description. 
Based on a more complete record, we expect to be able to better assess those issues that were not fully addressed through this proceeding. ---------- File: ACT96 TELECOMMUNICATIONS ACT OF 1996 REP. DANIEL FRISA (R-NY) The luncheon speaker was the Hon. Daniel Frisa, a Republican first elected to Congress in 1994. Rep. Frisa represents the Fourth District, with offices in Hempstead, Mineola, and Valley Stream. He serves on the House Commerce Committee and was a member of the House-Senate Conference Committee that hammered out the final version of what became P.L. 104-104, the Telecommunications Act of 1996. Mr. Frisa spoke to conference participants about the bill, which then was in almost-final shape, emphasizing the disability provisions which are excerpted below. Mr. Frisa began by explaining that Congress was nearly done with the bill, and that President Clinton's signature was anticipated. He noted that Congress had been working on telecommunications reform for a number of years, but that the controversial nature of the legislation had made it impossible to pass anything yet. He noted that NYNEX is a major employer in the Fourth District, and that he had worked closely with the Company to understand how the pending legislation would affect it and its customers. The act, Rep. Frisa said, adds a new section 255 called "Access by Persons with Disabilities" to the landmark 1934 Communications Act. This new section, he said, called for companies that manufacture telecommunications equipment to make new equipment accessible to and useable by Americans with disabilities, if readily achievable. He said the term "readily achievable" means that the manufacturers can accomplish accessibility without making their products unaffordable or delaying their introduction unnecessarily. In the event that a manufacturer discovered that it could not make something accessible, Rep. 
Frisa observed, the bill requires the manufacturer to ensure that the equipment works well with adaptive peripheral devices that people with disabilities use, such as speech synthesizers or TTYs. Section 255 also requires that new telecommunications services be accessible to and useable by people with disabilities. He noted that the Federal Communications Commission would determine what kinds of services are "telecommunications services," as opposed to information services or computer-related services. Computer hardware and software traditionally have not been regulated by the Federal Government. Captioning and video description are addressed in another new section, section 713. Rep. Frisa commented that captioning will be required for most video programming, including some already existing television programs and movies. Video description for blind individuals, however, would only be studied -- the bill does not require owners and producers of video programming to video describe their offerings. TELECOMMUNICATIONS ACT OF 1996 KEY DISABILITY-RELATED PROVISIONS Below are: "Interconnection" (Section 251), "Access by persons with disabilities" (Section 255), "Coordination for Interconnectivity" (Section 256), and "Video programming accessibility" (Section 305, creating a new Section 713), of P.L. 104-104. President Clinton signed the bill February 8, 1996. TELECOMMUNICATIONS ACT OF 1996, P.L. 104-104 "SEC. 251. INTERCONNECTION "(a) General Duty of Telecommunications Carriers.--Each telecommunications carrier has the duty-- "(1) to interconnect directly or indirectly with the facilities and equipment of other telecommunications carriers; and "(2) not to install network features, functions, or capabilities that do not comply with the guidelines and standards established pursuant to section 255 or 256." ***************************************************************** "SEC. 255. ACCESS BY PERSONS WITH DISABILITIES. 
"(a) Definitions.--As used in this section-- "(1) Disability.--The term `disability' has the meaning given to it by section 3(2)(A) of the Americans with Disabilities Act of 1990 (42 U.S.C. 12102(2)(A)). "(2) Readily achievable.--The term `readily achievable' has the meaning given to it by section 301(9) of that Act (42 U.S.C. 12181(9)). "(b) Manufacturing.--A manufacturer of telecommunications equipment or customer premises equipment shall ensure that the equipment is designed, developed, and fabricated to be accessible to and usable by individuals with disabilities, if readily achievable. "(c) Telecommunications Services.--A provider of telecommunications service shall ensure that the service is accessible to and usable by individuals with disabilities, if readily achievable. "(d) Compatibility.--Whenever the requirements of subsections (b) and (c) are not readily achievable, such a manufacturer or provider shall ensure that the equipment or service is compatible with existing peripheral devices or specialized customer premises equipment commonly used by individuals with disabilities to achieve access, if readily achievable. "(e) Guidelines.--Within 18 months after the date of enactment of the Telecommunications Act of 1996, the Architectural and Transportation Barriers Compliance Board shall develop guidelines for accessibility of telecommunications equipment and customer premises equipment in conjunction with the Commission. The Board shall review and update the guidelines periodically. "(f) No Additional Private Rights Authorized.--Nothing in this section shall be construed to authorize any private right of action to enforce any requirement of this section or any regulation thereunder. The Commission shall have exclusive jurisdiction with respect to any complaint under this section." ***************************************************************** "SEC. 256. COORDINATION FOR INTERCONNECTIVITY. .. 
"(b) Commission Functions.--In carrying out the purposes of this section, the Commission-- .. "(2) may participate, in a manner consistent with its authority and practice prior to the enactment of this section, in the development of appropriate industry standard-setting organizations of public telecommunications network interconnectivity standards that promote access to-- .. "(B) network capabilities and services by individuals with disabilities." ***************************************************************** SEC. 305. VIDEO PROGRAMMING ACCESSIBILITY. Title VII is amended by inserting after section 712 (47 U.S.C. 612) the following new section: "SEC. 713. VIDEO PROGRAMMING ACCESSIBILITY. "(a) Commission Inquiry.--Within 180 days after the date of enactment of the Telecommunications Act of 1996, the Federal Communications Commission shall complete an inquiry to ascertain the level at which video programming is closed captioned. Such inquiry shall examine the extent to which existing or previously published programming is closed captioned, the size of the video programming provider or programming owner providing closed captioning, the size of the market served, the relative audience shares achieved, or any other related factors. The Commission shall submit to the Congress a report on the results of such inquiry. "(b) Accountability Criteria.--Within 18 months after such date of enactment, the Commission shall prescribe such regulations as are necessary to implement this section. Such regulations shall ensure that-- "(1) video programming first published or exhibited after the effective date of such regulations is fully accessible through the provision of closed captions, except as provided in subsection (d); and "(2) video programming providers or owners maximize the accessibility of video programming first published or exhibited prior to the effective date of such regulations through the provision of closed captions, except as provided in subsection (d). 
"(c) Deadlines for Captioning.--Such regulations shall include an appropriate schedule of deadlines for the provision of closed captioning of video programming. "(d) Exemptions.--Notwithstanding subsection (b)-- "(1) the Commission may exempt by regulation programs, classes of programs, or services for which the Commission has determined that the provision of closed captioning would be economically burdensome to the provider or owner of such programming; "(2) a provider of video programming or the owner of any program carried by the provider shall not be obligated to supply closed captions if such action would be inconsistent with contracts in effect on the date of enactment of the Telecommunications Act of 1996, except that nothing in this section shall be construed to relieve a video programming provider of its obligations to provide services required by Federal law; and "(3) a provider of video programming or program owner may petition the Commission for an exemption from the requirements of this section, and the Commission may grant such petition upon a showing that the requirements contained in this section would result in an undue burden. "(e) Undue Burden.--The term `undue burden' means significant difficulty or expense. In determining whether the closed captions necessary to comply with the requirements of this paragraph would result in an undue economic burden, the factors to be considered include-- "(1) the nature and cost of the closed captions for the programming; "(2) the impact on the operation of the provider or program owner; "(3) the financial resources of the provider or program owner; and "(4) the type of operations of the provider or program owner. 
"(f) Video Descriptions Inquiry.--Within 6 months after the date of enactment of the Telecommunications Act of 1996, the Commission shall commence an inquiry to examine the use of video descriptions on video programming in order to ensure the accessibility of video programming to persons with visual impairments, and report to Congress on its findings. The Commission's report shall assess appropriate methods and schedules for phasing video descriptions into the marketplace, technical and quality standards for video descriptions, a definition of programming for which video descriptions would apply, and other technical and legal issues that the Commission deems appropriate. "(g) Video Description.--For purposes of this section, `video description' means the insertion of audio narrated descriptions of a television program's key visual elements into natural pauses between the program's dialogue. "(h) Private Rights of Actions Prohibited.--Nothing in this section shall be construed to authorize any private right of action to enforce any requirement of this section or any regulation thereunder. The Commission shall have exclusive jurisdiction with respect to any complaint under this section." ---------- File: PANELS PANEL SESSION SUMMARIES The afternoon activities featured three simultaneous panels. In one, representatives from NYNEX Corporation outlined services available from the Company and articulated corporate programs for people with disabilities. In a second panel, two New York State (NYS) officials discussed state policies on telecommunications and related issues as these affect New Yorkers with disabilities. The final panel featured computer trainers who described programs available to people with disabilities who want to learn how to use PC's and how to surf the 'Net. NYNEX CORPORATION PANEL Mary Essex, of the NYNEX Center for Individuals with Disabilities in Marlborough, MA, chaired the NYNEX Corporation panel. Featured speakers were Dr. 
Sara Basson, of NYNEX Science and Technology, in White Plains; Jim Barry, of NYNEX Consumer Affairs; and Philip Alvarez, NYNEX Market Development. Dr. Basson demonstrated VoiceDialing(SM) and her research on advanced versions of the service, which is now widely available throughout New York and New England. In response to a question about whether more than one member of a household could use the same VoiceDialing template, Dr. Basson said "Yes, but only if your voices are similar. My 6-year-old and my 10-year-old can use my template, because our voices are similar enough." She also encouraged a woman with cerebral palsy, who worried that the software might not work with her voice, to try it. VoiceDialing was thoroughly tested by NYNEX prior to its initial offering. In one family participating in the trial, a boy trained the software to dial his grandmother when the dog barked; it worked! This helps to show that even people with very unusual voices can dial effectively using VoiceDialing, as long as their voices, however atypical, are consistent from day to day. However, Dr. Basson noted, someone using VoiceDialing to call in an emergency might encounter problems, because our "hysterical voices" may be very different from the trained templates. Jim Barry discussed NYNEX's policies on Universal Design. He outlined what "universal design" means and gave examples. [Editor's Note: An excellent site for information about universal design, including self-evaluation guides, is the Trace R&D site (http://www.trace.wisc.edu). Several publications on universal design, as well as design guidelines and standards, are available there.] Philip Alvarez talked about the NYNEX Network platform, showing differences between copper and fiber optic cable. With copper, service is limited to voice phone calls, rather slow fax and email messages, and very jerky video.
With fiber, however, motion-picture-quality video, very fast fax and email communication, and clearer voice phone calls become possible. Members of the audience appreciated the opportunity to see and even touch copper and fiber cable. NEW YORK STATE GOVERNMENT PANEL This panel featured Deborah Buck, of the Office of Advocate for Persons with Disabilities, in Albany, who served as moderator, and Tom Burke, of the NYS Department of Public Service. Ms. Buck discussed the NYNEX Equipment Distribution Program, the NYS Diffusion Fund, and the Office of Advocate's approach to policy issues. With respect to the equipment program, which at the time of the conference was not yet in operation, Ms. Buck explained that NYNEX had assembled a good advisory group to help design the program. She said that because funds were very limited, eligibility for free equipment would be limited to people who are, or become, NYNEX LifeLine customers. LifeLine, she explained, is designed for low-income individuals, and offers basic phone service for as little as $1 per month plus phone tolls. [The program is now expected to begin in December 1996. NYNEX LifeLine Service subscribers who have disabilities that prevent or limit use of the telephone soon may apply for free assistive equipment under the program. Qualifying disabilities include deafness, deaf-blindness, severe hearing loss ("hard of hearing"), speech impediments or impairments, blindness or low vision, or mobility limitations. Any combination of these disabilities also is included. The disability must be "certified" by a physician or NYNEX Affiliated Agency. Eligible individuals may apply. If selected, the individual may call the NYNEX Center for Individuals with Disabilities -- or an affiliated organization or agency -- to see the equipment, try it out, and choose from the available models. If the individual desires, the equipment will be shipped to his or her home.
Under this program, the equipment then belongs to the individual customer. Because funds are limited, the LifeLine Assistive Equipment Distribution program will be administered under a Selection Process. First priority will be given to persons who now have no phone service and no equipment. Second priority is for people who have phone service but no equipment. Finally, people who do have phone service and do have some equipment will be served, to the extent that resources permit. The equipment distribution program will offer only basic equipment. This includes telecommunications devices for deaf people (TTYs), light signallers or amplifiers, speaker phones or other hands-free phones, big button phones or Back Talker, Braille TTY or large display TTY with signallers, and other equipment. The idea is to allow the individual to make the same use of the telephone network as can people with no disabilities.] Some members of the audience asked Ms. Buck about the role of the Office of Advocate for Persons with Disabilities, noting that New York's Governor, George Pataki, had made numerous proposals for cutting services and benefits for New Yorkers with disabilities. Ms. Buck explained that although the Office of Advocate is located within the Governor's office, its staff worked to advocate on behalf of people with disabilities, helping the Governor to become more aware of the potential impact of some of his policy proposals. She also explained the Diffusion Fund. This fund distributes $10 million annually for five years to organizations serving low-income, rural, inner-city and other traditionally underserved, economically disadvantaged areas within the State. The groups use the money for advanced telecommunications products and services, customer premises equipment, and related training. Ms. Buck said that she had worked hard to be sure that programs funded would have to demonstrate access for persons with disabilities.
Tom Burke, of the New York State Department of Public Service, picked up on the Diffusion Fund. He said he did not disagree with some members of the audience that the Diffusion Fund should be larger, urging participants to become actively involved in PSC proceedings so that they could offer input. He also explained the procedures followed by the NYS Public Service Commission (PSC) and how people with disabilities could offer comments on pending matters and otherwise influence decisions. He said that both Commission offices (one is in Albany and the other is in New York City) are accessible to people with disabilities; the PSC also provides interpreters for hearings and Commission sessions upon request. Several participants asked Mr. Burke about Enhanced 911 (E911). Mr. Burke explained that most of the State has had E911 capabilities for several years (Nassau County, where Hofstra is located, has had it for eight years). He said that Suffolk County, on the eastern end of Long Island, probably would get E911 service some time in 1997 or 1998. New York City, he added, would soon get E911 [Ed. Note: It did, a few days after the conference]. EDUCATION AND TRAINING PANEL Susan Fridie, rehabilitation technologist at the Helen Hayes Hospital Center for Rehabilitation Technology, moderated this panel, at which Karen Gourgey, director of the Center for Visually Impaired People at Baruch College, and Julie Klauber, from the Suffolk Library for the Blind and Physically Handicapped, spoke. Dr. Gourgey described her Center's new telecommunications course, which takes people from the point of learning how to set up a modem to an introduction to the World Wide Web. The Center also provides training on email and other online resources. It serves as a resource for other City University of New York (CUNY) campuses in solving problems for blind or low-vision students and faculty or staff. 
High school students and adults may also take classes at the Center on computing, adaptive equipment, and related topics; the Center's only requirements are that they be able to get to/from Baruch independently and that they be able to touch-type, so as to take notes on lectures. Topics offered include word processing, dBase, Lotus, and other programs. Ms. Klauber described the Library, including its Talking Books, Braille books, and other special offerings. The library has an adapted workstation on which speech and large-print technologies are demonstrated for blind and low-vision visitors. Ms. Klauber noted that the Library can gain access to a large number of specialized databases, such as Able Data, so that a visitor could learn about hardware and software available anywhere in the country and the world. Visitors can also surf the 'Net. The Library's services are free to Suffolk residents. ---------- File: NYNEX NYNEX ACCESSIBILITY AND UNIVERSAL DESIGN PRINCIPLES Principle 1: NYNEX will provide quality services that can reasonably accommodate a broad range of diverse users, including individuals with disabilities. Principle 2: NYNEX will review its existing services to determine which services should be made more accessible. Principle 3: NYNEX will design and develop its services, to the extent readily achievable, so as to be accessible to a broad range of diverse users. Principle 4: NYNEX will market and provision its services in a manner consistent with accessibility by a broad range of diverse users. Principle 5: NYNEX will employ these Universal Design Principles NYNEX-wide, in its relationships with customers, employees, shareholders and suppliers. NYNEX will encourage companies related to but not controlled by NYNEX to adopt these Principles.
---------- File: WHERE WHERE TO LEARN MORE WGBH Educational Foundation National Center for Accessible Media 125 Western Avenue Boston, MA 02134 1-617-492-9258 (v/tty) 1-617-782-2155 (fax) Email: larry_goldberg@wgbh.org http://www.wgbh.org/ncam Trace Research and Development Center Waisman Center and Department of Industrial Engineering University of Wisconsin-Madison S-151 Waisman Center 1500 Highland Ave. Madison, WI 53705 Phone: (608) 262-6966 Fax: (608) 262-8848 TDD: (608) 263-5408 Email: curbcuts@trace.wisc.edu http://www.trace.wisc.edu Nathaniel H. Kornreich Technology Center National Center for Disability Services 201 I.U. Willets Road Albertson, NY 11507 Phone: (516) 747-5400 Fax: (516) 746-3298 Federal Communications Commission Disabilities Issues Task Force 1919 M Street NW Washington, DC 20554 Email: LDubroof@fcc.gov http://www.fcc.gov National Council on Disability 1331 F Street, NW, Suite 1050 Washington, DC 20004-1107 Phone: (202) 272-2004 Fax: (202) 272-2022 TDD: (202) 272-2074 ELECTRONIC RESOURCES Center on Information Technology Accommodation http://www.gsa.gov/coca Adaptive Computer Technology Centre http://www.doe.ca WebABLE! http://www.webable.com NYNEX Interactive Yellow Pages and BigYellow http://www.bigyellow.com Microsoft - Accessibility Team http://www.microsoft.com/windows/enable/ The Arc of the U.S.
http://www.metronet.com/~thearc/welcome.html University of Alberta's Developmental Disabilities Center http://gpu.srv.ualberta.ca/~ddc/INDEX.html Stanford University's Center for the Study of Language and Information - Archimedes Project http://kanpai.stanford.edu/arch/arch.html Equal Access to Software and Information (EASI) http://www.rit.edu/~easi/easidata/easidata.html Autism Network International http://www.students.uiuc.edu/~bordner/ani.html Disability Solutions Page http://www.albany.net/~dsw Council for Exceptional Children http://www.cec.sped.org Electronic Telecommunications Relay Services Forum http://www.ourworld.compuserve.com/homepages/mitch_travers Gallaudet University http://www.Gallaudet.edu/ American Council of the Blind http://www.acb.org National Library Service for the Blind and Physically Handicapped http://lcweb.loc.gov/nls/nls.html Dyslexia http://www.dyslexia.com Lorien Systems (maker of hardware and software for people with disabilities) http://www.gpl.net/users/loriens Apple Computer (Disability) http://www.apple.com/disability/welcome.html IBM (Disability) http://www.austin.ibm.com/pspinfo/snshome.html Sun Microsystems' Enabling Technologies http://www.sun.com/smli/projects/enabling-tech/index.html ---------- File: ACKNOWL ACKNOWLEDGEMENTS The Hofstra Conference, "Access to the Information Superhighway," was a large and challenging project. We installed, and at the conference demonstrated, complex, state-of-the-art technology that in some instances had not yet reached the maturity necessary for reliable, consistent operation. Many hours were needed to plan, deploy, test and de-bug equipment and software. We also arranged a wide variety of accommodations for participants who had disabilities, including personal attendant services, interpreters (both oral and sign), and synthesizer-readable disks with conference materials. The day of the conference went smoothly, with few problems, precisely because of all of the advance work.
NYNEX Corporation's generosity made the conference possible. Bonnie White, who is with NYNEX Strategic Alliances, and Peter Dresch, with NYNEX/New York Regulatory and Governmental Affairs, supplemented their financial donations with much-needed personal assistance in getting things done. Christopher Heaney, who works for Dresch, spent dozens of hours with me, mostly via email, to coordinate the technology we needed for the conference. He also spent two days on campus testing and de-bugging the equipment. Chris arranged for AdCom's Arthur Katz, Wendy Amato, and Julio Mundo, who support PictureTel marketing, to provide us with and operate a PictureTel unit allowing me to make the conference-ending speech from my office, using simultaneous voice, video and data. Gene McAuliffe and Neil Stackel, NYNEX sales representatives, made sure we had other equipment we needed for the conference. Sandy Berman and Carol Fuhrer, of the National Center for Disability Services, coordinated the dozen exhibitors we had and loaned us, for conference use, software that enhanced the ability of campus Macintosh computers to talk. Sandy and Carol now run the Center's new Nathaniel H. Kornreich Technology Center. Kathleen Pelligrini, who works with me in the Counseling, Research, Special Education, and Rehabilitation (CRSR) Department at Hofstra, spent untold hours on the many details of conference logistics, on the phone with participants and vendors, and with university support services personnel. When Long Island had a blizzard in the days just prior to the conference, making travel on the roads hazardous and sometimes impossible, she worked from home, using the phone, fax, and the Internet to work out alternate arrangements so the conference could go forward.
Hofstra's Academic Computing Center staff, particularly Sherry Ross, Harry Baya, and David Klein, not only arranged for the conference's use of Hofstra computing facilities but also led participants in "hands-on" sessions in which they surfed the 'Net on IBM-compatible and Macintosh computers. They also provided the PC's that conference exhibitors needed. Joe Dalto and Bob Johnson, of Hofstra Telecommunications, coordinated installation of cables and phones, enabling participants visiting exhibits to surf the 'Net and to use VoiceDialing(SM). Kathleen Dwyer reserved for us the many rooms we needed to use for the conference. And associate provost Howard Negrin took care of the finances. I am also grateful to keynote speakers Larry Goldberg and Maureen Kaine-Krolak, who not only gave outstanding presentations in the morning but also spent most of the remainder of the day in the exhibitor area, answering questions and demonstrating equipment and software programs. Gregg Vanderheiden of the Trace Center updated me on the report Ms. Kaine-Krolak used in her presentation, and granted us permission to print it. The CPB-WGBH National Center for Accessible Media (NCAM), which Goldberg directs, graciously granted us permission to use the new symbol for an accessible World Wide Web (WWW) site. Geoff Freed, manager of external projects at NCAM, provided us with camera-ready copy. The Web Access Symbol is a product of NCAM, Stormship Studios, the Telecommunications Funding Partnership for People with Disabilities, and the Boston Foundation. Donna Bernhardt, Diana DiMonda, Suzanne Dooley, Mary Dunn, Kim Hirschberger, and Stefanie Lewis handled the interpreting duties for the conference. Their job was made even more critical by the last-minute cancellation of our real-time captioning contractor. Mary Essex, of the NYNEX Center for Individuals with Disabilities in Marlborough, MA, chaired the afternoon NYNEX Corporation panel. Featured speakers were Dr.
Sara Basson, of NYNEX Science and Technology, in White Plains; Jim Barry, of NYNEX Consumer Affairs; and Philip Alvarez, NYNEX Market Development. A second afternoon panel, this one on New York State Government agencies, featured Deborah Buck, of the Office of Advocate for Persons with Disabilities, and Tom Burke, of the NYS Department of Public Service. Susan Fridie, of Helen Hayes Hospital, moderated the final panel, on education and training programs, at which Karen Gourgey, of the Center for Visually Impaired People at Baruch College, and Julie Klauber, from the Suffolk Library for the Blind and Physically Handicapped, spoke. I thank all of them. Finally, the conference exhibitors graciously demonstrated their products and services for participants for most of the day, never seeming to run out of patience or good humor. They included: Arroyo & Associates, of Oceanside, NY; Barrier-Free Access Systems, of Huntington Station, NY; C TECH, of Pearl River, NY; Citibank, of Albertson, NY; Maxi-Aids, of Farmingdale, NY; newAbilities Systems, of Palo Alto, CA; Newsday Direct, of Melville, NY; NYNEX Science and Technology, of White Plains, NY; Sighted Electronics, of Northvale, NJ; SSK Technology, of Aptos, CA; WGBH Caption Center, of Boston, MA; and Words+, of Portsmouth, NH.