Turn on almost any press conference held by a government official, and more often than not you'll see a person off to the side translating the speaker's words into sign language. This is pretty cool, but I have to make an observation.
It is estimated that only 250,000 to 500,000 people in the US use sign language. By contrast, some 41 million people in the US speak Spanish. If the goal of these interpreters is to get the speaker's message to the greatest number of constituents, it would make more sense to have a real-time captioner put live Spanish subtitles on these broadcasts. How come they do one but not the other?