{"id":142277,"date":"2025-09-08T19:06:07","date_gmt":"2025-09-08T19:06:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/142277\/"},"modified":"2025-09-08T19:06:07","modified_gmt":"2025-09-08T19:06:07","slug":"new-language-technologies-for-american-sign-language-usc-viterbi","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/142277\/","title":{"rendered":"New Language Technologies for American Sign Language &#8211; USC Viterbi"},"content":{"rendered":"<p>                            <img fetchpriority=\"high\" decoding=\"async\" aria-describedby=\"caption-attachment-79799\" class=\"size-full wp-image-79799\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/09\/Untitled-design-37.jpg\" alt=\"3 hand signs are drawn in American Sign Language, from left to right: Letters A, S, and L. This is to represent American Sign Language\" width=\"1200\" height=\"600\"  \/><\/p>\n<p id=\"caption-attachment-79799\" class=\"wp-caption-text\">Spelling out the letters A, S, and L to represent American Sign Language, image courtesy of Pixabay.<\/p>\n<p>While \u201ctalk to text\u201d exists for spoken language, there is no equivalent tool that automatically recognizes American Sign Language (ASL) and translates it into text. 
New research and language technologies developed by scholars affiliated with the USC School of Advanced Computing\u2019s Thomas Lord Department of Computer Science might help future researchers who aim to build such translation tools.<\/p>\n<p>The team\u2019s innovations, outlined in a <a href=\"https:\/\/aclanthology.org\/2025.findings-naacl.389.pdf\" rel=\"nofollow noopener\" target=\"_blank\">paper<\/a> presented at the 2025 Nations of the Americas Chapter of the Association for Computational Linguistics conference, center on a machine learning model that treats sign language data as a complex linguistic system rather than a mere translation of English. The team, led by Lee Kezar, then a doctoral candidate in computer science in Professor Jesse Thomason\u2019s <a href=\"https:\/\/glamor-usc.github.io\/\" rel=\"nofollow noopener\" target=\"_blank\">GLAMOR (Grounding Language in Actions, Multimodal Observations, and Robotics) Lab<\/a>, introduces a new natural language processing model that incorporates the spatial and semantic richness of ASL, treating it as a primary language with its own syntax.<\/p>\n<p>The first step in developing ASL recognition is giving a computer an understanding of the language\u2019s specific nuances\u2014how natural signing can be broken down into phonological features, such as the \u2018C handshape\u2019 or \u2018produced on the forearm.\u2019<\/p>\n<p>(Shown here by Dr. William Vicars and Lifeprint.com)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-79809\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/09\/computer.gif\" alt=\"GIF of Computer in American Sign Language\" width=\"402\" height=\"402\"\/><\/p>\n<p>However, the main challenge in creating a model for automatic detection of ASL\u2014and other sign languages worldwide\u2014is the limited data available. 
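The phonological breakdown described above (signs decomposed into features like the \u2018C handshape\u2019 or a location near the eyes) can be pictured as a small lookup over a toy knowledge graph. This is a minimal, hypothetical sketch: the sign inventory, feature names, and semantic tags below are invented for illustration and are not the team\u2019s actual model or data.

```python
# Toy 'knowledge graph': signs described by phonological features
# (handshape, location) and tagged with semantic fields.
# All entries are illustrative, not real research data.
SIGNS = {
    'SEE':  {'handshape': 'V', 'location': 'eyes'},
    'LOOK': {'handshape': 'V', 'location': 'eyes'},
    'CUP':  {'handshape': 'C', 'location': 'neutral'},
}

SEMANTICS = {
    'SEE':  {'sight'},
    'LOOK': {'sight'},
    'CUP':  {'container'},
}

def infer_semantics(features):
    """Guess semantic tags for an unseen sign by pooling the tags of
    known signs that share all of its phonological features."""
    tags = set()
    for sign, feats in SIGNS.items():
        if all(feats.get(k) == v for k, v in features.items()):
            tags |= SEMANTICS.get(sign, set())
    return tags

# An unseen sign made with a V handshape near the eyes is plausibly
# related to sight, by analogy with SEE and LOOK.
print(infer_semantics({'handshape': 'V', 'location': 'eyes'}))  # {'sight'}
```

The point of the sketch is only the shape of the inference: form-to-meaning links across the lexicon let a system make an educated guess about signs it has never seen.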
In contrast, says corresponding author Thomason, data for non-signed languages is available from all over the world via the internet and films.<\/p>\n<p>Thus, the team realized that one of the first steps needed was to generate a knowledge graph: an organized, graphical way of representing how the visual properties of signs relate to their meanings throughout the lexicon.<\/p>\n<p>(For example, the C handshape in CUP below shows the shape of the cup itself.)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-79803\" class=\"wp-image-79803\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/09\/cup.gif\" alt=\"A GIF of the word &quot;cup&quot; in American Sign Language \" width=\"374\" height=\"374\"\/><\/p>\n<p id=\"caption-attachment-79803\" class=\"wp-caption-text\">A GIF of the word \u201ccup\u201d in American Sign Language<\/p>\n<p>Kezar, who knows American Sign Language and is now a <a href=\"https:\/\/gallaudet.edu\/visual-language-visual-learning\/action-and-brain-lab\/#team\" rel=\"nofollow noopener\" target=\"_blank\">Postdoctoral Researcher at Gallaudet University,<\/a> took on this project because he saw a huge gap in research on this family of languages.<\/p>\n<p>\u201cSign languages are full, natural languages. They\u2019re complete, meaning we can express basically any idea. 
But it\u2019s not really included in natural language processing research,\u201d said Kezar.<\/p>\n<p>The researchers, who included native signers and collaborated with the Deaf community, explain that any viable model for ASL recognition and generation would need to take into account some unique aspects of the language that make up the signs, including:<\/p>\n<p>Facial expressions, as seen in the signs for \u201cunderstand\u201d versus \u201cdon\u2019t understand.\u201d<\/p>\n<p>Understand<\/p>\n<p>Don\u2019t understand<\/p>\n<p>Where a sign is in relation to the body, as seen in the signs \u201csummer\u201d versus \u201cdry.\u201d<\/p>\n<p>Summer<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-79806\" class=\"wp-image-79806\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/09\/summer-1.gif\" alt=\"Gif of the term Summer in American Sign Language\" width=\"386\" height=\"386\"\/><\/p>\n<p id=\"caption-attachment-79806\" class=\"wp-caption-text\">GIF of the word \u201cSummer\u201d in American Sign Language<\/p>\n<p>Dry<\/p>\n<p>In addition, any model needs the flexibility to recognize new signs as they evolve.<\/p>\n<p>For example, the sign for the <a href=\"https:\/\/www.handspeak.com\/word\/8166\/\" rel=\"nofollow noopener\" target=\"_blank\">coronavirus<\/a>:<\/p>\n<p>Thomason emphasized that the project is not simply about recognition, but also about understanding and generation\u2014creating systems that can comprehend and produce fluent sign language in its natural structure.<\/p>\n<p>Thus far, the researchers have trained a machine learning model to achieve:<\/p>\n<p>91 percent accuracy in recognizing isolated signs.<br \/>\n14 percent accuracy in recognizing unseen signs\u2019 semantic features, such as inferring that a sign is related to sight because it involves the V handshape produced near the eyes 
(e.g. SEE, LOOK, REVIEW).<\/p>\n<p>(See)<\/p>\n<p>They have also trained a machine learning model to achieve 36 percent accuracy at classifying the topic (news, sports, etc.) of ASL videos on YouTube.<\/p>\n<p>To achieve this, the researchers emphasized the importance of working directly with the Deaf and Hard-of-Hearing community, including native signers and linguistic experts, to guide the direction of the models and how data is handled.<\/p>\n<p>\u201cWe wanted to do something that is deeply respectful of the language itself,\u201d Thomason noted. \u201cAnd that meant collaborating directly with members of the Deaf community.\u201d<\/p>\n<p>As the project moves forward, the team will look to expand their model to include other sign languages around the world by mapping out their shared grammatical structures and unique features.<\/p>\n<p>Ultimately, Kezar says, the team envisions applications beyond automatic translation: allowing users to search YouTube using ASL (as in the topic modeling experiment), building augmented-reality educational tools (the focus of Kezar\u2019s postdoc), and enabling linguistic research into signing.<\/p>\n<p class=\"created-on\">Published on September 8th, 2025<\/p>\n<p class=\"last-updated\">Last updated on September 8th, 2025<\/p>\n","protected":false},"excerpt":{"rendered":"Spelling out the letters A, S, and L to represent American Sign Language, image courtesy of Pixabay. 
While&hellip;\n","protected":false},"author":2,"featured_media":142278,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[46],"tags":[191,74],"class_list":{"0":"post-142277","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-computing","9":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/142277","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=142277"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/142277\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/142278"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=142277"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=142277"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=142277"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}