<oembed><type>rich</type><version>1.0</version><title>Final wrote</title><author_name>Final (npub1hx…sg75y)</author_name><author_url>https://yabu.me/npub1hxx76n82ags8jrduk0p3gqrfyqyaxnrlnynu9p5rt2vmwjq6ts3q4sg75y</author_url><provider_name>njump</provider_name><provider_url>https://yabu.me</provider_url><html>We&#39;re developing our own implementations of text-to-speech and speech-to-text for use in #GrapheneOS. They&#39;re entirely open source and avoid the so-called &#39;open&#39; models whose training data is unavailable. Instead, we&#39;re making a truly open source implementation of both where all of the data used to train them is open source. If you don&#39;t want to use our app for local text-to-speech and speech-to-text, you don&#39;t need to use it. Many people need this and want a better option.&#xA;&#xA;We&#39;re working on TTS first, then STT. The TTS training data is LJ Speech https://keithito.com/LJ-Speech-Dataset/ and the model is our own fork of Matcha-TTS.&#xA;&#xA;If people want, they can fork it and add, remove, or change the training data in any way they see fit. It&#39;s nothing like the so-called &#34;open&#34; models from OpenAI, Facebook, etc., where the only open part is the neural network weights after training, with no way to know what data was used to train them and no way to reproduce that training.&#xA;&#xA;Many blind users asked us to include one of the existing open source TTS apps so they could have a better app. None of the available open source apps meets our requirements for reasonable licensing, privacy, security or functionality. Therefore, we&#39;ve developed our own text-to-speech, which will ship soon, likely in January. We&#39;ll also be providing our own speech-to-text. We&#39;re using neural networks for both, which we&#39;re making ourselves.</html></oembed>