13 September 2023

Loli Vtuber 151


Chapter 151. [Text-To-Speech]


I haven't been able to speak for a few days because of a sore throat. In the end, it's thanks to my older sister, or rather because of her, that I've been living through such busy days.

[Ding! Question time!]

I shook both of my wet hands once, as if drying them off.

Answers were being posted one after another in the comments section.

>The thing I do after going to the bathroom

>Is it satisfying? (U.S.)

>Are you tired?

[Ding-dong! That's right! The correct answer is "Are you tired~".]

I raised the thumbs and index fingers of both hands and brought them close together.

It's sign language for "correct answer".

So, today, I'm doing a sign language livestream.

By the way, I learned it using a language cheat just for this livestream.

>Iroha-chan, you're amazing for being able to do sign language

>I thought it was difficult, but surprisingly, I can understand it through the nuances

>Sign language in my country is quite different (U.S.)

As chats from international viewers suggest, sign language actually varies from country to country.

Today, what I'm doing in the livestream is what's known as "Japanese Sign Language".

[Haven't you all started to understand it somewhat?]

>"It feels like a gesture game, and it's fun

>↑That was actually a kind of sign language expression called a 'classifier'

>But the accuracy of hand tracking is amazing (U.S.)

[Yeah, it's really amazing. And the fact that this is considered home recording quality...]

I often use a 2D model in my regular streams, but today I'm using a 3D model just for this occasion.

I got help from a solo VTuber who's skilled in handling 3D equipment.

Recently, I can even use hand tracking with 2D models in "VTuber Studio", but...

When it comes to sign language, 3D is the way to go, no doubt.

>Especially recently, the development of 3D has been remarkable (Korean)

>It might be because more people are debuting as VTubers in 3D from the start, possibly thanks to Ilyena-chan's videos (U.S.)

>So, Iroha-chan, are you wearing something like gloves now?

[No, on my end it's just a camera. I bought a module kit recommended by an acquaintance and used that. Isn't it amazing that you can get this level of high-precision hand tracking even without gloves?]

I say "amazing" in sign language.

Even with just a camera, it's truly impressive that it can replicate not only the opening and closing of the palm but also how far each finger is partially open.

Of course, when it comes to precision, it can't compete with glove-based systems. 

But those have their own significant drawbacks.

The main one is the necessity of wearing gloves!

It might seem like a given, but it's quite a big deal. 

Specifically, it makes tasks like typing on a computer keyboard or using a mouse very cumbersome.

These issues also arise with VR equipment.

When you wear a head-mounted display, you might not be able to see your real-world monitor, keyboard, or mouse.

>I see, that makes sense (U.S.)

>It's cute to see Iroha-chan making those excited movements

>And the 'voice' is cute too

[Yeah, right? It's the usual voice, isn't it?]

>That's right

>You used to be quite monotone, so you could say you've actually gone back to that

>How about singing now? (U.S.)

[Why would that happen!? My usual self is definitely a better singer than the monotone 'relaxed' version!]

>It's not like that

>My body craves the relaxed Iroha (U.S.)

>Give us back our Iroha (Korean)

[Stop treating me like I'm the fake one!]

That's right. Actually, the sign language explanations I've been giving aren't in my natural voice. 

It's text-to-speech software that replicates the voice of "Translator Girl Iroha".

I reflected deeply on things when Ah-nee and Angu Ogu arranged for me to do remote-controlled streams and ventriloquism broadcasts.

I realized I couldn't leave things to them like this. 

I had to create an environment where I could stream even without them.

That's when I came up with the idea of using sign language and text-to-speech software like I'm doing now.

I type the content on the keyboard, and it's read out in a somewhat monotone manner on my behalf.

That, at least, was the plan.

I had intended to use the well-known "relaxed voice" for the text-to-speech.

However, one of the volunteers used AI voice synthesis technology to replicate my voice.

I mean, you're basically a pro found in the wild, aren't you!?

[There's a link in the description for "Monotone Iroha", so as long as it doesn't go against public decency, everyone is free to have her say whatever they like. If you want to hear monotone songs, feel free to create them yourselves.]

>Haha, Ange was the first to use it and uploaded a video. That's hilarious! (U.S.)

>I see. She's already back in America

>So having Iroha say 'I want to see Ogu soon' was a matter of interpretation? www

[Ogu, are you ready for punishment next time?]

>Oh!

>I was just speaking on behalf of the voiceless Iroha (U.S.)

>It's like she's always in the comments section LOL

[That's right. For now it's only available in Japanese, but they're planning to create international versions, including English, in the future.]

>For real!? (U.S.)

>Thank you, thank you so much! (Korean)

>The international viewers must be thrilled!

>I just had a brilliant idea. Let's use this for the translation device's voice.

>↑Genius, right???

>Will there come a time when every family has an Iroha!?

[Of course, let's keep it reasonable, shall we?]

The chat section seemed unusually lively for some reason.

Could it be that things might get weird in the future...? Nah, that won't happen, right?

I had a somewhat forced smile on my face.
