Misaki

The facial expression AI is fun :D As you mentioned, some facial expressions, like angry and fearful, seem to be difficult to recognize. However, just being able to recognize happiness, surprise, and sadness already gives me a lot more freedom!
I would definitely like to add facial expression animations to my model once the AI can change the model's facial expression. (My understanding is that this is not possible yet, but has it already been implemented?)

SilverStraw

Have you tried lowering the confidence threshold for the harder facial expressions and raising the confidence threshold for the easier facial expressions?

This picture might help people understand what the AI is looking for when it tries to recognize facial expressions.

Misaki wrote: I would definitely like to add facial expression animations to my model once the AI can change the model's facial expression. (My understanding is that this is not possible yet, but has it already been implemented?)
Yes, it should be possible.

Misaki

SilverStraw wrote: Have you tried lowering the confidence threshold for the harder facial expressions and raising the confidence threshold for the easier facial expressions?
Yes, but even when I set the threshold to 1, "angry" and "fearful" were not recognized well. From what I have tried, "disgusted" is the most difficult to recognize.
SilverStraw wrote: Yes, it should be possible.
Oh, so could you tell me what animation name is required for each expression?

SilverStraw

Misaki wrote: Yes, but even when I set the threshold to 1, "angry" and "fearful" were not recognized well. From what I have tried, "disgusted" is the most difficult to recognize.
I might not have explained it well. The values should be between 0 and 1, where 1 makes an expression harder to trigger and 0 makes it easier to trigger. If you set those expressions to 1, you make them harder to recognize.
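To illustrate the idea: the prototype's source isn't shown in this thread, so this is only a minimal sketch of how a confidence threshold could gate an expression; the function name `isTriggered` is hypothetical, not the tool's actual API.

```typescript
// Hypothetical sketch: both the detector's confidence for an expression
// and the user-set threshold are values in [0, 1].
function isTriggered(confidence: number, threshold: number): boolean {
  // A threshold of 0 lets almost any confidence trigger the expression
  // (easy); a threshold of 1 requires full confidence (hard). Raising
  // the threshold therefore makes the expression harder to recognize.
  return confidence >= threshold;
}
```

So for a rarely detected expression like "angry", lowering its threshold toward 0 (not raising it to 1) is what makes it fire more often.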
Misaki wrote: Oh, so could you tell me what animation name is required for each expression?
I am sorry for the misunderstanding. The current prototype is not ready yet for adding facial expression animations.

Misaki

SilverStraw wrote: I might not have explained it well. The values should be between 0 and 1, where 1 makes an expression harder to trigger and 0 makes it easier to trigger. If you set those expressions to 1, you make them harder to recognize.
Ah, I see. Indeed, when I set the threshold for "angry" to 0, my facial expression was often recognized as "angry" even when my face was almost expressionless.
SilverStraw wrote: I am sorry for the misunderstanding. The current prototype is not ready yet for adding facial expression animations.
I got it, so I'll try to add the animations when it is ready :)

skarasuko

Is there a guide on how to export correctly? I tried out a very big Spine file, but some images are completely black and shaped like their meshes. It would also be nice if this supported multiple skins as well.

Edit 1: I found out that this does not accept multiple PNGs.

SilverStraw

@skarasuko Sorry you are having issues.

skarasuko

SilverStraw wrote: @skarasuko Sorry you are having issues.
No worries at all. This already feels very usable to most aspiring VTubers! The Spine Team should consider fast-tracking official support for this, because it's still the perfect time to enter the market!

SilverStraw

Spine Vtuber Prototype

Update 1.1.3

* Separate error catching on Spine animation tracks.
* Allow "left eye open" Spine animation track to keyframe both eyes while detecting eye wink.
* Add a wink threshold to determine the ease of detecting eye winks. Ranges from 0 (easiest) to 1 (hardest). The setting is located in Model Settings > Single Value Properties > wink threshold. This property can be saved into and loaded from the .svp file.
* Fix incorrect landmark tracking for "brow raise".
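The wink threshold above can be read like this sketch. The prototype's internals are not shown in the thread, so the names (`isWinking`, `eyeOpenness`) are illustrative assumptions, with openness normalized so that 1 is fully open and 0 is fully shut.

```typescript
// Hypothetical wink check, assuming a normalized eye-openness value
// (1 = fully open, 0 = fully shut) from the face-tracking landmarks.
function isWinking(eyeOpenness: number, winkThreshold: number): boolean {
  // How far the eye has closed, in [0, 1].
  const closure = 1 - eyeOpenness;
  // winkThreshold = 0 (easiest): almost any closure counts as a wink.
  // winkThreshold = 1 (hardest): only a fully shut eye would count.
  return closure > winkThreshold;
}
```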

https://silverstraw.itch.io/spine-vtuber-prototype

Updated the Spine Vtube Test Model to synchronize both eyes on the "left eye open" track and keyframe only the right eye on the "right eye open" track.

https://silverstraw.itch.io/spine-vtube-test-model

Some ninja patching with the help of LustFire's bug catching.

---

A user notified me of an issue with the experimental motion capture not exporting the JSON file. I patched the bug.

Misaki

I have tried 1.1.3, and it certainly seems to make eye winks easier to detect! Here is the result of my test (the model was also modified slightly):

It had been a while since I tested this tool, but I found it enjoyable once again :D
Thank you for keeping this cool tool updated!

One thing I thought of is that it might be a good idea to add a threshold value for the eye pupils, so that subtle changes in position could be ignored. My model's pupils wobbled while winking, so I adjusted the left/right pupil pitch strength and left/right pupil yaw strength parameters, but if these values are set too low, the model's eyes will not follow the eye movement at all, which is not ideal. FYI, in the model in the video above, left/right pupil pitch strength is set to 4 and left/right pupil yaw strength to 5. I would be happy if you could consider it!

Here is my latest model data:
chara-for-Spine-Vtuber-Prototype_20221223.zip

SilverStraw

Update 1.1.4

  • Update to Spine WebGL 4.1.24.
  • Add a skeleton debug mode to the scene render. The checkbox is located under the "Canvas Settings" menu.
  • Add a debug bones checkbox under the "Canvas Settings" menu. This includes options for bone center color, bone line color, and bone line width (minimum 0, maximum 10).
  • Add a debug region attachments checkbox under the "Canvas Settings" menu. This includes an option for region attachment line color.
  • Add a debug mesh triangles checkbox under the "Canvas Settings" menu. This includes options for mesh triangle line color and mesh line opacity (minimum 0, maximum 100).
  • Add a debug clipping checkbox under the "Canvas Settings" menu. This includes an option for clipping line color.
  • Add left/right pupil pitch/yaw threshold settings to the "Single Value Properties" drop-down list under the "Model Settings" menu.
  • Fix a bug where pre-multiplied alpha was not loaded when it was set to false.

https://silverstraw.itch.io/spine-vtuber-prototype
Misaki wrote: One thing I thought of is that it might be a good idea to add a threshold value for the eye pupils, so that subtle changes in position could be ignored. My model's pupils wobbled while winking, so I adjusted the left/right pupil pitch strength and left/right pupil yaw strength parameters, but if these values are set too low, the model's eyes will not follow the eye movement at all, which is not ideal. FYI, in the model in the video above, left/right pupil pitch strength is set to 4 and left/right pupil yaw strength to 5. I would be happy if you could consider it!
I had to reinstate the threshold mechanism after forgoing it for a moving average. You cannot export those settings to .svp yet. I would like you to test it out before I make more commitments to the pupil threshold.
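Since the prototype's source is not shown in this thread, here is only an illustrative sketch of how a moving average and a reinstated dead-zone threshold could combine for pupil values; the class and parameter names (`PupilFilter`, `windowSize`, etc.) are assumptions, not the tool's real API.

```typescript
// Illustrative sketch: smooth raw pupil offsets with a moving average,
// then apply a dead-zone threshold so sub-threshold jitter is ignored.
class PupilFilter {
  private window: number[] = [];

  constructor(
    private windowSize: number, // samples in the moving average
    private threshold: number,  // dead-zone half-width, e.g. 0.15
    private strength: number,   // multiplier applied to the final value
  ) {}

  update(rawOffset: number): number {
    // A moving average smooths high-frequency noise but still lets slow
    // drift through, which may be why a threshold is useful on top of it.
    this.window.push(rawOffset);
    if (this.window.length > this.windowSize) this.window.shift();
    const avg = this.window.reduce((a, b) => a + b, 0) / this.window.length;

    // Dead zone: averaged offsets smaller than the threshold are treated
    // as zero, so the pupils hold still while winking wobbles the landmarks.
    if (Math.abs(avg) < this.threshold) return 0;
    return avg * this.strength;
  }
}
```

With a threshold of 0.15, small wobbles around rest are suppressed entirely, while deliberate glances pass through scaled by the strength setting.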

Misaki

Thank you so much for adding the left/right pupil pitch/yaw threshold settings!! I have tried them, and changing the threshold values actually helped reduce the eye wobbling. Here is the result:


The settings related to the eyes for my model are as follows:
  • left/right eye strength: 50
  • left/right pupil pitch strength: 10
  • left/right pupil pitch threshold: 0.15
  • left/right pupil yaw strength: 4
  • left/right pupil yaw threshold: 0.07

While testing these settings, I found I should modify my animations, so the model in the video above has been updated. Here are the updated project files:
chara-for-Spine-Vtuber-Prototype_20230117.zip

As shown in the following image, in some animations I made it so the pose does not change immediately when changes in the eyelids and mouth movements are detected:
eye-open-animation.png

This adjustment was made so that minute movement changes would not cause the eyelids or mouth to open subtly. Also, since I set the pupil pitch thresholds to higher values than the pupil yaw thresholds, I adjusted the pitch down/up animations so that the pupils do not move from frame 0 to frame 10; this way the position does not appear to jump suddenly when the threshold is exceeded.

By the way, you said:
You cannot export those settings to .svp yet.
but somehow I can export the threshold settings to .svp. (Maybe you updated this tool after replying to this thread?)

Anyway, I am happy with the results this time! There are some things I would like to fix in my rig (e.g., the half-eye pose is not very good, although I have adjusted it many times), but I think the current specification of this tool is already great for vtubing. I am looking forward to the day when facial expression animations can be added. Great work!! :yes: :D

SilverStraw

Misaki wrote: but somehow I can export the threshold settings to .svp. (Maybe you updated this tool after replying to this thread?)
It has been a while since I worked on the source code. I forgot that I have a function that exports settings from a list of default setting values. I updated that list with the left/right pupil pitch/yaw threshold default values, so they got exported. :lol:

