
Can you guys add a morph target animation tool to Spine?
Something like what this article describes: http://code.zynga.com/2011/11/mesh-compression-in-dream-zoo/

It looks like the UbiArt Framework, the engine behind Rayman Legends, uses this technique. Combining morph target animation with skeletal animation should make more complicated animations easier.

Here is a video that shows how the animations of Rayman Legends were made with the UbiArt Framework.
http://www.youtube.com/watch?feature=player_embedded&v=y-chi097uV4


We already have a Kickstarter planned to implement this. Whether morph targets (more commonly known as blend shapes) will be stored in a list or just as keyframes is yet to be decided. We'll also have skinning in there, so you will be able to drive your meshes based on vertex weights.
Here's a small video that shows what you will be able to do. http://www.youtube.com/watch?v=I8VwvBILhPw

Wow, I hadn't seen this video, Shiu. It's amazing! We want to get our teeth into this!!

Come on! We can be your beta tester!!!


What's a blend shape/morph target? Are you just talking about FFD? (just curious)

(edit: just looked it up)
They're like deform presets you can alpha/fade into instead of storing the same vertex offset/deformation data in every keyframe where you have a deformation.

I've seen this in....
Blender? 😃

Shiu, when do you plan to launch this new feature?

Pharan, with FFD you can move around vertices of a mesh. A blend shape is a stored configuration of those vertices. If you apply a blend shape on a frame and later apply a different one on a different frame, the tweening will make it appear to morph from one mesh shape to the other.

JuanluGC, we want to release early and often, just don't have anything ready yet. I've done some work on meshes and some prototyping to make sure the KS2 features are reasonable. Wish I had more time in the day!

Since Shiu is showing off, here's a video of skinning:
http://youtu.be/PU506p11RSw

Blend shapes are basically FFD. There are a couple of differences though.
In most animation packages you can't freely animate the positions of individual vertices; you need to store a new blend shape first. Animation between the original shape and the new one is then done with a slider where you control the weight. South Park, for example, uses this for facial animation, where they have stored a lot of different shapes. The downside to this is that you can end up with A LOT of shapes, and it's a bit overkill if you only need to move a single vertex slightly.

One cool way to use blend shapes is to control the weight of a shape based on the rotation of a bone. For example, you could have two shapes for an upper arm, one of them with the bicep being a bit larger; then, when the bone is rotated to bend the arm, you increase the weight of that blend shape.
We would like to possibly allow storing deforms as shapes, as well as freely being able to just move vertices around and then set a key on the timeline, but we haven't decided 100% on this yet.
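As an illustration of the bone-driven weight idea, here is a minimal runtime sketch using spine-ts additive track blending (discussed further down this thread). The animation name "bicep-flex" and the bone name "elbow" are hypothetical; it assumes "bicep-flex" holds a single deform key with the bulged shape.

```typescript
// Minimal sketch, assuming the spine-ts runtime and additive track blending.
// "bicep-flex" (an animation with one deform key for the bulged shape) and
// "elbow" (a bone) are hypothetical names used only for illustration.
import { AnimationState, MixBlend, Skeleton, TrackEntry } from "@esotericsoftware/spine-core";

// One-time setup: park the deform animation on its own track in additive mode.
function setupBicepTrack(state: AnimationState): TrackEntry {
    const entry = state.setAnimation(1, "bicep-flex", true); // track 0 holds the regular animation
    entry.mixBlend = MixBlend.add; // add the deform on top instead of replacing
    entry.alpha = 0;               // start with no bulge
    return entry;
}

// Per frame: map the elbow's local rotation (degrees) to the deform weight.
function updateBicepWeight(skeleton: Skeleton, entry: TrackEntry): void {
    const elbow = skeleton.findBone("elbow");
    if (!elbow) return;
    // Assume 0 degrees = arm straight, 130 degrees = fully bent; clamp to [0, 1].
    entry.alpha = Math.min(Math.max(elbow.rotation / 130, 0), 1);
}
```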

puzzler wrote:

Shiu, when do you plan to launch this new feature?

We plan on launching the Kickstarter this month, if we can't make that deadline then it will be early next month.
Can't say exactly how long it will take after it is over, a couple of months probably. So early next year is a good estimate.

Registering my huge interest in this feature. It gives the animation something more cartoon-y, less rigid.

Really looking forward to it.

Would that let us set the draw order per vertex? Or what would happen if you, for example, have multiple bones deforming the mesh of a tail, and you make the tail go over itself, crossing itself multiple times? How do you decide what draws over what?

There would not be draw order per vertex. With a self-intersecting mesh, which part is drawn on top would probably be undefined.

2 years later

Hate to dredge up an old topic for the sake of curiosity, but did anything ever come out of this? I've searched the guide, the forums, Trello, and even the old Kickstarter, and can't find any signs of its implementation, not even a place in the backlog.

Blend shapes would be a major time saver for cases where arms and legs are bent at extreme angles (automatic FFD deformation, set to blend between specific rotation values), or for when a character requires facial animation that goes beyond simple texture swapping (e.g. sliders for eyes ranging from 'wide open' to 'shut tight', or subtle FFD deformation to simulate face rotation).

We have researched what it would take to do blend shapes. It's feasible, though it increases the number of entire-mesh transforms that need to be performed. We haven't implemented it yet, as there are many "low(er) hanging fruit" features that would help a lot: paths, skeleton and animation sequence attachments, rotation limiters so bones don't rotate the wrong way around during mixing, rotation by more than 180 degrees, runtimes that need updating to the latest editor version, etc. As much as we want to work on big new features like blend shapes and a curve editor, it's hard to do so when there are many important needs that are easier to solve.

Sliders can be useful even without blend shapes. Tying sliders to rotation ranges is also a separate feature and can be very useful. E.g., if you have many attachments to change for various perspectives, you could do it with a slider/joystick.

6 years later

I know this is an old thread, but have blend shapes (or something similar) been implemented in the meantime?

I'm the author of Rhubarb Lip Sync, a tool that creates mouth animation from audio recordings. Currently, the user defines a slot for the mouth, then adds image attachments for the individual mouth shapes. Rhubarb then creates an animation that switches between these attachments.

While this works, the resulting animations look rather jumpy because there are no transitions. With blend shapes, a future version of Rhubarb could create smooth morphed transitions between mouth shapes.

I tried googling for blend shapes, but couldn't find anything.

We have something better than morph targets called additive animation blending. Check out the owl demo on this example project page.
Owl example

Here's the code to setup the blending:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.0/spine-ts/spine-webgl/demos/additiveblending.js#L46

And here's the code to calculate the blend alphas:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.0/spine-ts/spine-webgl/demos/additiveblending.js#L76

The way it works: you create an animation per pose (aka blend shape) you want to blend between. These animations may key a unique subset of properties (transforms, mesh deforms, you name it) but may also key the same properties. In the lip sync case, that'd be extreme positions of the mouth (i.e. mouth shapes) and facial poses, i.e. eyebrows, eyelids, etc.

You then queue each animation on a separate track, enable additive animation blending, and set each track's alpha from 0-1 depending on how much that animation should contribute to the final pose of the skeleton. For lip sync, you'd figure out which mouth shapes contribute to the current phoneme (or whatever unit you use) and their relative contributions. Then you simply set the alpha of each mouth shape animation and presto, you got smooth transitions between phonemes.
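Here's a rough sketch of that setup with the spine-ts runtime, modeled on the linked demo code. The mouth-shape animation names and the per-phoneme weights are placeholders for whatever your skeleton and Rhubarb data actually provide:

```typescript
// Sketch only: assumes a skeleton whose mouth shapes are separate animations,
// each containing a deform key for that shape. All names below are hypothetical.
import { AnimationState, MixBlend, TrackEntry } from "@esotericsoftware/spine-core";

const MOUTH_SHAPES = ["mouth-A", "mouth-B", "mouth-O", "mouth-F"];

// Queue one additive track per mouth-shape animation, all initially silent.
// Track 0 is left free for the regular body/face animation.
function setupMouthTracks(state: AnimationState): Map<string, TrackEntry> {
    const tracks = new Map<string, TrackEntry>();
    MOUTH_SHAPES.forEach((name, i) => {
        const entry = state.setAnimation(i + 1, name, true);
        entry.mixBlend = MixBlend.add; // contribute on top of track 0 instead of replacing it
        entry.alpha = 0;
        tracks.set(name, entry);
    });
    return tracks;
}

// Per frame: weights come from the lip sync data,
// e.g. { "mouth-A": 0.7, "mouth-O": 0.3 } while transitioning between two phonemes.
function applyMouthWeights(tracks: Map<string, TrackEntry>, weights: Record<string, number>): void {
    for (const [name, entry] of tracks) entry.alpha = weights[name] ?? 0;
}
```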

Additive animation blending is a lot more powerful than classical blend shapes, as it allows you to blend between not just mesh deforms, but all keyable properties of elements in your skeleton.

Maybe it helps to also see how these many terms are related and where they come from. (a.k.a. Harald's little Christmas story of 2021 :nerd: )

Before skeletal animation, vertex animation (more correctly: per-vertex animation) was the way many (low-poly realtime) characters were animated: each keyframe stored the absolute location of every vertex of your character. These vertex positions were then interpolated linearly (along straight lines) during animation playback. Animation then transitioned to skeletal animation, partly because vertex counts increased and partly because interpolating bone rotations simply looks better than interpolating vertex positions along straight lines.

Then, as demand grew for deforming muscles (bulging when joints are bent) and face shapes on top of bone-based skeletal animation, vertex animation was added back in on top of the existing bone-based skeletal animation. Luckily these deformations require only a single keyframe to be well defined, such as a bulging biceps shape when the arm is bent. This single-frame vertex animation on top of skeletal animation was then called "blend shapes" or "morph targets" as a naming convention (and likely also as a marketing term to sell 3D editor features 😉 ). But in theory, "blend shapes" or "morph targets" are just single-keyframe additive vertex animation. In Spine terminology, vertex animation is called "deform keys", so if you create an animation with a single deform key, you have basically created a blend shape.

Hope this helped and didn't cause more confusion 🙂.

Thank you for your replies! I'm not sure, however, whether we're talking about the same thing.

The approach I've seen in other software is this: For lip sync, an artist will draw one bitmap per mouth shape, resulting in around six separate bitmaps. Then, they will apply the same deformation grid to each of these bitmaps. For instance, the image of a wide-open mouth may have 6 grid points around the mouth; the image of a puckered mouth will have the same 6 grid points around the mouth, "stuck" to the same areas of the mouth, but much closer to each other.

Transitioning between two images will then do a classic "morph": It will animate the 6 grid points of the first image towards the positions of the corresponding grid points in second image, making the first image take on the shape of the second one; it will do the reverse to the second image; and it will fade between the two.

Does additive animation blending support this kind of morphing?

Don't worry, we're talking about the same thing! I've prepared an example Spine project and attached it to this reply, although the shapes aren't perfect since I created them rather roughly.

The attached Spine project contains the following skeleton setup:

I prepared three bitmap images (01_square / 02_circle / 03_star.png) and I deformed a simple square image (mesh.png) to match each shape in each animation.

I hope this will help you.
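In case it's useful, this is roughly how the morph could be driven at runtime with spine-ts. I'm assuming the project's animations are named "square" and "circle" after the images; adjust the names to whatever the attached project actually uses.

```typescript
// Sketch only: crossfades two additively blended shape animations.
// Animation names are assumptions; check the attached project for the real ones.
import { AnimationState, MixBlend, TrackEntry } from "@esotericsoftware/spine-core";

function setupShapeTracks(state: AnimationState): { square: TrackEntry; circle: TrackEntry } {
    const square = state.setAnimation(1, "square", true);
    const circle = state.setAnimation(2, "circle", true);
    for (const entry of [square, circle]) {
        entry.mixBlend = MixBlend.add; // blend on top of the base pose on track 0
        entry.alpha = 0;
    }
    square.alpha = 1; // start fully in the square pose
    return { square, circle };
}

// Call with t going from 0 to 1 over the transition; complementary alphas do the morph.
function morphSquareToCircle(shapes: { square: TrackEntry; circle: TrackEntry }, t: number): void {
    shapes.square.alpha = 1 - t;
    shapes.circle.alpha = t;
}
```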

Wow, thank you very much! I'll study your example project after the holidays. In the meantime, I wish you all a very merry Christmas!