[Unity] Spine editor and efficient output


Hi!

Another noob question I guess. I'm making a dress-up game for kids.

I have a character in Spine that has 6 skins - 3 different races with male and female. On top of that I have 6 different wardrobes in male and female versions. Each wardrobe has items for head, eyes, neck, torso, arms, legs and feet. There are a lot of items - each wardrobe has 72 images.
I have the races and genders in skins, and the clothes and items in attachments such as lower_clothes, left_shoe, right_shoe, etc.
Each of the attachments just has all of the possible items for all of the wardrobes.

Now I'm wondering how best to streamline the output from Spine and the usage of these in Unity. If I let Spine output everything then I guess I'll end up with a bunch of texture atlases for all of the attachments that will all need to be loaded into memory. This won't work.
Maybe I need to limit what is loaded for a skeleton to a specific race and gender and to one wardrobe? In that case, I'd like the output from Spine (the texture atlases) to be generated per race, per gender and per wardrobe.
I guess this can't be achieved automatically, but if I'm to do it manually, what should my workflow look like?

I think the ultimate would be if the dressing part of the game could mix and match wardrobes, and that would mean some sort of dynamic texture atlas. So when the kids drag in some shoes from the pirate wardrobe, a dress from the princess wardrobe and a sweatband from the sports wardrobe, I wouldn't need the texture atlases for all of these in memory at once.
Or should I just have the images separate (not in atlases) and then build up a texture atlas for the currently dressed character as the items are added to the character? How would I do this?

I have no idea if I'm on the right track or not.

Thanks, Mark

You can control what images are packed together, if that meets your needs. See here:
https://github.com/libgdx/libgdx/wiki/Texture-packer

A dynamic texture atlas could be most efficient if it isn't feasible to pack images in such a way that draw calls are minimized. I don't have experience building an atlas at runtime with Unity though. FWIW, with libgdx you'd use PixmapPacker.

You can control which textures are loaded and can even load them lazily by using your own AttachmentLoader. This is also how you'd use individual images instead of an atlas, if you wanted.
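As a rough idea (just a sketch; the AttachmentLoader interface and its method signatures vary between runtime versions), the key point is to create attachments without resolving any textures up front:

```csharp
using Spine;

// Conceptual lazy loader: create the attachment and remember its image path,
// but don't load or bind any texture yet. The real interface also declares
// methods for the other attachment types; implement those the same way.
public class LazyAttachmentLoader /* : AttachmentLoader */ {
    public RegionAttachment NewRegionAttachment (Skin skin, string name, string path) {
        var attachment = new RegionAttachment(name);
        attachment.Path = path; // which image this attachment wants, resolved later
        return attachment;
    }
}
```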

How many images will you have attached at once? How many skeletons on screen at once? What platforms are you targeting? If texture binds aren't a bottleneck, you could get by with packing images you expect to be drawn together as best you can and just have multiple binds.

Yeah, I think Unity has the ability to pack images at runtime. But I've never done it myself either.
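If you go that route, Texture2D.PackTextures is the built-in API for it. A rough, untested sketch (the source textures need to be readable, and you still have to map the returned UV rects onto whatever renders the items):

```csharp
using UnityEngine;

public static class RuntimeAtlas {
    // Packs the currently equipped item images into one atlas texture.
    // PackTextures returns the UV rect of each source texture inside the atlas.
    public static Texture2D Pack (Texture2D[] items, out Rect[] uvs) {
        var atlas = new Texture2D(2048, 2048);
        uvs = atlas.PackTextures(items, 2, 2048); // 2 px padding, max size 2048
        return atlas;
    }
}
```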

I was also thinking about how Spine might be able to take advantage of Unity 4.3+'s built-in sprite packer. There are hints in their scripting docs that it's theoretically possible.
But it's currently in beta and I don't think the APIs are solid yet. And right now some access to it requires Unity Pro, so I haven't been able to try it at all.

Nate wrote:

A dynamic texture atlas could be most efficient if it isn't feasible to pack images in such a way that draw calls are minimized. I don't have experience building an atlas at runtime with Unity though. FWIW, with libgdx you'd use PixmapPacker.

The problem I have is not draw calls but the amount of memory required for all of the atlases if I have all of the wardrobes loaded at once. That would be about 8.5 texture atlases at 2048x2048! They are going to be optimized a little more, but I don't think we can get under 7 atlases at 2048x2048 for all of the wardrobes. Then on top of that we have the base body images for the characters, plus other GUI stuff, backgrounds, etc.

Nate wrote:

You can control which textures are loaded and can even load them lazily by using your own AttachmentLoader. This is also how you'd use individual images instead of an atlas, if you wanted.

This sounds very interesting and is probably the best way to do it if I am thinking correctly about how it might work ;-)
I'm guessing I can hook into some sort of callback whenever an attachment's image is needed? Then I could either just hand off individual images, or I could conceivably have a blank 2048x2048 texture atlas that I add the individual images to one by one.
Handing off individual images would be the easiest, but I don't know how to do that or whether it would work.
I'll look into the AttachmentLoader then and see if I can figure it out.

Nate wrote:

How many images will you have attached at once?

I will have 1 image for each attachment for a total of 8 images at once, on top of the base skin.

Nate wrote:

How many skeletons on screen at once?

Only 1 skeleton on the dressing screen.

Nate wrote:

What platforms are you targeting?

Mobile platforms only, starting with iOS and Android.

Nate wrote:

If texture binds aren't a bottleneck, you could get by with packing images you expect to be drawn together as best you can and just have multiple binds.

I don't know what texture binds are? I guess it means how many textures are loaded at once? In that case it would be too many, with 8 textures at 2048x2048.

Pharan wrote:

I was also thinking about how Spine might be able to take advantage of Unity 4.3+'s built-in sprite packer. There are hints in their scripting docs that it's theoretically possible.
But it's currently in beta and I don't think the APIs are solid yet. And right now some access to it requires Unity Pro, so I haven't been able to try it at all.

It would be great if you could choose how the packing should be done from Spine for output to Unity. Just being able to tag things in the tree according to which atlas you want them added to, or something like that? An easier workflow for noobs like me would be good ;-)

Thanks for helping me learn this stuff guys!

mark wrote:

The problem I have is not draw calls but the amount of memory required for all of the atlases if I have all of the wardrobes loaded at once. That would be about 8.5 texture atlases at 2048x2048! They are going to be optimized a little more, but I don't think we can get under 7 atlases at 2048x2048 for all of the wardrobes. Then on top of that we have the base body images for the characters, plus other GUI stuff, backgrounds, etc.

Then you'll need to not load them all. 🙂 You can use individual images for each part. You'll have one bind per attachment, but as long as it isn't a bottleneck you are fine. That or build an atlas at runtime.

I'm guessing I can hook into some sort of callback whenever an attachment's image is needed?

You would just not load the textures when the attachments are loaded. Once you have your skeleton with all the attachments you want attached, you would only load images for the attachments on the skeleton.
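Roughly like this, once the skeleton is dressed (just a sketch; the resource path and how you hand the texture to your renderer are placeholders for your own setup):

```csharp
using Spine;
using UnityEngine;

public static class WardrobeTextures {
    // Walk the skeleton's slots and load images only for what is actually attached.
    public static void LoadAttachedImages (Skeleton skeleton) {
        foreach (Slot slot in skeleton.Slots) {
            var attachment = slot.Attachment as RegionAttachment;
            if (attachment == null) continue; // empty slot or non-region attachment
            // Hypothetical path scheme; use whatever maps attachment names to files.
            var texture = Resources.Load<Texture2D>("Wardrobe/" + attachment.Name);
            if (texture != null) {
                // ...assign it to the material/renderer that draws this attachment.
            }
        }
    }
}
```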

I don't know what texture binds are?

Read the link above.

It would be great if you could choose how the packing should be done from Spine for output to Unity. Just being able to tag things in the tree according to which atlas you want them added to, or something like that?

Read the link above. You can use subdirectories to control what images are packed together.

Good luck! 🙂 Feel free to ask more questions if needed.

If it makes you feel any better, if you're targeting high-DPI screens, large VRAM consumption is almost inevitable.
But most devices with high-resolution screens also come with hefty memory, so you shouldn't worry too much.

But yeah, you can control which images go to which atlas.
To have an easier time, you just need to arrange the images that go together in one atlas to also be together in their own folder, separate from the others.
And... I guess, read the link? https://github.com/libgdx/libgdx/wiki/Texture-packer

Nate wrote:

You would just not load the textures when the attachments are loaded. Once you have your skeleton with all the attachments you want attached, you would only load images for the attachments on the skeleton.

I think this is my way forward.
OK, so that means moving over to doing everything in code, right?
At the moment I set up a Spine Atlas, materials for each atlas, and then a Spine SkeletonData where I've connected the Atlas Asset and the Skeleton JSON. Then in my scene I have a Spine SkeletonAnimation with the SkeletonData connected. This is all done in the Unity project and hierarchy.

How would I go about doing this in code if that is what is needed?
Would I just create a SkeletonJson myself and set my own AttachmentLoader class?
I can dig some more in the code myself, but any sort of code example to help me on my way would be appreciated ;-)

Pharan wrote:

If it makes you feel any better, if you're targeting high-DPI screens, large VRAM consumption is almost inevitable.

Yeah, I realise this - I just want to get it working at this stage :-)

Thanks, Mark

SkeletonDataAsset calls SkeletonJson(Atlas), which uses an AtlasAttachmentLoader. This is where you need to make the change to use your own AttachmentLoader. Nothing else needs to change.
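A minimal sketch of that change, assuming the SkeletonJson(AttachmentLoader) constructor from spine-csharp (adjust names to your runtime version):

```csharp
using System.IO;
using Spine;

public static class SkeletonLoading {
    // Instead of new SkeletonJson(atlas), which wraps an AtlasAttachmentLoader,
    // pass in your own AttachmentLoader implementation.
    public static SkeletonData Load (string jsonText, AttachmentLoader customLoader, float scale) {
        var json = new SkeletonJson(customLoader);
        json.Scale = scale; // whatever scale you already use
        return json.ReadSkeletonData(new StringReader(jsonText));
    }
}
```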

Nate wrote:

SkeletonDataAsset calls SkeletonJson(Atlas), which uses an AtlasAttachmentLoader. This is where you need to make the change to use your own AttachmentLoader. Nothing else needs to change.

OK, but I have a base skeleton with attachments that will use images from an atlas, so the default AtlasAttachmentLoader behavior is wanted there.
But then for the clothes slots I want to be able to load the single images directly and return those, rather than have them in an atlas.
So would I need to hack SkeletonJson for this, or could I just hard-code something in AtlasAttachmentLoader? For example, when attachments are missing from the atlas, I could call my own SingleImageAttachmentLoader?

Am I on the right track?

You could do it either way. 🙂
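For the second option, the check could look something like this (sketch only; it assumes Atlas.FindRegion from spine-csharp, and the loose-image attachment still needs its texture resolved later):

```csharp
using Spine;

// Wraps the normal atlas lookup and falls back to a loose image when the
// region isn't in the atlas. Hook this into your own AttachmentLoader;
// exact signatures vary between runtime versions.
public class FallbackRegionSource {
    readonly Atlas atlas;
    readonly AtlasAttachmentLoader atlasLoader;

    public FallbackRegionSource (Atlas atlas) {
        this.atlas = atlas;
        this.atlasLoader = new AtlasAttachmentLoader(atlas);
    }

    public RegionAttachment NewRegionAttachment (Skin skin, string name, string path) {
        if (atlas.FindRegion(path) != null)
            return atlasLoader.NewRegionAttachment(skin, name, path); // normal atlas case
        var attachment = new RegionAttachment(name); // loose-image case:
        attachment.Path = path;                      // resolve its texture later
        return attachment;
    }
}
```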