Is your problem related to the AMD video card, by any chance?
I had an error while installing (my video card is an RX 7600), so I'm looking for a solution; I don't really want to generate on my CPU.
Comments
Any chance for a DirectML or Vulkan backend?
I will take a look at it.
Will this be receiving Linux support? That would be awesome to see.
At the moment I am focusing on making this stable enough for Windows for the next update, but yes, the plan is to add Linux and Mac support later.
That is understandable, I am happy to hear that :)
I have no experience hosting AI models locally, so setting everything up was quite a hassle for me. In the end it would have been great if it worked, but it doesn't. It took 25 minutes to generate just one image, and then I got an error loading the BiRefNet model: "We couldn't connect to 'https://huggingface.co' to load the files, and couldn't find them in the cached files." That's it, no image. I'm not mad, I'm just sad.
25 minutes to generate means one of two things: either the server didn't find your GPU and fell back to the CPU, or you simply have an underperforming PC. I would suggest waiting for the next update, as the server and the extension have been upgraded and tested on multiple GPUs (RTX 3080, 5090, A40, H100, A100). The new extension will also have a support ticket system with a log viewer, and if you are still having issues I can better assist you there.
Thank you for the answer, I'll wait to try it out. Also, thank you for making and sharing this project.
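If you suspect the server fell back to CPU, you can ask the NVIDIA driver directly whether it sees a GPU before blaming the server. A small sketch that shells out to `nvidia-smi` (ships with the NVIDIA driver; the function returns an empty list when no driver or GPU is present):

```python
import subprocess

def visible_nvidia_gpus():
    """Return a list of GPU names the NVIDIA driver can see, or [] if none/no driver."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return []  # nvidia-smi not installed or hung: the server would fall back to CPU
    if out.returncode != 0:
        return []  # driver present but no usable GPU reported
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

print(visible_nvidia_gpus())
```

An empty list means the driver itself sees no GPU, so any PyTorch-based server on the same machine will also fall back to CPU.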
How do I get Nvidia 50-series cards to work?
If you're having trouble with 50-series cards, please wait for the major update in a few weeks. It features a full system redesign to fix the issues people are experiencing. I'll also be opening support tickets alongside that update to provide one-on-one help for any remaining problems.
That's awesome. Any chance of reference image input?
Yeah, it will be in there. If the models support it, you will be able to use it.
When I try to run the server, I get this message:
Now starting Python setup and server...
Python will handle dependency installation and server startup.
'venv\Scripts\activate.bat' is not recognized as an internal or external command,
operable program or batch file.
✗ Failed to activate virtual environment
Press any key to continue . . .
Any ideas?
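For anyone hitting the same error: it usually means the `venv` folder was never created (or only half-created), so the activate script doesn't exist. Deleting the folder and letting Python rebuild it often fixes this. A minimal cross-platform sketch, run from the server's install folder (the folder name `venv` matches the path in the log above):

```python
import shutil
import venv

# Remove the broken or partial virtual environment, if any.
shutil.rmtree("venv", ignore_errors=True)

# Recreate it from scratch; with_pip=True bootstraps pip inside the venv
# so the server's dependency install step can run afterwards.
venv.create("venv", with_pip=True)
```

After this, re-running the server's start script should find `venv\Scripts\activate.bat` (on Windows) or `venv/bin/activate` (elsewhere).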
Where is the button?
File -> Local AI Generator
Hi,
It's using my CPU at the moment; how do I switch to my GPU? I've got an RTX 4070, but the server gives me this. I'm pretty sure I have almost 6000 CUDA cores to use:
⚠️ No CUDA GPU detected - using CPU (slower)
Consider using a NVIDIA GPU for better performance
Hello! I have an RTX 5070 Ti and a similar problem. I spent the whole evening on it, but managed to get it working:
The problem is that the CUDA version of PyTorch used by the library author does not yet support newer graphics cards.
Accordingly, the script throws an error during download and switches to CPU mode.
The solution that helped me:
1) I tried many different versions of Python, but ultimately settled on Python 3.10.11.
2) In the startup_script.py file, I changed line 79 to
3) Start the installation from the beginning.
In other words, I downloaded a different (and nightly!) build of PyTorch so that everything works for me. CUDA 13+ is required for my video card. Unfortunately, as far as I understand, there is no release build for Windows with CUDA 13, only a nightly build.
Do this at your own risk :) This comment is not a call to action xD
Perhaps the library author will suggest a better solution. :)
Thanks, I'll try it out 😋👍
Nice Work 👏!
Where can one contact you?
Really appreciate you making this tool. I was having trouble running the models locally on my machine, so I adapted your tool to integrate with OpenAI and Stability AI so the image generation could be cloud based:
https://textarcade.itch.io/pixelforgeai
Yeah, I was having to use my CPU, and it was overall just taking a very long time to generate the images, so I found that using a cloud-based solution made the workflow a lot faster.
Hey, I can't get Aseprite to connect to the server. The install seemed fine. The refresh connection button does nothing, and when trying to generate it says "cannot create data file".
I'm using self-built aseprite v1.3.15.4. Think that's the problem? Thanks for your work, hope to try it out.
Hey, is there no video tutorial? I don't know how to use it. I've run the server until it's ready, but it still shows offline.
Hey, did you find a solution? I was getting the same issue on my MacBook.
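If the console says the server is ready but the extension still reports offline, it's worth checking that the port is actually reachable from the same machine. A minimal sketch (the host and port here are placeholders; use whatever address the server console actually prints):

```python
import socket

def server_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: check the address/port the server console reports.
print(server_reachable("127.0.0.1", 8000))
```

If this prints False while the server console says it's running, the server may be bound to a different port or blocked by a firewall, which would explain the extension showing offline.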
Hi, I can't download any models, offline or online.
📦 Loading default model: stabilityai/stable-diffusion-xl-base-1.0
📥 Loading base model: stabilityai/stable-diffusion-xl-base-1.0
🔧 Loading SDXL pipeline with optimized VAE...
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]`torch_dtype` is deprecated! Use `dtype` instead!
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]
❌ Error loading model stabilityai/stable-diffusion-xl-base-1.0: CLIPTextModelWithProjection.__init__() got an unexpected keyword argument 'offload_state_dict'
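For what it's worth, an `unexpected keyword argument 'offload_state_dict'` error usually points to mismatched `diffusers`/`transformers` versions (one library passes an argument the other no longer accepts). A quick sketch to see which versions are actually installed before re-pinning them to a compatible pair:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string for a package, or None if not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ("diffusers", "transformers", "torch"):
    print(f"{pkg}: {installed_version(pkg)}")
```

With those versions in hand, you can check the diffusers release notes for which transformers range it supports and pin accordingly.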
Hi, thank you so much for your work, and thank you for making it free! <3 I hope it can also be used for animations, that would be fantastic. And if we could add a mannequin to help the algorithm understand how to position the character, even better! I think that would make the animations more consistent :)
Hi there, I made a tweak to this for running it under Linux (and it might work for macOS). If you don't want to use this directly, the original file is just a zip: extract it and change all instances of "curl.exe" in http-client.lua to just "curl".
Linux version of extension (gdrive link).
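If you'd rather patch the original zip yourself, the edit described above is a one-string replacement, which works the same on any OS. A small sketch (the filename `http-client.lua` comes from the comment above; the helper function itself is just illustrative):

```python
from pathlib import Path

def patch_curl(lua_file: str) -> int:
    """Replace every "curl.exe" with "curl" in the given file; return the count replaced."""
    p = Path(lua_file)
    text = p.read_text()
    count = text.count("curl.exe")
    p.write_text(text.replace("curl.exe", "curl"))
    return count

# Usage (after extracting the extension zip into the current folder):
#   patch_curl("http-client.lua")
```

The returned count is a quick sanity check that the file you edited actually contained the Windows-specific binary name.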
Can't we use our own custom models? Only SDXL base and SD 1.5?
Yeah, if you are handy with coding it's pretty easy to add your own models; you just have to modify the script a bit. I am also going to release a new update soon where you can add your own models, plus some other features. I was not expecting this to get as much publicity as it did; this was something I worked on for about two days, so a lot of things are missing.
Is it possible to create character sprites AND their animations?
The first one yes, the second one no, as of now.
Hi. Using 1.3.14 here. When I open the model settings, the base model is "connection failed" and attempting to generate an image returns "Generation Failed: Empty or no response from server".
Make sure your local AI server is running; the extension will try to connect to it. The extension does not work without you first opening the local server, which is called start_server. Just run that and it will do everything automatically. Keep it open; if you close it, the extension fails.
Oh, that's my bad! Sorry for wasting your time
Is it possible to integrate with ComfyUI + ZLUDA? It's for AMD users.
I also have an SD.Next setup, so after reading your files it seems that I can use my existing SD.Next setup instead.
I'd also like this feature, as I have an RX 6750 XT.