---
license: apache-2.0
base_model:
- CabalResearch/NoobAI-Flux2VAE-RectifiedFlow-0.3
library_name: diffusers
tags:
- lora
---

# Files:

## **Loras/Mugen-lanzcos-test-000001.safetensors**

Very small lora trained after replacing the downscaling algorithm used in sd-scripts (`cv2.INTER_AREA`) with `cv2.INTER_LANCZOS4`.

Works to increase sharpness and fine details when setting the positive conditioning's original size greater than the target size.

- ComfyUI nodes to make this easier [HERE](https://github.com/lRemixl/ComfyUI-sdxl-micro-conditioning)

My reasoning for this is in this [.docx file](Loras/documents/Reasoning%20behind%20lanzcos%20downscaling.docx)

## **Loras/Mugen-Consistency-Test-Muon-000004.safetensors**

Consistency lora trained similarly to the ones below, but with Muon.

Changes structure more than the other consistency loras.

## **Loras/Mugen_NoobFlux2RF_Test-000003.safetensors**

Produces different results, *normally* worse. Trained for [Mugen](https://huggingface.co/CabalResearch/Mugen) with an auxiliary training objective from https://arxiv.org/abs/2411.04873.

I decode both the ground-truth latent and the predicted clean latent through the first two `up_blocks` of the VAE, compare the resulting features against each other using L2 loss, and add that back onto the regular flow-matching loss at a weight of 0.1 (so `loss_total = flow_matching + 0.1 * latent_perceptual_loss`). I only did this when the timestep was less than 50% (`sigmas < 0.5`).

## **Loras/RF-Flux2VAE-Consistency-Test-000002.safetensors**

Trained by generating one forward pass from 20-30 timesteps before the target timestep with **no gradients**, then simulating one Euler step to the target timestep and using the resulting latent as the input to the model for training.
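A minimal NumPy sketch of that rollout, under my assumptions: a rectified-flow model predicting velocity, the 20-30 timestep jump expressed as a sigma offset, and a stand-in `model` callable (in real training the first forward pass would run under `torch.no_grad()`):

```python
import numpy as np

def simulate_consistency_input(model, x0, noise, sigma_target, jump):
    """Build the training input by simulating one sampling step.

    1. Noise the clean latent to an earlier sigma (sigma_target + jump).
    2. Run the model once (with no gradients in real training).
    3. Take one Euler step down to sigma_target.
    The result carries the discretization + misprediction error of a
    real sampling step, unlike a directly-noised latent.
    """
    sigma_prev = min(sigma_target + jump, 1.0)
    # Rectified-flow noising: x_t = (1 - sigma) * x0 + sigma * noise
    x_prev = (1.0 - sigma_prev) * x0 + sigma_prev * noise
    v_pred = model(x_prev, sigma_prev)  # forward pass (no grad)
    dt = sigma_target - sigma_prev      # negative: denoising direction
    return x_prev + dt * v_pred         # one Euler step
```

With a perfect velocity prediction (`v = noise - x0`) the Euler step is exact and this reduces to ordinary noising; any model error shows up as exactly the extra input error the lora is meant to absorb.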
So the model is trained on `clean latent + noise + discretization error + misprediction error from previous step`.

My thinking: at inference time the model **doesn't** receive only `clean latent + gaussian noise` as in training, but also `+ discretization error + misprediction error from previous step`.

Works on [Mugen](https://huggingface.co/CabalResearch/Mugen) too, but trained on [NoobAI-Flux2VAE-RectifiedFlow-0.3](https://huggingface.co/CabalResearch/NoobAI-Flux2VAE-RectifiedFlow-0.3).

The one below performs **better**.

## **Loras/RF-Flux2VAE-Consistency-Test-50-000002.safetensors**

Same exact settings as above, but trained using timestep jumps of 50-60. Performs **better**.
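For concreteness, the auxiliary objective described under `Mugen_NoobFlux2RF_Test-000003` above can be sketched as follows (a NumPy sketch, not the training code; `feats_pred`/`feats_gt` are hypothetical stand-ins for the features from the first two VAE `up_blocks`):

```python
import numpy as np

def combined_loss(v_pred, v_target, feats_pred, feats_gt, sigma, weight=0.1):
    """Flow-matching loss plus the auxiliary latent-perceptual term.

    `feats_pred` / `feats_gt` stand in for the first two VAE up_block
    features when decoding the predicted clean latent and the
    ground-truth latent; the extra term is only applied at low noise
    (sigma < 0.5), matching the recipe above.
    """
    flow = np.mean((v_pred - v_target) ** 2)
    if sigma < 0.5:
        return flow + weight * np.mean((feats_pred - feats_gt) ** 2)
    return flow
```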