Saturday, August 3, 2024
What is the scheduler in Stable Diffusion?
1. What is the scheduler?
- Choosing the Best SDXL Samplers for Image Generation
- Schedulers in AI Image Generation
- The ML developer's guide to Schedulers in Stable Diffusion
2. Sampler or scheduler?
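A minimal sketch, assuming the Hugging Face diffusers library: what many UIs call the "sampler" is exposed in diffusers as the pipeline's scheduler object, and it can be swapped on an existing pipeline. The model id and step count below are placeholders.

from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# load a pipeline; it comes with a default scheduler
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# swap in a different scheduler, reusing the existing scheduler's config
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
image.save("astronaut.png")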
Monday, July 29, 2024
Transformer step by step
Original Paper: Attention Is All You Need
- The Illustrated Transformer by Jay Alammar
- https://www.linkedin.com/posts/eordax_transformers-ai-genai-activity-7221423755506421760-4fyh?utm_source=share&utm_medium=member_desktop
- https://www.linkedin.com/posts/eordax_transformers-encoder-ai-activity-7221063158302425088-kGLD?utm_source=share&utm_medium=member_desktop
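The core operation the links above walk through is the scaled dot-product attention from the paper, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch, with shapes chosen arbitrarily for illustration:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of every query with every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # attention-weighted sum of the values

Q = np.random.randn(4, 8)   # 4 query positions, d_k = 8
K = np.random.randn(6, 8)   # 6 key positions
V = np.random.randn(6, 8)   # 6 value vectors
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)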
Monday, July 22, 2024
About Quantization
Quantization
https://paperswithcode.com/task/quantization
Vector Quantization
- https://ieeexplore.ieee.org/document/1162229
- GPTVQ
- Example: https://scikit-learn.org/stable/auto_examples/cluster/plot_face_compress.html (a small K-means sketch follows at the end of this post)
- Maarten Grootendorst
- Quantization FP16, FP8, and INT8
Apple, WWDC
- https://developer.apple.com/videos/play/wwdc2023/10047/
- https://apple.github.io/coremltools/docs-guides/source/opt-overview.html (website)
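As referenced above, a minimal vector-quantization sketch, assuming scikit-learn is available (the data here is random; the scikit-learn face-compression example linked above does the same thing on a real image): K-means learns a small codebook, and every vector is replaced by the index of its nearest codebook entry.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 8)).astype(np.float32)     # 1000 vectors of dimension 8

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(data)
codebook = kmeans.cluster_centers_                       # 16 x 8 codebook
codes = kmeans.predict(data)                             # one 4-bit index per vector
reconstructed = codebook[codes]                          # dequantized vectors

print("mean squared error:", np.mean((data - reconstructed) ** 2))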
Sunday, July 14, 2024
How to convert an array to a vector in C++ (the fast way)
array to vector
#include <vector>

constexpr int vec_size = 5;
float a[vec_size] = {0, 1, 2, 3, 4};
std::vector<float> vec_a(a, a + vec_size); // good: the range constructor copies the whole array in one call
- https://sites.google.com/site/hashemian/home/tips-and-tricks/copy-array-cpp
- https://stackoverflow.com/questions/8777603/what-is-the-simplest-way-to-convert-array-to-vector
- https://www.freecodecamp.org/news/cpp-vector-how-to-initialize-a-vector-in-a-constructor/ <- how to initialize a vector from an array in C++
#include <vector>
#include <algorithm>

constexpr int vec_size = 5;
float a[vec_size] = {0, 1, 2, 3, 4};
std::vector<float> vec_a(a, a + vec_size); // array -> vector via the range constructor, good

float b[vec_size] = {};
std::copy(vec_a.begin(), vec_a.end(), b);  // vector -> array via std::copy, good
Thursday, July 4, 2024
Python float to hexadecimal & hexadecimal to float, and default float vs. fp32
import struct

def float_to_hex(f):
    # pack as little-endian float32, then reinterpret the 4 bytes as an unsigned int
    return hex(struct.unpack('<I', struct.pack('<f', f))[0])

def hex_to_float(h):
    # parse the hex digits as big-endian bytes and reinterpret them as float32
    return struct.unpack('!f', bytes.fromhex(h))[0]

hex_val = "0xbf557ca4"
float_val = hex_to_float(hex_val.replace("0x", ""))
print(f"-0.8339 -> 0xbf557ca4 -> {float_val} <- -0.8339331150054932")
----
-0.8339 -> 0xbf557ca4 -> -0.8339331150054932 <- -0.8339331150054932
import numpy as np
fp32_value = np.float32(-0.8339)
print(f"-0.8339 -> fp32: {fp32_value}")
----
output: -0.8339 -> fp32: -0.833899974822998
fp64_value = -0.8339
print(f"-0.8339 -> fp64: {fp64_value}")
----
output: -0.8339 -> fp64: -0.8339
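A small companion sketch using the same struct trick for the default 64-bit Python float (the helper name is mine): a double packs into 8 bytes, so its hex form has 16 digits instead of the 8 digits of fp32.

import struct

def double_to_hex(d):
    # pack as little-endian float64, reinterpret the 8 bytes as an unsigned 64-bit int
    return hex(struct.unpack('<Q', struct.pack('<d', d))[0])

print(double_to_hex(-0.8339))   # 16 hex digits, versus 8 for float_to_hex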
Monday, June 24, 2024
Quantization FP16, FP8, or INT8
The article below links to many of the relevant papers, available for download.
Floating-point arithmetic for AI inference — hit or miss?
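Before comparing FP8 and INT8, a minimal sketch of symmetric INT8 quantization in NumPy (my own illustration, not code from the article): values are scaled so the largest magnitude maps to 127, rounded to int8, and dequantized with the same scale.

import numpy as np

x = np.random.randn(5).astype(np.float32)

scale = np.abs(x).max() / 127.0                              # largest magnitude maps to 127
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)  # quantize
x_hat = q.astype(np.float32) * scale                         # dequantize

print(x)        # original fp32 values
print(q)        # int8 codes
print(x_hat)    # close to x, up to rounding error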