This repository contains 3D meshes of letters created from the Omniglot dataset.
Each letter is represented as a .glb file. The meshes are estimated from SVG images.
Requires Python 3.9.
Environment setup:
# CONDA #
conda create --name omniglot3d python=3.9
conda activate omniglot3d
pip install -r requirements.txt
# VENV #
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.9 python3.9-venv
python3.9 -m venv omniglot3d_venv
source omniglot3d_venv/bin/activate
pip install -r requirements.txt
From 2D to 3D pipeline:
- Run get_data.sh to load the data from the original repo, or put the original Omniglot data into the ./data/ folder (you need to extract images_background.zip).
- Run python image2svg.py to generate SVG contours for the extrusion meshes.
- Run python svg2mesh.py to generate "broken" .glb files (something is wrong with the trimesh .glb export for normals); the meshes themselves are fine, but they need to be resaved in Blender.
- Run blender --background --python fix_glbs.py to resave the .glb files (tested with Blender v3.0.1). This might take some time.
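The core idea behind the SVG-to-mesh step (svg2mesh.py itself is not reproduced here) is extruding a 2D contour into a solid. A minimal numpy sketch, using a made-up square contour and height, with a fan triangulation that assumes a convex contour:

```python
import numpy as np

def extrude_contour(contour, height):
    """Extrude a convex 2D contour (N x 2, counter-clockwise) into a
    watertight 3D mesh, returned as (vertices, triangle faces)."""
    n = len(contour)
    bottom = np.column_stack([contour, np.zeros(n)])
    top = np.column_stack([contour, np.full(n, height)])
    vertices = np.vstack([bottom, top])          # bottom ring, then top ring

    faces = []
    # Side walls: two triangles per contour edge.
    for i in range(n):
        j = (i + 1) % n
        faces.append([i, j, n + j])
        faces.append([i, n + j, n + i])
    # Caps: fan triangulation (valid only because the contour is convex).
    for i in range(1, n - 1):
        faces.append([0, i + 1, i])              # bottom cap, faces downward
        faces.append([n, n + i, n + i + 1])      # top cap, faces upward
    return vertices, np.array(faces)

# Example: unit square extruded to height 2.0 -> a box of 8 vertices, 12 triangles
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
verts, faces = extrude_contour(square, 2.0)
print(verts.shape, faces.shape)  # (8, 3) (12, 3)
```

Real letter contours are concave, so the actual script needs a proper polygon triangulation for the caps (e.g. what trimesh does internally); this sketch only shows the extrusion structure.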
The final .glb meshes will be in ./processed_data/fixed_glbs with the same folder structure as the original Omniglot.
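A small sketch for walking that output tree; the root path comes from the line above, while the alphabet/character grouping is assumed to mirror the original Omniglot layout:

```python
from pathlib import Path

def list_glbs(root="./processed_data/fixed_glbs"):
    """Collect all .glb files under root, grouped by their parent folder
    (which mirrors the Omniglot alphabet/character structure)."""
    grouped = {}
    for glb in sorted(Path(root).rglob("*.glb")):
        grouped.setdefault(str(glb.parent.relative_to(root)), []).append(glb.name)
    return grouped

for folder, files in list_glbs().items():
    print(folder, len(files))
```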
If you just want the data, there are two archives in ./processed_data, for training and evaluation respectively. Use them as you see fit.
Synthetic 3D data of generic (but human) origin for detection and localization tasks in 3D.
- The meshes seem to be rotated 180 degrees around the y-axis (upside down).
- The topology is not great; a marching cubes pass may be in order (as part of the Blender step, but it would take even longer).
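If the upside-down orientation matters for your use case, a 180-degree rotation about the y-axis can be applied to the vertex array after loading. A minimal numpy sketch (with trimesh you could equally build the matrix with trimesh.transformations.rotation_matrix and apply it via apply_transform):

```python
import numpy as np

def rotate_y_180(vertices):
    """Rotate an (N, 3) vertex array 180 degrees around the y-axis:
    x -> -x, y -> y, z -> -z. A proper rotation, so face winding is preserved."""
    R = np.array([[-1.0, 0.0,  0.0],
                  [ 0.0, 1.0,  0.0],
                  [ 0.0, 0.0, -1.0]])
    return vertices @ R.T

v = np.array([[1.0, 2.0, 3.0]])
print(rotate_y_180(v))  # x and z are negated: (-1, 2, -3)
```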
I want to acknowledge again the original Omniglot repo and all the people involved, as well as the original website.
