The model and the API are no longer provided; you can use the dataset I compiled instead: click here.
This repo contains the modifications I made to the VITS source code in order to train a Genshin Impact VITS model, as well as the new config files.
Please note that the APIs described below have all been shut down.
Thanks to 星尘 and the National Supercomputer Center in Guangzhou for providing computing power, thanks to the VITS authors Jaehyeon Kim, Jungil Kong, and Juhee Son, and thanks to Kaizhi Qian, the author of ContentVec.
All audio files used to train this model are the copyright of miHoYo Technology (Shanghai) Co., Ltd. (米哈游科技(上海)有限公司).
Supported speakers:
['派蒙', '凯亚', '安柏', '丽莎', '琴', '香菱', '枫原万叶',
'迪卢克', '温迪', '可莉', '早柚', '托马', '芭芭拉', '优菈',
'云堇', '钟离', '魈', '凝光', '雷电将军', '北斗',
'甘雨', '七七', '刻晴', '神里绫华', '戴因斯雷布', '雷泽',
'神里绫人', '罗莎莉亚', '阿贝多', '八重神子', '宵宫',
'荒泷一斗', '九条裟罗', '夜兰', '珊瑚宫心海', '五郎',
'散兵', '女士', '达达利亚', '莫娜', '班尼特', '申鹤',
'行秋', '烟绯', '久岐忍', '辛焱', '砂糖', '胡桃', '重云',
'菲谢尔', '诺艾尔', '迪奥娜', '鹿野院平藏']
Query string parameters:
Parameter | Type | Value |
---|---|---|
text | string | The text to synthesize. Common punctuation is supported. English may not be generated correctly; convert digits to the corresponding Chinese characters before generating. |
speaker | string | The speaker name. Must be one of the names listed above. |
noise | float | The noise_factor used during generation; controls the degree of variation in emotion and similar qualities. Default: 0.667. |
format | string | The output audio format; must be mp3 or wav. Default: mp3. |
Example: http://233366.proxy.nscc-gz.cn:8888/?text=你好&speaker=枫原万叶
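For reference, here is a minimal sketch of how such a request could be made with Python's requests library. The endpoint above has been shut down, so this is purely illustrative: the output filename and the suggestion to point it at your own deployment (e.g. this repo's server.py) are assumptions, and only the parameters from the table above are used.

```python
import requests

# Query-string parameters as described in the table above.
params = {
    "text": "你好",          # text to synthesize (convert digits to Chinese characters first)
    "speaker": "枫原万叶",   # must be one of the supported speaker names
    "noise": 0.667,          # noise_factor controlling expressive variation
    "format": "mp3",         # "mp3" or "wav"
}

# NOTE: this public endpoint is offline; replace the URL with your own deployment.
resp = requests.get("http://233366.proxy.nscc-gz.cn:8888/", params=params, timeout=60)
resp.raise_for_status()

with open("output.mp3", "wb") as f:  # hypothetical output path
    f.write(resp.content)
```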
In addition, you can try the public API at http://233366.proxy.nscc-gz.cn:8888/. This API may be used for fan creations and similar purposes, but any commercial use is prohibited.
Please note that repeated generations will not sound identical; you can generate several times and keep the best result.
A web interface for synthesis is also available: http://150.158.164.18:9069/
Thanks to 星尘 and the National Supercomputer Center in Guangzhou for providing computing power, and thanks to the VITS authors Jaehyeon Kim, Jungil Kong, and Juhee Son. All audio files used to train this model are the copyright of miHoYo Technology (Shanghai) Co., Ltd.
Query string parameters:
Parameter | Type | Value |
---|---|---|
text | string | The text to synthesize. Common punctuation is supported. English may not be generated correctly; convert digits to the corresponding Chinese characters before generating. |
speaker | string | The speaker name. Must be one of the names listed above. |
noise | float | The noise_factor used during generation; controls the degree of variation in emotion and similar qualities. Default: 0.667. |
noisew | float | The noise_factor_w used during generation; controls variation in phoneme durations. Default: 0.8. |
length | float | The length_factor used during generation; controls the overall speaking rate. Default: 1.2. |
format | string | The output audio format; must be mp3 or wav. Default: mp3. |
Example: http://233366.proxy.nscc-gz.cn:8888/?text=你好&speaker=派蒙
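The extra noisew and length parameters combine with the earlier ones in the same way; a short, equally hypothetical sketch of the full query string (endpoint offline, values beyond the defaults are only examples):

```python
import requests

# All parameters from the table above.
params = {
    "text": "你好",
    "speaker": "派蒙",
    "noise": 0.667,   # noise_factor: expressive variation
    "noisew": 0.8,    # noise_factor_w: phoneme-duration variation
    "length": 1.2,    # length_factor: overall speaking-rate factor
    "format": "wav",
}

resp = requests.get("http://233366.proxy.nscc-gz.cn:8888/", params=params, timeout=60)
resp.raise_for_status()
with open("output.wav", "wb") as f:  # hypothetical output path
    f.write(resp.content)
```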
In our recent paper, we propose VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.
Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.
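To make the data flow in that description concrete, below is a toy, self-contained sketch of the inference path: a text encoder produces a prior distribution, a duration predictor decides how many frames each text state gets, a sample from the prior is pushed through the (inverse) normalizing flow, and a decoder turns the latent frames into audio features. Every module here is a deliberately simplified stand-in, not the implementation in this repo's models.py.

```python
import torch
import torch.nn as nn

class ToyVITSInference(nn.Module):
    """Illustrative stand-in for the VITS inference path described above."""
    def __init__(self, n_vocab=100, hidden=192, n_out=80):
        super().__init__()
        self.text_enc = nn.Embedding(n_vocab, hidden)   # prior text encoder (simplified)
        self.proj = nn.Linear(hidden, hidden * 2)        # prior mean and log-variance
        self.dur_pred = nn.Linear(hidden, 1)             # stand-in for the stochastic duration predictor
        self.flow = nn.Linear(hidden, hidden)            # stand-in for the inverse normalizing flow
        self.dec = nn.Linear(hidden, n_out)              # stand-in for the waveform decoder

    def forward(self, tokens, noise_scale=0.667, length_scale=1.2):
        h = self.text_enc(tokens)                                   # [T_text, hidden]
        mean, logvar = self.proj(h).chunk(2, dim=-1)
        # Predicted frame count per text state; length_scale stretches or compresses durations.
        dur = torch.clamp(torch.exp(self.dur_pred(h)) * length_scale, min=1).long().squeeze(-1)
        mean = mean.repeat_interleave(dur, dim=0)                   # expand to frame level
        logvar = logvar.repeat_interleave(dur, dim=0)
        # Sample the prior; noise_scale controls how much the output varies between runs.
        z_p = mean + torch.randn_like(mean) * torch.exp(0.5 * logvar) * noise_scale
        z = self.flow(z_p)                                          # map prior sample to latent frames
        return self.dec(z)                                          # decode latent frames to audio features

tokens = torch.randint(0, 100, (12,))     # a dummy phoneme-ID sequence
print(ToyVITSInference()(tokens).shape)   # [total predicted frames, n_out]
```

The noise_scale and length_scale arguments here play the same roles as the noise and length query parameters in the API tables above.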
Visit our demo for audio samples.
We also provide the pretrained models.
Update note: Thanks to Rishikesh (ऋषिकेश), our interactive TTS demo is now available on Colab Notebook.
Figure: VITS at training (left) and VITS at inference (right).
apt-get install espeak
ln -s /path/to/LJSpeech-1.1/wavs DUMMY1
ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2
# Cython-version Monotonic Alignment Search
cd monotonic_align
python setup.py build_ext --inplace
# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK have been already provided.
# python preprocess.py --text_index 1 --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt
# python preprocess.py --text_index 2 --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt
# LJ Speech
python train.py -c configs/ljs_base.json -m ljs_base
# VCTK
python train_ms.py -c configs/vctk_base.json -m vctk_base
See inference.ipynb
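For reference, single-speaker inference in that notebook boils down to roughly the following; this is a condensed sketch, so the checkpoint and config paths are placeholders, and the exact calls (including the multi-speaker variant, which additionally passes a sid tensor to infer) should be checked against inference.ipynb itself.

```python
import torch
import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

def get_text(text, hps):
    # Convert raw text to a phoneme-ID sequence, optionally interspersing blank tokens.
    seq = text_to_sequence(text, hps.data.text_cleaners)
    if hps.data.add_blank:
        seq = commons.intersperse(seq, 0)
    return torch.LongTensor(seq)

hps = utils.get_hparams_from_file("configs/ljs_base.json")
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model).cuda().eval()
utils.load_checkpoint("/path/to/pretrained_ljs.pth", net_g, None)  # placeholder path

stn_tst = get_text("VITS is awesome!", hps)
with torch.no_grad():
    x_tst = stn_tst.cuda().unsqueeze(0)
    x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).cuda()
    # noise_scale / noise_scale_w / length_scale play the same roles as the API's
    # noise / noisew / length parameters described earlier.
    audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=0.667,
                        noise_scale_w=0.8, length_scale=1)[0][0, 0].cpu().numpy()
```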