anything-llm/frontend
Timothy Carambat 61db981017
feat: Embed on-instance Whisper model for audio/mp4 transcribing (#449)
* feat: Embed on-instance Whisper model for audio/mp4 transcribing
resolves #329

* additional logging

* add placeholder for tmp folder in collector storage
* add cleanup of hotdir and tmp on collector boot to prevent hanging files
* split model loading and file conversion into concurrent steps

* update README

* update model size

* update supported filetypes
2023-12-15 11:20:13 -08:00
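
The sketch below illustrates the approach the commit describes: load a locally cached Whisper model and decode the uploaded audio concurrently, then transcribe entirely on-instance. It assumes the collector runs under Node.js with the `@xenova/transformers` and `wavefile` packages available; the model name `Xenova/whisper-small`, the `transcribeLocally` and `decodeWav` helpers, and the WAV-only input are illustrative assumptions, not the project's actual code.

```js
// Minimal sketch: on-instance speech-to-text with a locally cached Whisper model.
// Assumes @xenova/transformers and wavefile are installed; helper names are hypothetical.
import { pipeline } from "@xenova/transformers";
import wavefile from "wavefile";
import fs from "fs";

async function transcribeLocally(wavPath) {
  // Load the Whisper pipeline and decode the audio file concurrently,
  // mirroring the "split model loading and file conversion" change.
  const [transcriber, audioData] = await Promise.all([
    pipeline("automatic-speech-recognition", "Xenova/whisper-small"),
    decodeWav(wavPath),
  ]);

  // Chunked transcription keeps memory bounded for long recordings.
  const { text } = await transcriber(audioData, {
    chunk_length_s: 30,
    stride_length_s: 5,
  });
  return text;
}

// Hypothetical helper: decode a WAV file into the 16kHz mono Float32 samples
// the pipeline expects (audio/mp4 input would be converted to WAV first).
async function decodeWav(wavPath) {
  const wav = new wavefile.WaveFile(fs.readFileSync(wavPath));
  wav.toBitDepth("32f");
  wav.toSampleRate(16000);
  let samples = wav.getSamples();
  if (Array.isArray(samples)) samples = samples[0]; // keep the first channel only
  return samples;
}
```

Running the model load and the audio decode inside a single Promise.all keeps the slower of the two steps from serializing behind the other, which is the point of the concurrency split noted in the commit message.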
| Name | Last commit | Date |
| --- | --- | --- |
| public | Robots.txt (#369) | 2023-11-13 15:22:24 -08:00 |
| src | feat: Embed on-instance Whisper model for audio/mp4 transcribing (#449) | 2023-12-15 11:20:13 -08:00 |
| .env.example | update local dev docs | 2023-08-16 15:34:03 -07:00 |
| .eslintrc.cjs | [Fork] Additions on franzbischoff resolution on #122 (#152) | 2023-07-20 11:14:23 -07:00 |
| .gitignore | untrack frontend production env | 2023-08-16 12:16:02 -07:00 |
| .nvmrc | bump node version requirement | 2023-06-08 10:29:17 -07:00 |
| index.html | add feedback form, hosting link, update readme, show promo image | 2023-08-11 17:28:30 -07:00 |
| jsconfig.json | chore: add @ as alias for frontend root (#414) | 2023-12-07 09:09:01 -08:00 |
| package.json | Enable chat streaming for LLMs (#354) | 2023-11-13 15:07:30 -08:00 |
| postcss.config.js | inital commit | 2023-06-03 19:28:07 -07:00 |
| tailwind.config.js | AnythingLLM UI overhaul (#278) | 2023-10-23 13:10:34 -07:00 |
| vite.config.js | chore: add @ as alias for frontend root (#414) | 2023-12-07 09:09:01 -08:00 |
| yarn.lock | Enable chat streaming for LLMs (#354) | 2023-11-13 15:07:30 -08:00 |