Mirror of https://gitee.com/namelin2022/ollama

Branch: main

Branches:

api
bmizerany/client-registry
bmizerany/embedspeedup
bmizerany/fastverify
bmizerany/filepathnobuild
bmizerany/filepathwithcoloninhost
bmizerany/grammar
bmizerany/hrm
bmizerany/modenameenforcealphanum
bmizerany/nameswork
bmizerany/noseek
bmizerany/nosillyggufslurps
bmizerany/replacecolon
bmizerany/types/model/defaultfix
bmizerany/validatenames
bmizerany/x
bruce/iq-quants
brucemacd/allow-ollama
brucemacd/api-doc-formatting
brucemacd/benchmark-list
brucemacd/browser-key-register
brucemacd/cache-models
brucemacd/check-key-register
brucemacd/check-key-register-structured-err
brucemacd/community-docs
brucemacd/concurrent-fail
brucemacd/convert-cli
brucemacd/convert-valid-tests
brucemacd/create-no-loop
brucemacd/default-param-tag
brucemacd/doc-go-engine
brucemacd/e2e-benchmark
brucemacd/encode
brucemacd/err-hint
brucemacd/err-no-vocab
brucemacd/forward-test
brucemacd/go_qwen2
brucemacd/ignore-debug
brucemacd/install-path-clean
brucemacd/jomorganca/mistral
brucemacd/lib-wpath
brucemacd/llama-mem-calc
brucemacd/logprobs
brucemacd/mem-calc
brucemacd/mistral
brucemacd/mistral-small-convert
brucemacd/model-forward-test-ext
brucemacd/models_dir_tilde
brucemacd/new_runner_e2e
brucemacd/new_runner_graph_bench
brucemacd/new_runner_qwen2
brucemacd/next-bpe-bench
brucemacd/next-bpe-test
brucemacd/no-at-create
brucemacd/no-move-prompt-path
brucemacd/openai-chat
brucemacd/parallel-embed-models
brucemacd/partial-read-caps
brucemacd/push-name-validation
brucemacd/qwen25vl
brucemacd/qwen2_5
brucemacd/remove-ggml-runner
brucemacd/rope-config
brucemacd/ropeconfig
brucemacd/runner-completion
brucemacd/runner-test
brucemacd/shim-grammar
brucemacd/structured-api-errs
brucemacd/token-gen-timeout
brucemacd/tokenize
brucemacd/use-req-model-chat
brucemacd/user-template
build_dist
cgo
cp-model
cuda-search
delete-fix
deletemodels
dhiltgen/remove_submodule
distribution
drifkin/5483
drifkin/array-head-count
drifkin/array-head-count-simple
drifkin/chat-truncation-fix
drifkin/num-parallel
drifkin/print-template
editor
fix-model-names
fix-unknown-model
format-config
go-opts
gpt-oss-bump
insecure-registry
jessegross/bump-memory
jessegross/memory
jessegross/new_runner
jessegross/sample
jessegross/worst-multimodal
jmorgan/sample-fix-sorting-extras
jmorganca/add-missing-symlink-eval
jmorganca/batch-embeddings
jmorganca/cuda-compression-none
jmorganca/degin-1
jmorganca/done-reason
jmorganca/enable-fa
jmorganca/execstack
jmorganca/faster-releases
jmorganca/fix-gguf-error
jmorganca/fix-null-format
jmorganca/fix-proxy
jmorganca/ga
jmorganca/ggml-static
jmorganca/if-none-match
jmorganca/initcmake
jmorganca/limit
jmorganca/llama-bump
jmorganca/llama-cpp-7c26775
jmorganca/llama-cpp-8960fe8
jmorganca/llama-update-6
jmorganca/llama-vit
jmorganca/mistral
jmorganca/mistral-wip
jmorganca/mllama
jmorganca/mm
jmorganca/native
jmorganca/no-concat
jmorganca/no-error-template
jmorganca/openai-context
jmorganca/openai-fix-first-message
jmorganca/options
jmorganca/qwen25vl
jmorganca/qwen2vl
jmorganca/replace-assets
jmorganca/silence-tokenizer
jmorganca/sync
jmorganca/temp-0-images
jmorganca/template-mistral
jmorganca/testing
jmorganca/vendor-081b29bd
jyan/auth
jyan/convert-prog
jyan/format
jyan/local
jyan/local2
jyan/ollama-v
jyan/p2
jyan/paligemma
jyan/palitest
jyan/parse-temp
jyan/progress
jyan/q4_4/8
jyan/quant3
jyan/quant4
jyan/quant5
jyan/reord-g
jyan/v0.146
language_support
license-layers
list-models
ls
main
matt/examplemodelfiles
matt/streamingapi
mattw/airenamer
mattw/allmodelsonhuggingface
mattw/communitylinks
mattw/faq-context
mattw/howtoquant
mattw/noprune
mattw/python-functioncalling
mattw/quantcontext
mattw/selfqueryingretrieval
mattw/whatneedstorun
modelfile-readme
modelpath
modenameenforcealphanum
mxyng/16-bit
mxyng/api-models
mxyng/benchmark
mxyng/cleanup
mxyng/cmd-history
mxyng/convert
mxyng/create-context
mxyng/create-stdin
mxyng/environ-2
mxyng/extra-args
mxyng/fix-memory
mxyng/func-checks
mxyng/gguf
mxyng/gin-slog
mxyng/install
mxyng/layers-from-files
mxyng/llama4
mxyng/mllama
mxyng/modelname-5
mxyng/modelname-6
mxyng/modelname-7
mxyng/modelname-8
mxyng/next
mxyng/next-bert
mxyng/next-build
mxyng/next-debug
mxyng/next-mlx
mxyng/no-deprecated-gpu-targets
mxyng/omit-array
mxyng/parallel-create-blobs
mxyng/quant
mxyng/server-timestamp
mxyng/split-bin
mxyng/tune-concurrency
mxyng/update-registry-domain
mxyng/v3
native
nogogen
ollama.com
paligemma-support
parth/cmd-cleanup-SO
parth/constrained-sampling-json
parth/deepseek-r1-tools
parth/disallow-streaming-tools
parth/fix-default-to-warn-json
parth/fix-referencing-so
parth/log-probs
parth/next-sampling
parth/openai-stream-usage
parth/opt-in-error-context-window
parth/python-function-parsing
parth/python-tools-calling
parth/sample-correctness-fix
parth/sample-fix-sorting
parth/sample-so-test
parth/sample-unmarshal-json-for-params
parth/sampling-remove-model-loading-for-grammar
parth/sampling-structured-outputs
parth/server-enable-content-stream-with-tools
parth/server-improve-json-grammar
parth/set-context-size-openai
parth/templating
parth/tokenize-detokenize
parth/tool-prefix-temp
pdevine/authorizedkeys
pdevine/bfloat16
pdevine/convert-cohere2
pdevine/fix-template
pdevine/geems-2b
pdevine/gemma2
pdevine/ggla
pdevine/import-docs
pdevine/logging
pdevine/newlines
pdevine/ps-glitches
pdevine/showggmlinfo
progress-flicker
progressbar
pulse
qwen25omni
readme-updates
remove-first
rename
revert-5963-revert-5924-mxyng/llama3.1-rope
revert-991-brucemacd/history-api
rmdisplaylong
roy-embed-parallel
royh-embed-parallel
royh-imgembed
royh-ls
royh-name
royh-openai-delete
royh-openai-suffixdocs
royh-params
royh-precision
royh-show-rigid
royh-testdelete
royh/embed-viz
royh/ep-methods
royh/stream-tools
royh/whisper
scratch
shell
skip-list
stream-tools-stop
timeout
update-nous-hermes
upgrade-all
upload-progress
whitespace-detection

Tags:

v0.0.1
v0.0.10
v0.0.11
v0.0.12
v0.0.13
v0.0.14
v0.0.15
v0.0.16
v0.0.17
v0.0.18
v0.0.19
v0.0.2
v0.0.20
v0.0.21
v0.0.3
v0.0.4
v0.0.5
v0.0.6
v0.0.7
v0.0.8
v0.0.9
v0.1.0
v0.1.1
v0.1.10
v0.1.11
v0.1.12
v0.1.13
v0.1.14
v0.1.15
v0.1.16
v0.1.17
v0.1.18
v0.1.19
v0.1.2
v0.1.20
v0.1.21
v0.1.22
v0.1.23
v0.1.24
v0.1.25
v0.1.26
v0.1.27
v0.1.28
v0.1.29
v0.1.3
v0.1.30
v0.1.31
v0.1.32
v0.1.32-rc1
v0.1.32-rc2
v0.1.33
v0.1.33-rc1
v0.1.33-rc2
v0.1.33-rc3
v0.1.33-rc4
v0.1.33-rc5
v0.1.33-rc6
v0.1.33-rc7
v0.1.34
v0.1.34-rc1
v0.1.35
v0.1.35-rc1
v0.1.36
v0.1.37
v0.1.38
v0.1.39
v0.1.39-rc1
v0.1.39-rc2
v0.1.4
v0.1.40
v0.1.40-rc1
v0.1.41
v0.1.42
v0.1.43
v0.1.44
v0.1.45
v0.1.45-rc1
v0.1.45-rc2
v0.1.45-rc3
v0.1.45-rc4
v0.1.45-rc5
v0.1.46
v0.1.47
v0.1.48
v0.1.49-rc1
v0.1.49-rc10
v0.1.49-rc11
v0.1.49-rc12
v0.1.49-rc13
v0.1.49-rc14
v0.1.49-rc2
v0.1.49-rc3
v0.1.49-rc4
v0.1.49-rc5
v0.1.49-rc6
v0.1.49-rc7
v0.1.49-rc8
v0.1.49-rc9
v0.1.5
v0.1.6
v0.1.7
v0.1.8
v0.1.9
v0.10.0
v0.10.0-rc0
v0.10.0-rc1
v0.10.0-rc2
v0.10.0-rc3
v0.10.0-rc4
v0.10.1
v0.11.0
v0.11.1
v0.11.2
v0.11.3
v0.11.3-rc0
v0.11.4
v0.11.4-rc0
v0.2.0
v0.2.1
v0.2.2
v0.2.2-rc1
v0.2.2-rc2
v0.2.3
v0.2.4
v0.2.5
v0.2.6
v0.2.7
v0.2.8
v0.2.8-rc1
v0.2.8-rc2
v0.3.0
v0.3.1
v0.3.10
v0.3.10-rc1
v0.3.11
v0.3.11-rc1
v0.3.11-rc2
v0.3.11-rc3
v0.3.11-rc4
v0.3.12
v0.3.12-rc1
v0.3.12-rc2
v0.3.12-rc3
v0.3.12-rc4
v0.3.12-rc5
v0.3.13
v0.3.14
v0.3.14-rc0
v0.3.2
v0.3.3
v0.3.4
v0.3.5
v0.3.6
v0.3.7
v0.3.7-rc1
v0.3.7-rc2
v0.3.7-rc3
v0.3.7-rc4
v0.3.7-rc5
v0.3.7-rc6
v0.3.8
v0.3.9
v0.4.0
v0.4.0-ci3
v0.4.0-rc0
v0.4.0-rc1
v0.4.0-rc2
v0.4.0-rc3
v0.4.0-rc4
v0.4.0-rc5
v0.4.0-rc6
v0.4.0-rc7
v0.4.0-rc8
v0.4.1
v0.4.1-rc0
v0.4.2
v0.4.2-rc0
v0.4.2-rc1
v0.4.3
v0.4.3-rc0
v0.4.4
v0.4.5
v0.4.6
v0.4.7
v0.4.8-rc0
v0.5.0
v0.5.0-rc1
v0.5.1
v0.5.10
v0.5.11
v0.5.12
v0.5.12-rc0
v0.5.12-rc1
v0.5.13
v0.5.13-rc0
v0.5.13-rc1
v0.5.13-rc2
v0.5.13-rc3
v0.5.13-rc4
v0.5.13-rc5
v0.5.13-rc6
v0.5.2
v0.5.2-rc0
v0.5.2-rc1
v0.5.2-rc2
v0.5.2-rc3
v0.5.3
v0.5.3-rc0
v0.5.4
v0.5.5
v0.5.5-rc0
v0.5.6
v0.5.7
v0.5.8
v0.5.8-rc0
v0.5.8-rc1
v0.5.8-rc10
v0.5.8-rc11
v0.5.8-rc12
v0.5.8-rc13
v0.5.8-rc2
v0.5.8-rc3
v0.5.8-rc4
v0.5.8-rc5
v0.5.8-rc6
v0.5.8-rc7
v0.5.8-rc8
v0.5.8-rc9
v0.5.9
v0.5.9-rc0
v0.6.0
v0.6.0-rc0
v0.6.1
v0.6.1-rc0
v0.6.2
v0.6.2-rc0
v0.6.3
v0.6.3-rc0
v0.6.3-rc1
v0.6.4
v0.6.4-rc0
v0.6.5
v0.6.5-rc0
v0.6.5-rc1
v0.6.6
v0.6.6-rc0
v0.6.6-rc1
v0.6.6-rc2
v0.6.7
v0.6.7-rc0
v0.6.7-rc1
v0.6.7-rc2
v0.6.8
v0.6.8-rc0
v0.7.0
v0.7.0-rc0
v0.7.0-rc1
v0.7.1
v0.7.1-rc0
v0.7.1-rc1
v0.7.1-rc2
v0.8.0
v0.8.0-rc0
v0.9.0
v0.9.0-rc0
v0.9.1
v0.9.1-rc0
v0.9.1-rc1
v0.9.2
v0.9.3
v0.9.3-rc0
v0.9.3-rc1
v0.9.3-rc2
v0.9.3-rc3
v0.9.3-rc4
v0.9.3-rc5
v0.9.4
v0.9.4-citest0
v0.9.4-rc0
v0.9.4-rc1
v0.9.4-rc2
v0.9.4-rc3
v0.9.4-rc4
v0.9.4-rc5
v0.9.4-rc6
v0.9.5
v0.9.6
v0.9.6-rc0
v0.9.7-rc0
v0.9.7-rc1
16 Commits (main)

**5b446cc815** · chore: update gitattributes (#8860) · 1 year ago

* chore: update gitattributes
* chore: add build info source

**dcfb7a105c** · next build (#8539) · 1 year ago

* add build to .dockerignore
* test: only build one arch
* add build to .gitignore
* fix ccache path
* filter amdgpu targets
* only filter if autodetecting
* Don't clobber gpu list for default runner. This ensures the GPU-specific environment variables are set properly.
* explicitly set CXX compiler for HIP
* Update build_windows.ps1. This isn't complete, but it is close: dependencies are missing, and it only builds the "default" preset.
* build: add ollama subdir
* add .git to .dockerignore
* docs: update development.md
* update build_darwin.sh
* remove unused scripts
* llm: add cwd and build/lib/ollama to library paths
* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
* add additional cmake output vars for msvc
* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
* remove unnecessary filepath.Dir, cleanup
* add hardware-specific directory to path
* use absolute server path
* build: linux arm
* cmake install targets
* remove unused files
* ml: visit each library path once
* build: skip cpu variants on arm
* build: install cpu targets
* build: fix workflow
* shorter names
* fix rocblas install
* docs: clean up development.md
* consistent build dir removal in development.md
* silence -Wimplicit-function-declaration build warnings in ggml-cpu
* update readme
* update development readme
* llm: update library lookup logic now that there is one runner (#8587)
* tweak development.md
* update docs
* add windows cuda/rocm tests

Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>

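One line item above, defaulting DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in the runner on macOS, is worth a closer look, since macOS (notably under System Integrity Protection) can strip DYLD_* variables before they reach a child process. A minimal sketch of that fallback follows; the names (`runnerEnv`, `./runner`) are illustrative assumptions, not the repository's actual code:

```go
package main

import (
	"os"
	"os/exec"
	"runtime"
)

// runnerEnv returns the environment for the runner subprocess. On
// macOS it defaults DYLD_LIBRARY_PATH to LD_LIBRARY_PATH when the
// former is unset, mirroring the commit message above. Hypothetical
// sketch, not Ollama's actual identifiers.
func runnerEnv() []string {
	env := os.Environ()
	if runtime.GOOS == "darwin" && os.Getenv("DYLD_LIBRARY_PATH") == "" {
		env = append(env, "DYLD_LIBRARY_PATH="+os.Getenv("LD_LIBRARY_PATH"))
	}
	return env
}

func main() {
	cmd := exec.Command("./runner") // hypothetical runner binary
	cmd.Env = runnerEnv()
	_ = cmd.Run()
}
```
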
**b754f5a6a3** · Remove submodule and shift to Go server - 0.4.0 (#7157) · 1 year ago

* Remove llama.cpp submodule and shift new build to top
* CI: install msys and clang gcc on win. Needed for deepseek to work properly on windows.

**cd7e01e8b9** · fix vendoring attribute for metal (#7156) · 1 year ago

Add missing metal files to vendoring list.

**7a962bd802** · fix vendoring attribute (#7155) · 1 year ago

Expand out the file extensions for vendored code so git reports the status correctly.

**96efd9052f** · Re-introduce the `llama` package (#5034) · 1 year ago

* Re-introduce the llama package

  This PR brings back the llama package, making it possible to call llama.cpp and ggml APIs from Go directly via CGo. This has a few advantages:

  - C APIs can be called directly from Go without needing to use the previous "server" REST API
  - On macOS and for CPU builds on Linux and Windows, Ollama can be built without a `go generate ./...` step, making it easy to get up and running to hack on parts of Ollama that don't require fast inference
  - Faster build times for AVX, AVX2, CUDA and ROCm (a full build of all runners takes <5 min on a fast CPU)
  - No git submodule, making it easier to clone and build from source

  This is a big PR, but much of it is vendor code except for:

  - llama.go: CGo bindings
  - example/: a simple example of running inference
  - runner/: a subprocess server designed to replace the llm/ext_server package
  - Makefile: an as-minimal-as-possible Makefile to build the runner package for different targets (cpu, avx, avx2, cuda, rocm)

* cache: Clear old KV cache entries when evicting a slot

  When forking a cache entry, if no empty slots are available we evict the least recently used one and copy over the KV entries from the closest match. However, this copy does not overwrite existing values but only adds new ones. Therefore, we need to clear the old slot first.

  This change fixes two issues:

  - The KV cache fills up and runs out of space even though we think we are managing it correctly
  - Performance gets worse over time as we use new cache entries that are not hot in the processor caches

* doc: explain golang objc linker warning (#6830)

* llama: gather transitive dependencies for rocm for dist packaging (#6848)

* Refine go server makefiles to be more DRY (#6924)

  This breaks up the monolithic Makefile for the Go-based runners into a set of utility files as well as recursive Makefiles for the runners. Files starting with the name "Makefile" are buildable, while files that end with ".make" are utilities to include in other Makefiles. This reduces the number of nearly identical targets and helps set a pattern for future community contributions for new GPU runner architectures.

  When we are ready to switch over to the Go runners, these files should move to the top of the repo, and we should add targets for the main CLI, as well as a helper "install" target (put all the built binaries on the local system in a runnable state) and a "dist" target (generate the various tar/zip files for distribution) for local developer use.

* llama: don't create extraneous directories (#6988)

* llama: Exercise the new build in CI (#6989)

  Wire up some basic sanity testing in CI for the Go runner. GPU runners are not covered yet.

* llama: Refine developer docs for Go server (#6842)

  This enhances the documentation for development, focusing on the new Go server. After we complete the transition, further doc refinements can remove the "transition" discussion.

* runner.go: Allocate batches for all sequences during init

  We should tell the model that we could have full batches for all sequences. We already do this when we allocate the batches, but it was missed during initialization.

* llama.go: Don't return nil from Tokenize on zero-length input

  Potentially receiving nil in a non-error condition is surprising to most callers; it's better to return an empty slice.

* runner.go: Remove stop tokens from cache

  If the last token is EOG then we don't return it and it isn't present in the cache (because it was never submitted to Decode). This works well for extending the cache entry with a new sequence. However, for multi-token stop sequences, we won't return any of the tokens, but all but the last one will be in the cache. This means that when the conversation continues, the cache will contain tokens that don't overlap with the new prompt. This works (we will pick up the portion where there is overlap), but it causes unnecessary cache thrashing because we will fork the original cache entry, as it is not a perfect match. By trimming the cache to the tokens that we actually return, this issue can be avoided.

* runner.go: Simplify flushing of pending tokens

* runner.go: Update TODOs

* runner.go: Don't panic when processing sequences

  If there is an error processing a sequence, we should return a clean HTTP error back to Ollama rather than panicking. This will make us more resilient to transient failures. Panics can still occur during startup, as there is no way to serve requests if that fails.

* runner.go: More accurately capture timings

  Currently prompt processing time doesn't capture the time it takes to tokenize the input, only decoding time. We should capture the full process to more accurately reflect reality. This is especially true once we start processing images, where the initial processing can take significant time. This is also more consistent with the existing C++ runner.

* runner.go: Support for vision models

  In addition to bringing feature parity with the C++ runner, this also incorporates several improvements:

  - Cache prompting works with images, avoiding the need to re-decode embeddings for every message in a conversation
  - Parallelism is supported, avoiding the need to restrict to one sequence at a time. (Though for now Ollama will not schedule them while we might need to fall back to the old runner.)

* runner.go: Move Unicode checking code and add tests

* runner.go: Export external cache members

  Runner and cache are in the same package, so the change doesn't affect anything, but it is more internally consistent.

* runner.go: Image embedding cache

  Generating embeddings from images can take significant time (on my machine, between 100ms and 8s depending on the model). Although we already cache the result of decoding these images, the embeddings need to be regenerated every time. This is not necessary if we get the same image over and over again, for example during a conversation. This currently uses a very small cache with a very simple algorithm, but it is easy to improve as is warranted.

* llama: catch up on patches

  Carry forward solar-pro and cli-unicode patches.

* runner.go: Don't re-allocate memory for every batch

  We can reuse memory allocated from batch to batch since batch size is fixed. This both saves the cost of reallocation and keeps the cache lines hot. This results in a roughly 1% performance improvement for token generation with Nvidia GPUs on Linux.

* runner.go: Default to classic input cache policy

  The input cache as part of the Go runner implemented a cache policy that aims to maximize hit rate in both single- and multi-user scenarios. When there is a cache hit, the response is very fast. However, performance is actually slower when there is an input cache miss, due to worse GPU VRAM locality. This means that performance is generally better overall for multi-user scenarios (better input cache hit rate; locality was relatively poor already) but worse for single users (input cache hit rate is about the same; locality is now worse).

  This defaults the policy back to the old one to avoid a regression, but keeps the new one available through an environment variable, OLLAMA_MULTIUSER_CACHE. This is left undocumented, as the goal is to improve this in the future to get the best of both worlds without user configuration. For inputs that result in cache misses, on Nvidia/Linux this change improves performance by 31% for prompt processing and 13% for token generation.

* runner.go: Increase size of response channel

  Generally the CPU can easily keep up with handling responses that are generated, but there's no reason not to let generation continue and handle things in larger batches if needed.

* llama: Add CI to verify all vendored changes have patches (#7066)

  Make sure we don't accidentally merge changes in the vendored code that aren't also reflected in the patches.

* llama: adjust clip patch for mingw utf-16 (#7065)

  - llama: adjust clip patch for mingw utf-16
  - llama: ensure static linking of runtime libs, avoiding runtime dependencies on non-standard libraries

* runner.go: Enable llamafile (all platforms) and BLAS (macOS)

  These are two features shown in llama.cpp's system info that are currently different between the two runners. On my test systems the performance difference is very small to negligible, but it is probably still good to equalize the features.

* llm: Don't add BOS/EOS for tokenize requests

  This is consistent with what server.cpp currently does. It affects things like token processing counts for embedding requests.

* runner.go: Don't cache prompts for embeddings

  Our integration with server.cpp implicitly disables prompt caching because it is not part of the JSON object being parsed; this makes the Go runner behave similarly. Prompt caching has been seen to affect the results of text completions on certain hardware. The results are not wrong either way, but they are non-deterministic. However, embeddings seem to be affected even on hardware that does not show this behavior for completions. For now, it is best to maintain consistency with the existing behavior.

* runner.go: Adjust debug log levels

  Add system info printed at startup and quiet down noisier logging.

* llama: fix compiler flag differences (#7082)

  Adjust the flags for the new Go server to more closely match the generate flow.

* llama: refine developer docs (#7121)

* llama: doc and example clean up (#7122)

  - llama: Move new dockerfile into llama dir (temporary home until we fully transition to the Go server)
  - llama: runner doc cleanup
  - llama.go: Add description for Tokenize error case

Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
Co-authored-by: jmorganca <jmorganca@gmail.com>

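Two of the runner fixes in this commit are easy to illustrate. When forking into an evicted slot, the copy only adds KV entries, so the destination has to be cleared first or stale tokens linger and the cache "fills up" exactly as the message describes; and Tokenize returns an empty, non-nil slice for zero-length input so callers can't mistake an empty result for an error. A minimal sketch under assumed types and names (`slot`, `forkInto`, `tokenize` are illustrative, not Ollama's actual runner code):

```go
package runner

// slot is an assumed stand-in for a KV-cache slot; the real cache
// tracks considerably more state.
type slot struct {
	inputs   []int // tokens whose KV entries occupy this slot
	lastUsed int64
}

// forkInto copies the closest-matching slot's tokens into an evicted
// slot. Clearing the destination first is the fix: append only adds
// entries, so stale tokens would otherwise linger, silently filling
// the cache and degrading locality over time.
func forkInto(dst, src *slot, now int64) {
	dst.inputs = dst.inputs[:0] // clear the old slot first
	dst.inputs = append(dst.inputs, src.inputs...)
	dst.lastUsed = now
}

// tokenize illustrates the zero-length contract: return an empty,
// non-nil slice rather than nil, which reads as an error to most Go code.
func tokenize(text string) ([]int, error) {
	if len(text) == 0 {
		return []int{}, nil
	}
	ids := []int{ /* produced by the cgo call into llama.cpp */ }
	return ids, nil
}
```
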
**d4e6407464** · Restrict text files with explicit line feeds to *.go · 2 years ago

This partially reverts

**67472e0e89** · Also flag *.icns as binary · 2 years ago

**ce67706037** · Set *.png and *.ico to be treated as binary files · 2 years ago

The change

**b732beba6a** · lint · 2 years ago

**f7dc7dcc64** · Update .gitattributes · 2 years ago

**04f971c84b** · fix golangci workflow missing gofmt and goimports (#4190) · 2 years ago

**9164b0161b** · Update .gitattributes · 2 years ago

**59fbceedcc** · use lf for line endings (#4085) · 2 years ago

**38daf0a252** · rename `.gitattributes` · 2 years ago

**2dce1ab40b** · add `llm/ext_server` directory to `linguist-vendored` (#3173) · 2 years ago