Mirror of https://github.com/unslothai/unsloth, synced 2026-04-21 13:37:39 +00:00
Studio (#4237)
* Rebuild Studio branch on top of main

* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

* Fix security and code quality issues for Studio PR #4237
  - Validate models_dir query param against allowed directory roots to prevent path traversal in the /api/models/local endpoint
  - Replace string startswith() with Path.is_relative_to() for the frontend path traversal check in serve_frontend
  - Sanitize SSE error messages so they do not leak exception details to clients (4 locations in inference.py)
  - Bind the port-discovery socket to 127.0.0.1 instead of all interfaces in the llama_cpp backend
  - Import datasets_root and resolve_output_dir in the embedding training function to fix a NameError and use the managed output directory
  - Remove stale .gitignore entries for package-lock.json and test directories so tests can be tracked in version control
  - Add venv-reexecution logic to the ui CLI command matching the studio command behavior

* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

* Move models_dir path validation before the try/except block. The HTTPException(403) was inside the try/except Exception handler, so it would be caught and re-raised as a 500. Moving the validation before the try block ensures the 403 is returned directly and also makes the control flow clearer for static analysis (the path is validated before any filesystem operations).

* Use os.path.realpath + startswith for models_dir validation. CodeQL py/path-injection does not recognize Path.is_relative_to() as a sanitizer. Switched to os.path.realpath + str.startswith, which is a recognized sanitizer pattern in CodeQL's taint analysis. The startswith check uses root_str + os.sep to prevent prefix collisions (e.g. /app/models_evil matching /app/models).

* Never pass user input to the Path constructor in models_dir validation. CodeQL traces taint through Path(resolved) even after a startswith barrier guard. Fix: the user-supplied models_dir is only used as a string for comparison against allowed roots. The Path object passed to _scan_models_dir comes from the trusted allowed_roots list, not from user input. This fully breaks the taint chain.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent: bced78373f
commit: f08aef1804
664 changed files with 103567 additions and 4 deletions
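The models_dir hardening described in the message reduces to one pattern: realpath the untrusted string, compare it against realpath'd allowed roots with a separator-suffixed startswith, and only ever hand a trusted Path onward. A minimal sketch, assuming a hypothetical allow-list (names are illustrative, not the actual endpoint code):

import os
from pathlib import Path

# Hypothetical allow-list; the real roots come from the Studio configuration.
ALLOWED_ROOTS = [Path("/app/models"), Path("/data/models")]

def resolve_models_dir(user_value: str) -> Path:
    # realpath collapses symlinks and "..", so the comparison below
    # cannot be bypassed with traversal sequences.
    resolved = os.path.realpath(user_value)
    for root in ALLOWED_ROOTS:
        root_str = os.path.realpath(str(root))
        # root_str + os.sep prevents prefix collisions such as
        # /app/models_evil matching /app/models.
        if resolved == root_str or resolved.startswith(root_str + os.sep):
            # Return the trusted Path from the allow-list, never a Path
            # built from user input; this is what breaks the taint chain.
            return root
    raise PermissionError("models_dir outside allowed roots")  # mapped to HTTP 403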
.gitignore (vendored, 42 changes)

@@ -3,6 +3,15 @@ __pycache__/
 *.py[cod]
 *.class
 unsloth_compiled_cache/
+# ML artifacts (large files)
+feature/
+outputs/
+exports/
+/datasets/
+studio/backend/assets/datasets/
+unsloth_training_checkpoints/
+*.gguf
+*.safetensors

 # C extensions
 *.so
@@ -136,6 +145,9 @@ venv/
 ENV/
 env.bak/
 venv.bak/
+.venv_overlay/
+.venv_t5/
+environment.yaml

 # Spyder project settings
 .spyderproject
@@ -172,6 +184,34 @@ cython_debug/
 .ruff_cache/
 .pre-commit-cache/

-# PyPI configuration file
+# PyPI configuration file and IDE/Editors
 .pypirc
+.vscode
+.idea/
+.claude/
+*.swp
+*.swo

+# oh-my-codex
+.omx/
+
+# Firebase
+firebase-debug.log
+
+# Other
+resources/
+tmp/
+**/node_modules/
+auth.db
+
+# Local working docs
+**/CLAUDE.md
+**/claude.md
+**/AGENT.md
+**/agent.md
+docs/canvas-lab-architecture.md
+log_rtx.txt
+log.txt
+setup_leo.sh
+server.pid
+*.log
COPYING (new file, 664 lines)

@@ -0,0 +1,664 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

The precise terms and conditions for copying, distribution and
modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Remote Network Interaction; Use with the GNU General Public License.

Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

Files under unsloth/*, tests/*, scripts/* are Apache 2.0 licensed.
Files under studio/*, cli/* which is optional to install are AGPLv3 licensed.
LICENSE (4 changes)

@@ -186,7 +186,9 @@
 same "printed page" as the copyright notice for easier
 identification within third-party archives.

-Copyright [2024-] [Unsloth AI, Daniel Han-Chen & Michael Han-Chen]
+Copyright [2024-] [Unsloth AI. Inc team, Daniel Han-Chen & Michael Han-Chen]
+Files under unsloth/*, tests/*, scripts/* are Apache 2.0 licensed.
+Files under studio/*, cli/* which is optional to install are AGPLv3 licensed.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
build.sh (new file, 20 lines)

@@ -0,0 +1,20 @@
#!/usr/bin/env bash

set -euo pipefail

# 1. Build frontend (Vite outputs to dist/)
cd studio/frontend
npm install
npm run build  # outputs to studio/frontend/dist/
cd ../..

# 2. Clean old artifacts
rm -rf build dist *.egg-info

# 3. Build wheel
python -m build

# 4. Optionally publish
if [ "${1:-}" = "publish" ]; then
    python -m twine upload dist/*
fi
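In other words: running "./build.sh" builds the frontend and the wheel, and "./build.sh publish" additionally uploads dist/* with twine.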
cli.py (new file, 7 lines)

@@ -0,0 +1,7 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

from cli import app

if __name__ == "__main__":
    app()
cli/__init__.py (new file, 22 lines)

@@ -0,0 +1,22 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

import typer

from cli.commands.train import train
from cli.commands.inference import inference
from cli.commands.export import export, list_checkpoints
from cli.commands.ui import ui
from cli.commands.studio import studio_app

app = typer.Typer(
    help = "Command-line interface for Unsloth training, inference, and export.",
    context_settings = {"help_option_names": ["-h", "--help"]},
)

app.command()(train)
app.command()(inference)
app.command()(export)
app.command("list-checkpoints")(list_checkpoints)
app.command()(ui)
app.add_typer(studio_app, name = "studio", help = "Unsloth Studio commands.")
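Since the subcommands are plain functions registered on a typer.Typer app, the composition can be smoke-tested without loading any model. A minimal sketch using Typer's built-in test runner (the assertion targets are illustrative, assuming the package and its imports are available):

from typer.testing import CliRunner

from cli import app

runner = CliRunner()
result = runner.invoke(app, ["--help"])
assert result.exit_code == 0
# Registered command names should appear in the help text.
assert "list-checkpoints" in result.output
assert "studio" in result.output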
cli/commands/__init__.py (new file, 2 lines)

@@ -0,0 +1,2 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
cli/commands/export.py (new file, 132 lines)

@@ -0,0 +1,132 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

from pathlib import Path
from typing import Optional

import typer


EXPORT_FORMATS = ["merged-16bit", "merged-4bit", "gguf", "lora"]
GGUF_QUANTS = ["q4_k_m", "q5_k_m", "q8_0", "f16"]


def list_checkpoints(
    outputs_dir: Path = typer.Option(
        Path("./outputs"), "--outputs-dir", help = "Directory that holds training runs."
    ),
):
    """List checkpoints detected in the outputs directory."""
    from studio.backend.core.export import ExportBackend

    backend = ExportBackend()
    checkpoints = backend.scan_checkpoints(outputs_dir = str(outputs_dir))
    if not checkpoints:
        typer.echo("No checkpoints found.")
        raise typer.Exit()

    for model_name, ckpt_list, metadata in checkpoints:
        typer.echo(f"\n{model_name}:")
        for display, path, loss in ckpt_list:
            loss_str = f" (loss: {loss:.4f})" if loss is not None else ""
            typer.echo(f" {display}{loss_str}: {path}")


def export(
    checkpoint: Path = typer.Argument(..., help = "Path to checkpoint directory."),
    output_dir: Path = typer.Argument(..., help = "Directory to save exported model."),
    format: str = typer.Option(
        "merged-16bit",
        "--format",
        "-f",
        help = f"Export format: {', '.join(EXPORT_FORMATS)}",
    ),
    quantization: str = typer.Option(
        "q4_k_m",
        "--quantization",
        "-q",
        help = f"GGUF quantization method: {', '.join(GGUF_QUANTS)}",
    ),
    push_to_hub: bool = typer.Option(
        False, "--push-to-hub", help = "Push exported model to HuggingFace Hub."
    ),
    repo_id: Optional[str] = typer.Option(
        None, "--repo-id", help = "HuggingFace repo ID (username/model-name)."
    ),
    hf_token: Optional[str] = typer.Option(
        None, "--hf-token", envvar = "HF_TOKEN", help = "HuggingFace token."
    ),
    private: bool = typer.Option(
        False, "--private", help = "Make the HuggingFace repo private."
    ),
    max_seq_length: int = typer.Option(2048, "--max-seq-length"),
    load_in_4bit: bool = typer.Option(True, "--load-in-4bit/--no-load-in-4bit"),
):
    """Export a checkpoint to various formats (merged, GGUF, LoRA adapter)."""
    if format not in EXPORT_FORMATS:
        typer.echo(
            f"Error: Invalid format '{format}'. Choose from: {', '.join(EXPORT_FORMATS)}",
            err = True,
        )
        raise typer.Exit(code = 2)

    if push_to_hub and not repo_id:
        typer.echo("Error: --repo-id required when using --push-to-hub", err = True)
        raise typer.Exit(code = 2)

    from studio.backend.core.export import ExportBackend

    backend = ExportBackend()

    typer.echo(f"Loading checkpoint: {checkpoint}")
    success, message = backend.load_checkpoint(
        checkpoint_path = str(checkpoint),
        max_seq_length = max_seq_length,
        load_in_4bit = load_in_4bit,
    )
    if not success:
        typer.echo(f"Error: {message}", err = True)
        raise typer.Exit(code = 1)
    typer.echo(message)

    typer.echo(f"Exporting as {format}...")
    if format == "merged-16bit":
        success, message = backend.export_merged_model(
            save_directory = str(output_dir),
            format_type = "16-bit (FP16)",
            push_to_hub = push_to_hub,
            repo_id = repo_id,
            hf_token = hf_token,
            private = private,
        )
    elif format == "merged-4bit":
        success, message = backend.export_merged_model(
            save_directory = str(output_dir),
            format_type = "4-bit (FP4)",
            push_to_hub = push_to_hub,
            repo_id = repo_id,
            hf_token = hf_token,
            private = private,
        )
    elif format == "gguf":
        success, message = backend.export_gguf(
            save_directory = str(output_dir),
            quantization_method = quantization.upper(),
            push_to_hub = push_to_hub,
            repo_id = repo_id,
            hf_token = hf_token,
        )
    elif format == "lora":
        success, message = backend.export_lora_adapter(
            save_directory = str(output_dir),
            push_to_hub = push_to_hub,
            repo_id = repo_id,
            hf_token = hf_token,
            private = private,
        )

    if not success:
        typer.echo(f"Error: {message}", err = True)
        raise typer.Exit(code = 1)

    typer.echo(message)
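A hypothetical invocation, assuming the console entry point is installed as "unsloth" (the studio command below hints at that name): unsloth export ./outputs/checkpoint-60 ./exports/my-model --format gguf --quantization q4_k_m. Note that GGUF quantization names are upper-cased before being passed to the backend, and --push-to-hub requires --repo-id.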
cli/commands/inference.py (new file, 69 lines)

@@ -0,0 +1,69 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

import sys
from typing import Optional

import typer


def inference(
    model: str = typer.Argument(..., help = "HF model id or local path."),
    prompt: str = typer.Argument(..., help = "Prompt to send to the model."),
    hf_token: Optional[str] = typer.Option(
        None, "--hf-token", envvar = "HF_TOKEN", help = "Hugging Face token if needed."
    ),
    temperature: float = typer.Option(0.7, "--temperature"),
    top_p: float = typer.Option(0.9, "--top-p"),
    top_k: int = typer.Option(40, "--top-k"),
    max_new_tokens: int = typer.Option(256, "--max-new-tokens"),
    repetition_penalty: float = typer.Option(1.1, "--repetition-penalty"),
    system_prompt: str = typer.Option(
        "",
        "--system-prompt",
        help = "Optional system prompt to prepend.",
    ),
    max_seq_length: int = typer.Option(2048, "--max-seq-length"),
    load_in_4bit: bool = typer.Option(True, "--load-in-4bit/--no-load-in-4bit"),
):
    """Run a single inference using the specified model."""
    from studio.backend.core import ModelConfig, get_inference_backend

    inference_backend = get_inference_backend()
    model_config = ModelConfig.from_ui_selection(
        dropdown_value = model, search_value = None, hf_token = hf_token, is_lora = False
    )
    if not model_config:
        typer.echo("Could not resolve model config", err = True)
        raise typer.Exit(code = 1)

    if not inference_backend.load_model(
        config = model_config,
        max_seq_length = max_seq_length,
        load_in_4bit = load_in_4bit,
        hf_token = hf_token,
    ):
        typer.echo("Model load failed", err = True)
        raise typer.Exit(code = 1)

    messages = [{"role": "user", "content": prompt}]
    stream = inference_backend.generate_chat_response(
        messages = messages,
        system_prompt = system_prompt,
        temperature = temperature,
        top_p = top_p,
        top_k = top_k,
        max_new_tokens = max_new_tokens,
        repetition_penalty = repetition_penalty,
    )

    typer.echo("Assistant:", nl = True)
    previous = ""
    for chunk in stream:
        delta = chunk[len(previous) :]
        if delta:
            sys.stdout.write(delta)
            sys.stdout.flush()
        previous = chunk
    sys.stdout.write("\n")
    sys.stdout.flush()
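The print loop in inference() assumes generate_chat_response yields cumulative text: each chunk contains everything generated so far, so only the new suffix is written. A self-contained sketch of the same delta logic, with a stand-in generator in place of the real backend:

import sys

def fake_stream():
    # Stand-in for generate_chat_response: yields cumulative text.
    text = ""
    for piece in ("Hel", "lo, ", "world!"):
        text += piece
        yield text

previous = ""
for chunk in fake_stream():
    delta = chunk[len(previous) :]  # only the newly generated suffix
    if delta:
        sys.stdout.write(delta)
        sys.stdout.flush()
    previous = chunk
sys.stdout.write("\n")  # prints "Hello, world!" incrementally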
cli/commands/studio.py (new file, 374 lines)

@@ -0,0 +1,374 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

import os
import platform
import subprocess
import sys
import time
from pathlib import Path
from typing import Optional
import typer

studio_app = typer.Typer(help = "Unsloth Studio commands.")

STUDIO_HOME = Path.home() / ".unsloth" / "studio"

# __file__ is cli/commands/studio.py — two parents up is the package root
# (either site-packages or the repo root for editable installs).
_PACKAGE_ROOT = Path(__file__).resolve().parent.parent.parent


def _is_repo_root(path: Path) -> bool:
    """Check if a directory looks like the repo root (actual git clone, not site-packages)."""
    return (
        (path / ".git").exists()
        and (path / "pyproject.toml").is_file()
        and (
            (path / "studio" / "setup.sh").is_file()
            or (path / "studio" / "setup.ps1").is_file()
        )
    )


def _get_repo_root() -> Optional[Path]:
    """Find the git clone repo root.

    Used only by setup() — checks __file__ first (editable install),
    then walks CWD parents (wheel install, user is inside the clone).
    """
    # Check 1: __file__ is in the repo (editable install)
    if _is_repo_root(_PACKAGE_ROOT):
        return _PACKAGE_ROOT
    # Check 2: CWD or any parent is the repo
    cwd = Path.cwd().resolve()
    for parent in (cwd, *cwd.parents):
        if _is_repo_root(parent):
            return parent
    return None


def _studio_venv_python() -> Optional[Path]:
    """Return the studio venv Python binary, or None if not set up."""
    if platform.system() == "Windows":
        p = STUDIO_HOME / ".venv" / "Scripts" / "python.exe"
    else:
        p = STUDIO_HOME / ".venv" / "bin" / "python"
    return p if p.is_file() else None


def _find_run_py() -> Optional[Path]:
    """Find studio/backend/run.py.

    No CWD dependency — works from any directory.
    Since studio/ is now a proper package (has __init__.py), it lives in
    site-packages after pip install, right next to cli/.
    """
    # 1. Relative to __file__ (site-packages or editable repo root)
    run_py = _PACKAGE_ROOT / "studio" / "backend" / "run.py"
    if run_py.is_file():
        return run_py
    # 2. Studio venv's site-packages (Linux + Windows layouts)
    for pattern in (
        "lib/python*/site-packages/studio/backend/run.py",
        "Lib/site-packages/studio/backend/run.py",
    ):
        for match in (STUDIO_HOME / ".venv").glob(pattern):
            return match
    return None


def _find_install_script() -> Optional[Path]:
    """Find studio/install_python_stack.py.

    No CWD dependency — works from any directory.
    """
    # 1. Relative to __file__ (site-packages or editable repo root)
    s = _PACKAGE_ROOT / "studio" / "install_python_stack.py"
    if s.is_file():
        return s
    # 2. Studio venv's site-packages
    for pattern in (
        "lib/python*/site-packages/studio/install_python_stack.py",
        "Lib/site-packages/studio/install_python_stack.py",
    ):
        for match in (STUDIO_HOME / ".venv").glob(pattern):
            return match
    return None


def _find_setup_script() -> Optional[Path]:
    """Find studio/setup.sh or studio/setup.ps1.

    No CWD dependency — works from any directory.
    """
    name = "setup.ps1" if platform.system() == "Windows" else "setup.sh"
    # 1. Relative to __file__ (site-packages or editable repo root)
    s = _PACKAGE_ROOT / "studio" / name
    if s.is_file():
        return s
    # 2. Studio venv's site-packages
    for pattern in (
        f"lib/python*/site-packages/studio/{name}",
        f"Lib/site-packages/studio/{name}",
    ):
        for match in (STUDIO_HOME / ".venv").glob(pattern):
            return match
    return None


# ── unsloth studio (server) ──────────────────────────────────────────


@studio_app.callback(invoke_without_command = True)
def studio_default(
    ctx: typer.Context,
    port: int = typer.Option(8000, "--port", "-p"),
    host: str = typer.Option("0.0.0.0", "--host", "-H"),
    frontend: Optional[Path] = typer.Option(None, "--frontend", "-f"),
    silent: bool = typer.Option(False, "--silent", "-q"),
):
    """Launch the Unsloth Studio server."""
    if ctx.invoked_subcommand is not None:
        return

    # Always use the studio venv if it exists and we're not already in it
    studio_venv_dir = STUDIO_HOME / ".venv"
    in_studio_venv = sys.prefix.startswith(str(studio_venv_dir))

    if not in_studio_venv:
        studio_python = _studio_venv_python()
        run_py = _find_run_py()
        if studio_python and run_py:
            if not silent:
                typer.echo("Launching with studio venv...")
            args = [
                str(studio_python),
                str(run_py),
                "--host",
                host,
                "--port",
                str(port),
            ]
            if frontend:
                args.extend(["--frontend", str(frontend)])
            if silent:
                args.append("--silent")
            os.execvp(str(studio_python), args)
        else:
            typer.echo("Studio not set up. Run 'unsloth studio setup' first.")
            raise typer.Exit(1)

    from studio.backend.run import run_server

    if not silent:
        from studio.backend.run import _resolve_external_ip

        display_host = _resolve_external_ip() if host == "0.0.0.0" else host
        typer.echo(f"Starting Unsloth Studio on http://{display_host}:{port}")

    run_server(
        host = host,
        port = port,
        frontend_path = frontend,
        silent = silent,
    )

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        typer.echo("\nShutting down...")
|
||||
|
||||
# ── unsloth studio setup ─────────────────────────────────────────────
|
||||
|
||||
|
||||
@studio_app.command()
|
||||
def setup():
|
||||
"""Run one-time Studio environment setup."""
|
||||
# If we're inside a git clone, use the full setup script (builds frontend, etc.)
|
||||
repo = _get_repo_root()
|
||||
if repo:
|
||||
_dev_setup(repo)
|
||||
else:
|
||||
_pip_setup()
|
||||
|
||||
|
||||
def _dev_setup(repo_root: Path):
|
||||
"""Git-clone: run setup.sh / setup.ps1."""
|
||||
studio_dir = repo_root / "studio"
|
||||
if platform.system() == "Windows":
|
||||
script = studio_dir / "setup.ps1"
|
||||
subprocess.run(
|
||||
["powershell", "-ExecutionPolicy", "Bypass", "-File", str(script)],
|
||||
check = True,
|
||||
)
|
||||
else:
|
||||
script = studio_dir / "setup.sh"
|
||||
subprocess.run(["bash", str(script)], check = True)
|
||||
|
||||
|
||||
def _pip_setup():
|
||||
"""Pip-install: create studio venv, install all deps, build extras."""
|
||||
import venv as _venv
|
||||
|
||||
venv_dir = STUDIO_HOME / ".venv"
|
||||
venv_t5_dir = STUDIO_HOME / ".venv_t5"
|
||||
|
||||
if platform.system() == "Windows":
|
||||
venv_python = venv_dir / "Scripts" / "python.exe"
|
||||
venv_pip = venv_dir / "Scripts" / "pip.exe"
|
||||
else:
|
||||
venv_python = venv_dir / "bin" / "python"
|
||||
venv_pip = venv_dir / "bin" / "pip"
|
||||
|
||||
typer.echo("Setting up Unsloth Studio...")
|
||||
|
||||
# 1. Create venv
|
||||
if not venv_python.is_file():
|
||||
typer.echo(f" Creating venv at {venv_dir}...")
|
||||
STUDIO_HOME.mkdir(parents = True, exist_ok = True)
|
||||
_venv.create(str(venv_dir), with_pip = True)
|
||||
|
||||
# 2. Install all Python deps via install_python_stack.py
|
||||
install_script = _find_install_script()
|
||||
if install_script:
|
||||
typer.echo(" Installing Python dependencies...")
|
||||
subprocess.run([str(venv_python), str(install_script)], check = True)
|
||||
else:
|
||||
typer.echo("Error: Could not find install_python_stack.py")
|
||||
raise typer.Exit(1)
|
||||
|
||||
# 3. Pre-install transformers 5.x overlay
|
||||
if venv_t5_dir.is_dir() and any(venv_t5_dir.iterdir()):
|
||||
typer.echo(f" Transformers 5.x overlay already at {venv_t5_dir}")
|
||||
else:
|
||||
typer.echo(" Installing transformers 5.x overlay...")
|
||||
venv_t5_dir.mkdir(parents = True, exist_ok = True)
|
||||
subprocess.run(
|
||||
[
|
||||
str(venv_pip),
|
||||
"install",
|
||||
"--target",
|
||||
str(venv_t5_dir),
|
||||
"--no-deps",
|
||||
"transformers==5.2.0",
|
||||
],
|
||||
check = True,
|
||||
)
|
||||
subprocess.run(
|
||||
[
|
||||
str(venv_pip),
|
||||
"install",
|
||||
"--target",
|
||||
str(venv_t5_dir),
|
||||
"--no-deps",
|
||||
"huggingface_hub==1.3.0",
|
||||
],
|
||||
check = True,
|
||||
)
|
||||
typer.echo(f" Installed to {venv_t5_dir}")
|
||||
|
||||
# 4. Build llama.cpp
|
||||
_build_llama_cpp()
|
||||
|
||||
typer.echo("")
|
||||
typer.echo("Setup complete! Run 'unsloth studio' to start.")
|
||||
|
||||
|
||||
def _build_llama_cpp():
|
||||
"""Build llama.cpp at ~/.unsloth/llama.cpp/."""
|
||||
import shutil
|
||||
|
||||
unsloth_home = Path.home() / ".unsloth"
|
||||
llama_dir = unsloth_home / "llama.cpp"
|
||||
|
||||
if not shutil.which("cmake"):
|
||||
typer.echo(" cmake not found — skipping llama.cpp build")
|
||||
return
|
||||
if not shutil.which("git"):
|
||||
typer.echo(" git not found — skipping llama.cpp build")
|
||||
return
|
||||
|
||||
typer.echo(" Building llama.cpp for GGUF inference...")
|
||||
|
||||
if llama_dir.exists():
|
||||
shutil.rmtree(llama_dir)
|
||||
unsloth_home.mkdir(parents = True, exist_ok = True)
|
||||
|
||||
result = subprocess.run(
|
||||
[
|
||||
"git",
|
||||
"clone",
|
||||
"--depth",
|
||||
"1",
|
||||
"https://github.com/ggml-org/llama.cpp.git",
|
||||
str(llama_dir),
|
||||
],
|
||||
stdout = subprocess.PIPE,
|
||||
stderr = subprocess.STDOUT,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
typer.echo(" Failed to clone llama.cpp")
|
||||
return
|
||||
|
||||
cmake_args = []
|
||||
nvcc_path = shutil.which("nvcc")
|
||||
if not nvcc_path and Path("/usr/local/cuda/bin/nvcc").is_file():
|
||||
nvcc_path = "/usr/local/cuda/bin/nvcc"
|
||||
if nvcc_path:
|
||||
typer.echo(f" Building with CUDA (nvcc: {nvcc_path})...")
|
||||
cmake_args.append("-DGGML_CUDA=ON")
|
||||
else:
|
||||
typer.echo(" Building CPU-only...")
|
||||
|
||||
build_dir = llama_dir / "build"
|
||||
result = subprocess.run(
|
||||
["cmake", "-S", str(llama_dir), "-B", str(build_dir)] + cmake_args,
|
||||
stdout = subprocess.PIPE,
|
||||
stderr = subprocess.STDOUT,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
typer.echo(" cmake configure failed")
|
||||
return
|
||||
|
||||
ncpu = str(os.cpu_count() or 4)
|
||||
result = subprocess.run(
|
||||
[
|
||||
"cmake",
|
||||
"--build",
|
||||
str(build_dir),
|
||||
"--config",
|
||||
"Release",
|
||||
"--target",
|
||||
"llama-server",
|
||||
f"-j{ncpu}",
|
||||
],
|
||||
stdout = subprocess.PIPE,
|
||||
stderr = subprocess.STDOUT,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
typer.echo(" llama-server build failed")
|
||||
return
|
||||
|
||||
subprocess.run(
|
||||
[
|
||||
"cmake",
|
||||
"--build",
|
||||
str(build_dir),
|
||||
"--config",
|
||||
"Release",
|
||||
"--target",
|
||||
"llama-quantize",
|
||||
f"-j{ncpu}",
|
||||
],
|
||||
stdout = subprocess.PIPE,
|
||||
stderr = subprocess.STDOUT,
|
||||
)
|
||||
|
||||
server_bin = build_dir / "bin" / "llama-server"
|
||||
if server_bin.is_file():
|
||||
typer.echo(f" llama-server built at {server_bin}")
|
||||
else:
|
||||
typer.echo(" llama-server binary not found after build")
|
||||
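Both studio commands above hinge on one venv re-execution trick: when the current interpreter is not the studio venv, the process replaces itself with that venv's Python via os.execvp. A distilled dry-run sketch of the check, printing the argv instead of exec'ing (the STUDIO_HOME location and the POSIX bin/ layout are assumptions for illustration; the real code also handles Windows):

import sys
from pathlib import Path

STUDIO_HOME = Path.home() / ".unsloth" / "studio"  # assumed location

def reexec_argv(run_py: Path, host: str, port: int):
    """Return the argv the CLI would exec, or None if already in the studio venv."""
    venv_dir = STUDIO_HOME / ".venv"
    if sys.prefix.startswith(str(venv_dir)):
        return None  # already inside the studio venv: run the server in-process
    python = venv_dir / "bin" / "python"  # "Scripts/python.exe" on Windows
    return [str(python), str(run_py), "--host", host, "--port", str(port)]

print(reexec_argv(Path("studio/backend/run.py"), "0.0.0.0", 8000))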
144 cli/commands/train.py Normal file

@ -0,0 +1,144 @@

# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

import time
from pathlib import Path
from typing import Optional

import typer

from cli.config import Config, load_config
from cli.options import add_options_from_config


@add_options_from_config(Config)
def train(
    config: Optional[Path] = typer.Option(
        None,
        "--config",
        "-c",
        help = "Path to YAML/JSON config file. CLI flags override config values.",
    ),
    hf_token: Optional[str] = typer.Option(
        None, "--hf-token", envvar = "HF_TOKEN", help = "Hugging Face token if needed."
    ),
    wandb_token: Optional[str] = typer.Option(
        None, "--wandb-token", envvar = "WANDB_API_KEY", help = "Weights & Biases API key."
    ),
    dry_run: bool = typer.Option(
        False,
        "--dry-run",
        help = "Show resolved config and exit without training.",
    ),
    config_overrides: dict = None,
):
    """Launch training using the existing Unsloth training backend."""
    try:
        cfg = load_config(config)
    except FileNotFoundError as e:
        typer.echo(f"Error: {e}", err = True)
        raise typer.Exit(code = 2)

    cfg.apply_overrides(**config_overrides)

    # CLI/env tokens take precedence over config
    # Handle case where typer.Option isn't resolved (decorator interaction)
    from typer.models import OptionInfo

    if isinstance(hf_token, OptionInfo):
        hf_token = None
    if isinstance(wandb_token, OptionInfo):
        wandb_token = None
    hf_token = hf_token or cfg.logging.hf_token
    wandb_token = wandb_token or cfg.logging.wandb_token

    if dry_run:
        import yaml

        data = cfg.model_dump()
        data["training"]["output_dir"] = str(data["training"]["output_dir"])
        typer.echo(yaml.dump(data, default_flow_style = False, sort_keys = False))
        raise typer.Exit(code = 0)

    if not cfg.model:
        typer.echo("Error: provide --model or set model in --config", err = True)
        raise typer.Exit(code = 2)

    if not cfg.data.dataset and not cfg.data.local_dataset:
        typer.echo(
            "Error: provide --dataset or --local-dataset (or via --config)", err = True
        )
        raise typer.Exit(code = 2)

    # Check if the model path is a LoRA adapter (has adapter_config.json)
    model_path = Path(cfg.model) if cfg.model else None
    model_is_lora = (
        model_path
        and model_path.is_dir()
        and (model_path / "adapter_config.json").exists()
    )
    use_lora = cfg.training.training_type.lower() == "lora"

    if model_is_lora and not use_lora:
        typer.echo(
            "Error: Cannot do full finetuning on a LoRA adapter. "
            "Use --training-type lora or provide a base model.",
            err = True,
        )
        raise typer.Exit(code = 2)

    from studio.backend.core.training.trainer import UnslothTrainer

    trainer = UnslothTrainer()

    # Load model (trainer.is_vlm is set after this)
    if not trainer.load_model(
        model_name = cfg.model,
        max_seq_length = cfg.training.max_seq_length,
        load_in_4bit = cfg.training.load_in_4bit if use_lora else False,
        hf_token = hf_token,
    ):
        typer.echo("Model load failed", err = True)
        raise typer.Exit(code = 1)

    is_vision = trainer.is_vlm

    if not trainer.prepare_model_for_training(**cfg.model_kwargs(use_lora, is_vision)):
        typer.echo("Model preparation failed", err = True)
        raise typer.Exit(code = 1)

    result = trainer.load_and_format_dataset(
        dataset_source = cfg.data.dataset or "",
        format_type = cfg.data.format_type,
        local_datasets = cfg.data.local_dataset,
    )
    if result is None:
        typer.echo("Dataset load failed", err = True)
        raise typer.Exit(code = 1)

    ds, eval_ds = result

    training_kwargs = cfg.training_kwargs()
    training_kwargs["wandb_token"] = wandb_token  # CLI/env takes precedence
    started = trainer.start_training(
        dataset = ds, eval_dataset = eval_ds, **training_kwargs
    )

    if not started:
        typer.echo("Training failed to start", err = True)
        raise typer.Exit(code = 1)

    try:
        while trainer.training_thread and trainer.training_thread.is_alive():
            time.sleep(1)
    except KeyboardInterrupt:
        typer.echo("Stopping training (Ctrl+C detected)...")
        trainer.stop_training()
    finally:
        if trainer.training_thread:
            trainer.training_thread.join()

    final = trainer.get_training_progress()
    if getattr(final, "error", None):
        typer.echo(f"Training error: {final.error}", err = True)
        raise typer.Exit(code = 1)
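The OptionInfo normalization in train() exists because Typer resolves typer.Option defaults only when the command is invoked through the CLI; calling the decorated function directly from Python leaves the raw OptionInfo sentinel as the argument value. A standalone demonstration of the gotcha (not project code):

import typer

def f(x: str = typer.Option(None, "--x")):
    return x

# Called directly, the default is the OptionInfo object, not None,
# which is why train() guards with isinstance(..., OptionInfo):
print(type(f()))  # <class 'typer.models.OptionInfo'>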
76 cli/commands/ui.py Normal file

@ -0,0 +1,76 @@

# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

import os
import sys
import time
from pathlib import Path
from typing import Optional

import typer


def ui(
    port: int = typer.Option(
        8000, "--port", "-p", help = "Port to run the UI server on."
    ),
    host: str = typer.Option(
        "0.0.0.0", "--host", "-H", help = "Host address to bind to."
    ),
    frontend: Optional[Path] = typer.Option(
        None, "--frontend", "-f", help = "Path to frontend build directory."
    ),
    silent: bool = typer.Option(
        False, "--silent", "-q", help = "Suppress startup messages."
    ),
):
    """Launch the Unsloth web UI backend server (alias for 'unsloth studio')."""
    from cli.commands.studio import _studio_venv_python, _find_run_py, STUDIO_HOME

    # Re-execute in studio venv if available and not already inside it
    studio_venv_dir = STUDIO_HOME / ".venv"
    in_studio_venv = sys.prefix.startswith(str(studio_venv_dir))

    if not in_studio_venv:
        studio_python = _studio_venv_python()
        run_py = _find_run_py()
        if studio_python and run_py:
            if not silent:
                typer.echo("Launching with studio venv...")
            args = [
                str(studio_python),
                str(run_py),
                "--host",
                host,
                "--port",
                str(port),
            ]
            if frontend:
                args.extend(["--frontend", str(frontend)])
            if silent:
                args.append("--silent")
            os.execvp(str(studio_python), args)
        else:
            typer.echo("Studio not set up. Run 'unsloth studio setup' first.")
            raise typer.Exit(1)

    from studio.backend.run import run_server

    if not silent:
        from studio.backend.run import _resolve_external_ip

        display_host = _resolve_external_ip() if host == "0.0.0.0" else host
        typer.echo(f"Starting Unsloth Studio on http://{display_host}:{port}")

    run_server(
        host = host,
        port = port,
        frontend_path = frontend,
        silent = silent,
    )

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        typer.echo("\nShutting down...")
149 cli/config.py Normal file

@ -0,0 +1,149 @@

# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

from pathlib import Path
from typing import Literal, Optional, List

import yaml
from pydantic import BaseModel, Field


class DataConfig(BaseModel):
    dataset: Optional[str] = None
    local_dataset: Optional[List[str]] = None
    format_type: Literal["auto", "alpaca", "chatml", "sharegpt"] = "auto"


class TrainingConfig(BaseModel):
    training_type: Literal["lora", "full"] = "lora"
    max_seq_length: int = 2048
    load_in_4bit: bool = True
    output_dir: Path = Path("./outputs")
    num_epochs: int = 3
    learning_rate: float = 2e-4
    batch_size: int = 2
    gradient_accumulation_steps: int = 4
    warmup_steps: int = 5
    max_steps: int = 0
    save_steps: int = 0
    weight_decay: float = 0.01
    random_seed: int = 3407
    packing: bool = False
    train_on_completions: bool = False
    gradient_checkpointing: Literal["unsloth", "true", "none"] = "unsloth"


class LoraConfig(BaseModel):
    lora_r: int = 64
    lora_alpha: int = 16
    lora_dropout: float = 0.0
    target_modules: str = "q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj"
    vision_all_linear: bool = False
    use_rslora: bool = False
    use_loftq: bool = False
    finetune_vision_layers: bool = True
    finetune_language_layers: bool = True
    finetune_attention_modules: bool = True
    finetune_mlp_modules: bool = True


class LoggingConfig(BaseModel):
    enable_wandb: bool = False
    wandb_project: str = "unsloth-training"
    wandb_token: Optional[str] = None
    enable_tensorboard: bool = False
    tensorboard_dir: str = "runs"
    hf_token: Optional[str] = None


class Config(BaseModel):
    model: Optional[str] = None
    data: DataConfig = Field(default_factory = DataConfig)
    training: TrainingConfig = Field(default_factory = TrainingConfig)
    lora: LoraConfig = Field(default_factory = LoraConfig)
    logging: LoggingConfig = Field(default_factory = LoggingConfig)

    def apply_overrides(self, **kwargs):
        """Apply CLI overrides by matching arg names to config fields."""
        for key, value in kwargs.items():
            if value is None:
                continue
            if hasattr(self, key):
                setattr(self, key, value)
            else:
                for section in (self.data, self.training, self.lora, self.logging):
                    if hasattr(section, key):
                        setattr(section, key, value)
                        break

    def model_kwargs(self, use_lora: bool, is_vision: bool) -> dict:
        """Return kwargs for trainer.prepare_model_for_training()."""
        # Determine target modules based on model type
        if use_lora and is_vision:
            # Vision models expect a string (e.g., "all-linear"); fall back to None to use trainer defaults
            target_modules = "all-linear" if self.lora.vision_all_linear else None
        else:
            parsed = [
                m.strip()
                for m in str(self.lora.target_modules).split(",")
                if m and m.strip()
            ]
            target_modules = parsed or None

        return {
            "use_lora": use_lora,
            "finetune_vision_layers": self.lora.finetune_vision_layers,
            "finetune_language_layers": self.lora.finetune_language_layers,
            "finetune_attention_modules": self.lora.finetune_attention_modules,
            "finetune_mlp_modules": self.lora.finetune_mlp_modules,
            "target_modules": target_modules,
            "lora_r": self.lora.lora_r,
            "lora_alpha": self.lora.lora_alpha,
            "lora_dropout": self.lora.lora_dropout,
            "use_gradient_checkpointing": self.training.gradient_checkpointing,
            "use_rslora": self.lora.use_rslora,
            "use_loftq": self.lora.use_loftq,
        }

    def training_kwargs(self) -> dict:
        """Return kwargs for trainer.start_training()."""
        return {
            "output_dir": str(self.training.output_dir),
            "num_epochs": self.training.num_epochs,
            "learning_rate": self.training.learning_rate,
            "batch_size": self.training.batch_size,
            "gradient_accumulation_steps": self.training.gradient_accumulation_steps,
            "warmup_steps": self.training.warmup_steps,
            "max_steps": self.training.max_steps,
            "save_steps": self.training.save_steps,
            "weight_decay": self.training.weight_decay,
            "random_seed": self.training.random_seed,
            "packing": self.training.packing,
            "train_on_completions": self.training.train_on_completions,
            "max_seq_length": self.training.max_seq_length,
            "enable_wandb": self.logging.enable_wandb,
            "wandb_project": self.logging.wandb_project,
            "wandb_token": self.logging.wandb_token,
            "enable_tensorboard": self.logging.enable_tensorboard,
            "tensorboard_dir": self.logging.tensorboard_dir,
        }


def load_config(path: Optional[Path]) -> Config:
    """Load config from YAML/JSON file, or return defaults if no path given."""
    if not path:
        return Config()

    path = Path(path)
    if not path.exists():
        raise FileNotFoundError(f"Config file not found: {path}")

    text = path.read_text(encoding = "utf-8")
    if path.suffix.lower() in {".yaml", ".yml"}:
        data = yaml.safe_load(text) or {}
    else:
        import json

        data = json.loads(text or "{}")

    return Config(**data)
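To make the schema concrete, here is a hypothetical round trip through load_config and apply_overrides (the file name and the model/dataset strings are placeholders, not values from this PR):

from pathlib import Path
from cli.config import load_config

Path("run.yaml").write_text(
    "model: unsloth/Llama-3.2-1B-Instruct\n"
    "data:\n"
    "  dataset: yahma/alpaca-cleaned\n"
    "training:\n"
    "  learning_rate: 1.0e-4\n",
    encoding = "utf-8",
)

cfg = load_config(Path("run.yaml"))
cfg.apply_overrides(batch_size = 8)   # flat CLI name, routed into cfg.training
assert cfg.training.batch_size == 8
assert cfg.training.learning_rate == 1e-4
assert cfg.lora.lora_r == 64          # untouched sections keep their defaults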
153 cli/options.py Normal file

@ -0,0 +1,153 @@

# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

"""Generate Typer CLI options from Pydantic models."""

import functools
import inspect
from pathlib import Path
from typing import Any, Callable, Optional, get_args, get_origin

import typer
from pydantic import BaseModel


def _python_name_to_cli_flag(name: str) -> str:
    """Convert python_name to --cli-flag."""
    return "--" + name.replace("_", "-")


def _unwrap_optional(annotation: Any) -> Any:
    """Unwrap Optional[X] to X."""
    origin = get_origin(annotation)
    if origin is not None:
        args = get_args(annotation)
        if type(None) in args:
            non_none = [a for a in args if a is not type(None)]
            if non_none:
                return non_none[0]
    return annotation


def _is_bool_field(annotation: Any) -> bool:
    """Check if field is a boolean (including Optional[bool])."""
    return _unwrap_optional(annotation) is bool


def _is_list_type(annotation: Any) -> bool:
    """Check if type is a List."""
    return get_origin(annotation) is list


def _get_python_type(annotation: Any) -> type:
    """Get the Python type for annotation."""
    unwrapped = _unwrap_optional(annotation)
    if unwrapped in (str, int, float, bool, Path):
        return unwrapped
    return str


def _collect_config_fields(config_class: type[BaseModel]) -> list[tuple[str, Any]]:
    """
    Collect all fields from a config class, flattening nested models. Returns list of
    (name, field_info) tuples. Raises ValueError on duplicate field names.
    """
    fields = []
    seen_names: set[str] = set()

    for name, field_info in config_class.model_fields.items():
        annotation = field_info.annotation
        # Nested models: flatten their fields into the top level
        if isinstance(annotation, type) and issubclass(annotation, BaseModel):
            for nested_name, nested_field in annotation.model_fields.items():
                if nested_name in seen_names:
                    raise ValueError(f"Duplicate field name '{nested_name}' in config")
                seen_names.add(nested_name)
                fields.append((nested_name, nested_field))
        else:
            if name in seen_names:
                raise ValueError(f"Duplicate field name '{name}' in config")
            seen_names.add(name)
            fields.append((name, field_info))
    return fields


def add_options_from_config(config_class: type[BaseModel]) -> Callable:
    """
    Decorator that adds CLI options for all fields in a Pydantic config model.

    The decorated function should declare a `config_overrides: dict = None` parameter
    which will receive a dict of all CLI-provided config values.
    """
    fields = _collect_config_fields(config_class)
    field_names = {
        name for name, field_info in fields if not _is_list_type(field_info.annotation)
    }

    def decorator(func: Callable) -> Callable:
        sig = inspect.signature(func)
        original_params = list(sig.parameters.values())
        original_param_names = {p.name for p in original_params}

        # Build new parameters: config fields first, then original params
        new_params = []

        for field_name, field_info in fields:
            # Skip fields already defined in function signature (e.g., with envvar)
            if field_name in original_param_names:
                continue
            annotation = field_info.annotation
            if _is_list_type(annotation):
                continue

            flag_name = _python_name_to_cli_flag(field_name)
            help_text = field_info.description or ""

            if _is_bool_field(annotation):
                default = typer.Option(
                    None,
                    f"{flag_name}/--no-{field_name.replace('_', '-')}",
                    help = help_text,
                )
                param = inspect.Parameter(
                    field_name,
                    inspect.Parameter.POSITIONAL_OR_KEYWORD,
                    default = default,
                    annotation = Optional[bool],
                )
            else:
                py_type = _get_python_type(annotation)
                default = typer.Option(None, flag_name, help = help_text)
                param = inspect.Parameter(
                    field_name,
                    inspect.Parameter.POSITIONAL_OR_KEYWORD,
                    default = default,
                    annotation = Optional[py_type],
                )
            new_params.append(param)

        # Add original params, excluding config_overrides (will be injected)
        for param in original_params:
            if param.name != "config_overrides":
                new_params.append(param)

        new_sig = sig.replace(parameters = new_params)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            config_overrides = {}
            for key in list(kwargs.keys()):
                if key in field_names:
                    if kwargs[key] is not None:
                        config_overrides[key] = kwargs[key]
                    # Only delete if not an explicitly declared parameter
                    if key not in original_param_names:
                        del kwargs[key]

            kwargs["config_overrides"] = config_overrides
            return func(*args, **kwargs)

        wrapper.__signature__ = new_sig
        return wrapper

    return decorator
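A sketch of how the decorator is consumed, inferred from the code above (the demo command and the sample invocation are illustrative only, not part of the PR):

import typer

from cli.config import Config
from cli.options import add_options_from_config

app = typer.Typer()

@app.command()
@add_options_from_config(Config)
def show(config_overrides: dict = None):
    # Every flat Config field becomes a flag (--model, --lora-r,
    # --packing/--no-packing, ...); provided values arrive here.
    typer.echo(config_overrides)

if __name__ == "__main__":
    app()

# $ python demo.py --model m --lora-r 32 --no-packing
# {'model': 'm', 'lora_r': 32, 'packing': False}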
pyproject.toml

@ -15,7 +15,7 @@ authors = [
    {name = "Unsloth AI team"},
]
maintainers = [
    {name = "Daniel Han", email = "danielhanchen@gmail.com"},
    {name = "Daniel Han", email = "daniel@unsloth.ai"},
    {name = "Michael Han", email = "info@unsloth.ai"},
]
classifiers = [

@ -24,12 +24,24 @@ classifiers = [
    "Environment :: GPU :: NVIDIA CUDA",
    "Topic :: Scientific/Engineering :: Artificial Intelligence",
]
dependencies = [
    "typer",
    "pydantic",
    "pyyaml",
    "nest-asyncio",
]

[project.scripts]
unsloth = "cli:app"

[tool.setuptools.dynamic]
version = {attr = "unsloth.models._utils.__version__"}

[tool.setuptools]
include-package-data = false
include-package-data = true

[tool.setuptools.package-data]
studio = ["frontend/dist/**/*"]

[tool.setuptools.packages.find]
exclude = ["images*", "tests*", "kernels/moe*"]
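The new [project.scripts] entry makes pip generate an `unsloth` executable that calls `app` in the cli package. The diff only pins the `cli:app` target; a minimal sketch of what cli/__init__.py presumably exposes (the subcommand wiring here is an assumption, not shown in this hunk):

import typer

app = typer.Typer(help = "Unsloth command line interface")

# Assumed wiring: subcommands such as studio, ui, and train registered on app,
# e.g. app.add_typer(studio_app, name = "studio")

if __name__ == "__main__":
    app()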
661 studio/LICENSE.AGPL-3.0 Normal file

@ -0,0 +1,661 @@

                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

  A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate.  Many developers of free software are heartened and
encouraged by the resulting cooperation.  However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

  The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community.  It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server.  Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

  An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals.  This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU Affero General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Remote Network Interaction; Use with the GNU General Public License.

  Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software.  This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time.  Such new versions
|
||||
will be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU Affero General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU Affero General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU Affero General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If your software can interact with users remotely through a computer
|
||||
network, you should also make sure that it provides a way for users to
|
||||
get its source. For example, if your program is a web application, its
|
||||
interface could display a "Source" link that leads users to an archive
|
||||
of the code. There are many ways you could offer source, and different
|
||||
solutions will be better for different programs; see section 13 for the
|
||||
specific requirements.
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU AGPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
111
studio/Unsloth_Studio_Colab.ipynb
Normal file
@ -0,0 +1,111 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f2b0c6a1",
   "metadata": {},
   "source": [
    "**License Notice**\n",
    "\n",
    "SPDX-License-Identifier: AGPL-3.0-only\n",
    "\n",
    "Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "447c1156",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ===========================================\n",
    "# ⚠️ GPU Check - Run This First!\n",
    "# ===========================================\n",
    "import torch\n",
    "\n",
    "print(\"🔍 Checking for GPU...\")\n",
    "if not torch.cuda.is_available():\n",
    "    print(\"❌ ERROR: No GPU detected!\")\n",
    "    print(\"\\n📋 To enable GPU:\")\n",
    "    print(\" 1. Go to: Runtime → Change runtime type\")\n",
    "    print(\" 2. Select: Hardware accelerator → GPU (T4 is free)\")\n",
    "    print(\" 3. Click: Save\")\n",
    "    print(\" 4. Restart and re-run all cells\")\n",
    "    raise RuntimeError(\"⛔ GPU required for Unsloth Studio\")\n",
    "else:\n",
    "    gpu_name = torch.cuda.get_device_name(0)\n",
    "    print(f\"✅ GPU detected: {gpu_name}\")\n",
    "    print(\" Ready to proceed!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f04a9b46",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ===========================================\n",
    "# GitHub Authentication (Private Repo)\n",
    "# ===========================================\n",
    "from getpass import getpass\n",
    "import os\n",
    "\n",
    "print(\"🔐 GitHub Token Required\")\n",
    "print(\"Get token: https://github.com/settings/tokens\")\n",
    "print(\"Scope needed: 'repo'\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "github_token = getpass(\"Enter GitHub Token: \")\n",
    "os.environ['GITHUB_TOKEN'] = github_token\n",
    "print(\"✅ Token stored\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27e68f91",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ===========================================\n",
    "# Setup: Clone repo and run setup\n",
    "# ===========================================\n",
    "\n",
    "import os\n",
    "github_token = os.environ['GITHUB_TOKEN']\n",
    "!git clone https://{github_token}@github.com/unslothai/new-ui-prototype.git\n",
    "%cd /content/new-ui-prototype\n",
    "\n",
    "# Run setup script\n",
    "!chmod +x setup.sh\n",
    "!./setup.sh"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "277e431e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ===========================================\n",
    "# Start Unsloth Studio\n",
    "# ===========================================\n",
    "import sys\n",
    "sys.path.insert(0, '/content/new-ui-prototype/studio/backend')\n",
    "\n",
    "from colab import start\n",
    "start()"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
2
studio/__init__.py
Normal file
@ -0,0 +1,2 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
2
studio/backend/__init__.py
Normal file
@ -0,0 +1,2 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
2
studio/backend/assets/__init__.py
Normal file
@ -0,0 +1,2 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
2
studio/backend/assets/configs/__init__.py
Normal file
@ -0,0 +1,2 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
42
studio/backend/assets/configs/full_finetune.yaml
Normal file
@ -0,0 +1,42 @@
model: unsloth/Qwen2.5-0.5B

data:
  dataset: tatsu-lab/alpaca
  format_type: auto

training:
  training_type: full
  max_seq_length: 2048
  load_in_4bit: false
  output_dir: outputs
  num_epochs: 1
  learning_rate: 0.0002
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 0
  save_steps: 0
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: "unsloth"

lora:
  lora_r: 64
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules: ""
  vision_all_linear: false
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: unsloth-training
  enable_tensorboard: false
  tensorboard_dir: runs
42
studio/backend/assets/configs/lora_text.yaml
Normal file
@ -0,0 +1,42 @@
model: unsloth/Qwen2.5-0.5B

data:
  dataset: tatsu-lab/alpaca
  format_type: auto

training:
  training_type: lora
  max_seq_length: 2048
  load_in_4bit: true
  output_dir: outputs
  num_epochs: 1
  learning_rate: 0.0002
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 0
  save_steps: 0
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: "unsloth"

lora:
  lora_r: 64
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules: "q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj"
  vision_all_linear: false
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: unsloth-training
  enable_tensorboard: false
  tensorboard_dir: runs
56
studio/backend/assets/configs/model_defaults/default.yaml
Normal file
@ -0,0 +1,56 @@
# Default model training parameters
# Used for models without specific configurations

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 5e-5
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_ratio: 0.1
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true


logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.7
  top_p: 0.95
  top_k: -1
  min_p: 0.01
@ -0,0 +1,43 @@
# Model defaults for unsloth/Qwen3-Embedding-0.6B
# Based on Qwen3_Embedding_(0_6B).py embedding notebook
# Also applies to: unsloth/Qwen3-Embedding-4B

training:
  max_seq_length: 512
  # num_epochs: 2
  num_epochs: 0
  learning_rate: 3e-5
  batch_size: 256
  gradient_accumulation_steps: 1
  warmup_ratio: 0.03
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: false
  optim: "adamw_8bit"
  lr_scheduler_type: "constant_with_warmup"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "embedding-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 50
@ -0,0 +1,39 @@
# Model defaults for unsloth/all-MiniLM-L6-v2
# Based on All_MiniLM_L6_v2.py embedding notebook

training:
  max_seq_length: 512
  # num_epochs: 2
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 256
  gradient_accumulation_steps: 1
  warmup_ratio: 0.03
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: false
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 64
  lora_alpha: 128
  lora_dropout: 0.0
  target_modules:
    - "value"
    - "key"
    - "dense"
    - "query"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "embedding-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 50
@ -0,0 +1,39 @@
# Model defaults for unsloth/bge-m3
# Based on BGE_M3.py embedding notebook

training:
  max_seq_length: 512
  # num_epochs: 2
  num_epochs: 0
  learning_rate: 3e-5
  batch_size: 256
  gradient_accumulation_steps: 1
  warmup_ratio: 0.03
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: false
  optim: "adamw_8bit"
  lr_scheduler_type: "constant_with_warmup"

lora:
  lora_r: 32
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "key"
    - "query"
    - "dense"
    - "value"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "embedding-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 50
@ -0,0 +1,42 @@
# Model defaults for unsloth/embeddinggemma-300m
# Based on EmbeddingGemma_(300M).py embedding notebook

training:
  max_seq_length: 1024
  # num_epochs: 1
  num_epochs: 0
  learning_rate: 2e-5
  batch_size: 64
  gradient_accumulation_steps: 2
  warmup_ratio: 0.03
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "embedding-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 5
@ -0,0 +1,38 @@
# Model defaults for unsloth/gte-modernbert-base
# Based on ModernBert.py embedding notebook

training:
  max_seq_length: 512
  # num_epochs: 2
  num_epochs: 0
  learning_rate: 3e-5
  batch_size: 256
  gradient_accumulation_steps: 1
  warmup_ratio: 0.03
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "constant_with_warmup"

lora:
  lora_r: 64
  lora_alpha: 128
  lora_dropout: 0.0
  target_modules:
    - "Wi"
    - "Wo"
    - "Wqkv"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "embedding-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 50
@ -0,0 +1,47 @@
# Model defaults for unsloth/ERNIE-4.5-21B-A3B-PT
# Based on ERNIE_4_5_21B_A3B_PT-Conversational.ipynb
# Also applies to: unsloth/ERNIE-4.5-21B-A3B-PT

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 4
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@ -0,0 +1,55 @@
# Model defaults for unsloth/ERNIE-4.5-VL-28B-A3B-PT
# Based on ERNIE_4_5_VL_28B_A3B_PT_Vision.ipynb
# Also applies to: unsloth/ERNIE-4.5-VL-28B-A3B-PT
# added inference parameters from unsloth notebook

training:
  trust_remote_code: true
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: true
  temperature: 1.5
  min_p: 0.1
@ -0,0 +1,47 @@
# Model defaults for tiiuae/Falcon-H1-0.5B-Instruct
# Based on Falcon_H1_(0.5B)-Alpaca.ipynb
# Also applies to: tiiuae/Falcon-H1-0.5B-Instruct, unsloth/Falcon-H1-0.5B-Instruct

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 8
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: false
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.1
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@ -0,0 +1,50 @@
# Model defaults for unsloth/codegemma-7b-bnb-4bit
# Based on CodeGemma_(7B)-Conversational.ipynb
# Also applies to: unsloth/codegemma-7b, google/codegemma-7b
# added inference parameters from Ollama

training:
  trust_remote_code: false
  max_seq_length: 4096
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0
  top_p: 0.9
@ -0,0 +1,53 @@
# Model defaults for unsloth/functiongemma-270m-it
# Based on FunctionGemma_(270M).ipynb
# Also applies to: unsloth/functiongemma-270m-it-unsloth-bnb-4bit, google/functiongemma-270m-it, unsloth/functiongemma-270m-it-unsloth-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 4096
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 4
  gradient_accumulation_steps: 2
  warmup_steps: 10
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 128
  lora_alpha: 256
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.0
  top_k: 64
  top_p: 0.95
  min_p: 0.0
@ -0,0 +1,46 @@
# Model defaults for unsloth/gemma-2-27b-bnb-4bit
# Based on Gemma2_(9B)-Alpaca.ipynb (same defaults for larger models)

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@ -0,0 +1,47 @@
# Model defaults for unsloth/gemma-2-2b
# Based on Gemma2_(2B)-Alpaca.ipynb
# Also applies to: unsloth/gemma-2-2b-bnb-4bit, google/gemma-2-2b

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@ -0,0 +1,53 @@
# Model defaults for unsloth/gemma-3-270m-it
# Based on Gemma3_(270M).ipynb
# Also applies to: unsloth/gemma-3-270m-it-unsloth-bnb-4bit, google/gemma-3-270m-it, unsloth/gemma-3-270m-it-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 5e-5
  batch_size: 4
  gradient_accumulation_steps: 1
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 128
  lora_alpha: 128
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.0
  top_k: 64
  top_p: 0.95
  min_p: 0.0
@ -0,0 +1,51 @@
# Model defaults for unsloth/gemma-3-27b-it
# Based on Gemma3_(27B)_A100-Conversational.ipynb
# Also applies to: unsloth/gemma-3-27b-it-unsloth-bnb-4bit, google/gemma-3-27b-it, unsloth/gemma-3-27b-it-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 8
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.0
  top_k: 64
  top_p: 0.95
  min_p: 0.0
@ -0,0 +1,51 @@
# Model defaults for unsloth/gemma-3-4b-it
# Based on Gemma3_(4B).ipynb
# Also applies to: unsloth/gemma-3-4b-it-unsloth-bnb-4bit, google/gemma-3-4b-it, unsloth/gemma-3-4b-it-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 8
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.0
  top_k: 64
  top_p: 0.95
  min_p: 0.0
@ -0,0 +1,51 @@
# Model defaults for unsloth/gemma-3-4b-pt
# Based on Gemma3_(4B)-Vision.ipynb
# Also applies to: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit, google/gemma-3-4b-pt, unsloth/gemma-3-4b-pt-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 2
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_ratio: 0.03
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: true
  optim: "adamw_torch_fused"
  lr_scheduler_type: "cosine"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.0
  top_k: 64
  top_p: 0.95
  min_p: 0.0
@ -0,0 +1,53 @@
# Model defaults for unsloth/gemma-3n-E4B-it
# Based on Gemma3N_(4B)-Conversational.ipynb
# Also applies to: unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit, google/gemma-3n-E4B-it, unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 1024
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 8
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

audio_input: true

inference:
  trust_remote_code: false
  temperature: 1.0
  top_k: 64
  top_p: 0.95
  min_p: 0.0
|
@ -0,0 +1,53 @@
|
|||
# Model defaults for unsloth/gemma-3n-E4B
|
||||
# Based on Gemma3N_(4B)-Vision.ipynb
|
||||
# Also applies to: unsloth/gemma-3n-E4B-unsloth-bnb-4bit, google/gemma-3n-E4B
|
||||
# added inference parameters from unsloth guides
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 2
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 1
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_ratio: 0.03
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: true
|
||||
optim: "adamw_torch_fused"
|
||||
lr_scheduler_type: "cosine"
|
||||
|
||||
lora:
|
||||
lora_r: 32
|
||||
lora_alpha: 32
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "all-linear"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
finetune_vision_layers: true
|
||||
finetune_language_layers: true
|
||||
finetune_attention_modules: true
|
||||
finetune_mlp_modules: true
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
audio_input: true
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 1.0
|
||||
top_k: 64
|
||||
top_p: 0.95
|
||||
min_p: 0.0
|
||||
|
||||
|
|
@ -0,0 +1,52 @@
|
|||
# Model defaults for unsloth/gpt-oss-120b
|
||||
# Based on gpt-oss-(120B)_A100-Fine-tuning.ipynb
|
||||
# Also applies to: openai/gpt-oss-120b, unsloth/gpt-oss-120b-unsloth-bnb-4bit
|
||||
# added inference parameters from unsloth guides
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 4096
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 4
|
||||
gradient_accumulation_steps: 1
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 16
|
||||
lora_alpha: 32
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 1.0
|
||||
top_p: 1.0
|
||||
top_k: 0
|
||||
|
||||
|
|
@ -0,0 +1,52 @@
|
|||
# Model defaults for unsloth/gpt-oss-20b
|
||||
# Based on gpt-oss-(20B)-Fine-tuning.ipynb
|
||||
# Also applies to: openai/gpt-oss-20b, unsloth/gpt-oss-20b-unsloth-bnb-4bit, unsloth/gpt-oss-20b-BF16
|
||||
# added inference parameters from unsloth guides
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 1024
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 1
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 8
|
||||
lora_alpha: 16
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 1.0
|
||||
top_p: 1.0
|
||||
top_k: 0
|
||||
|
||||
|
|
@ -0,0 +1,54 @@
|
|||
# Model defaults for unsloth/granite-4.0-350m
|
||||
# Based on Granite4.0_350M.ipynb
|
||||
# Also applies to: ibm-granite/granite-4.0-350m, unsloth/granite-4.0-350m-bnb-4bit
|
||||
# added inference parameters from unsloth guides
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 2
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 32
|
||||
lora_alpha: 32
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
- "shared_mlp.input_linear"
|
||||
- "shared_mlp.output_linear"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 0.0
|
||||
top_p: 1.0
|
||||
top_k: 0
|
||||
|
||||
|
|
@ -0,0 +1,54 @@
|
|||
# Model defaults for unsloth/granite-4.0-h-micro
|
||||
# Based on Granite4.0.ipynb
|
||||
# Also applies to: ibm-granite/granite-4.0-h-micro, unsloth/granite-4.0-h-micro-bnb-4bit, unsloth/granite-4.0-h-micro-unsloth-bnb-4bit
|
||||
# added inference parameters from unsloth guides
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 2
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 32
|
||||
lora_alpha: 32
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
- "shared_mlp.input_linear"
|
||||
- "shared_mlp.output_linear"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 0.0
|
||||
top_p: 1.0
|
||||
top_k: 0
|
||||
|
||||
|
|
@ -0,0 +1,49 @@
|
|||
# Model defaults for unsloth/Llama-3.2-11B-Vision-Instruct
|
||||
# Based on Llama3.2_(11B)-Vision.ipynb
|
||||
# Also applies to: unsloth/Llama-3.2-11B-Vision-Instruct-unsloth-bnb-4bit, meta-llama/Llama-3.2-11B-Vision-Instruct, unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit
|
||||
# added inference parameters from unsloth notebook
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 2
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 16
|
||||
lora_alpha: 16
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "all-linear"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
finetune_vision_layers: true
|
||||
finetune_language_layers: true
|
||||
finetune_attention_modules: true
|
||||
finetune_mlp_modules: true
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 1.5
|
||||
min_p: 0.1
|
||||
|
||||
|
|
@ -0,0 +1,47 @@
|
|||
# Model defaults for unsloth/Llama-3.2-1B-Instruct
|
||||
# Based on Llama3.2_(1B)-RAFT.ipynb
|
||||
# Also applies to: unsloth/Llama-3.2-1B-Instruct-unsloth-bnb-4bit, meta-llama/Llama-3.2-1B-Instruct, unsloth/Llama-3.2-1B-Instruct-bnb-4bit, RedHatAI/Llama-3.2-1B-Instruct-FP8, unsloth/Llama-3.2-1B-Instruct-FP8-Block, unsloth/Llama-3.2-1B-Instruct-FP8-Dynamic
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 5
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-5
|
||||
batch_size: 1
|
||||
gradient_accumulation_steps: 8
|
||||
warmup_steps: 0
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.01
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: true
|
||||
optim: "adamw_torch"
|
||||
lr_scheduler_type: "cosine"
|
||||
|
||||
lora:
|
||||
lora_r: 16
|
||||
lora_alpha: 16
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
|
|
@ -0,0 +1,51 @@
|
|||
# Model defaults for unsloth/Llama-3.2-3B-Instruct
|
||||
# Based on Llama3.2_(1B_and_3B)-Conversational.ipynb
|
||||
# Also applies to: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit, meta-llama/Llama-3.2-3B-Instruct, unsloth/Llama-3.2-3B-Instruct-bnb-4bit, RedHatAI/Llama-3.2-3B-Instruct-FP8, unsloth/Llama-3.2-3B-Instruct-FP8-Block, unsloth/Llama-3.2-3B-Instruct-FP8-Dynamic
|
||||
# added inference parameters from unsloth notebook
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 2
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 16
|
||||
lora_alpha: 16
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 1.5
|
||||
min_p: 0.1
|
||||
|
||||
|
|
@ -0,0 +1,51 @@
|
|||
# Model defaults for unsloth/Llama-3.3-70B-Instruct
|
||||
# Based on Llama3.3_(70B)_A100-Conversational.ipynb
|
||||
# Also applies to: unsloth/Llama-3.3-70B-Instruct-unsloth-bnb-4bit, meta-llama/Llama-3.3-70B-Instruct, unsloth/Llama-3.3-70B-Instruct-bnb-4bit, RedHatAI/Llama-3.3-70B-Instruct-FP8, unsloth/Llama-3.3-70B-Instruct-FP8-Block, unsloth/Llama-3.3-70B-Instruct-FP8-Dynamic
|
||||
# added inference parameters from unsloth notebook
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 2
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 16
|
||||
lora_alpha: 16
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
temperature: 1.5
|
||||
min_p: 0.1
|
||||
|
||||
|
|
@ -0,0 +1,47 @@
|
|||
# Model defaults for unsloth/Meta-Llama-3.1-70B-bnb-4bit
|
||||
# Based on Llama3.1_(8B)-Alpaca.ipynb
|
||||
# Also applies to: unsloth/Meta-Llama-3.1-8B-bnb-4bit, unsloth/Meta-Llama-3.1-8B-unsloth-bnb-4bit, meta-llama/Meta-Llama-3.1-8B, unsloth/Meta-Llama-3.1-8B, unsloth/Meta-Llama-3.1-70B, meta-llama/Meta-Llama-3.1-70B, unsloth/Meta-Llama-3.1-405B-bnb-4bit, meta-llama/Meta-Llama-3.1-405B
|
||||
|
||||
training:
|
||||
trust_remote_code: false
|
||||
max_seq_length: 2048
|
||||
# num_epochs: 4
|
||||
num_epochs: 0
|
||||
learning_rate: 2e-4
|
||||
batch_size: 2
|
||||
gradient_accumulation_steps: 4
|
||||
warmup_steps: 5
|
||||
max_steps: 30
|
||||
save_steps: 30
|
||||
weight_decay: 0.001
|
||||
random_seed: 3407
|
||||
packing: false
|
||||
train_on_completions: true
|
||||
gradient_checkpointing: "unsloth"
|
||||
optim: "adamw_8bit"
|
||||
lr_scheduler_type: "linear"
|
||||
|
||||
lora:
|
||||
lora_r: 16
|
||||
lora_alpha: 16
|
||||
lora_dropout: 0.0
|
||||
target_modules:
|
||||
- "q_proj"
|
||||
- "k_proj"
|
||||
- "v_proj"
|
||||
- "o_proj"
|
||||
- "gate_proj"
|
||||
- "up_proj"
|
||||
- "down_proj"
|
||||
use_rslora: false
|
||||
use_loftq: false
|
||||
|
||||
logging:
|
||||
enable_wandb: false
|
||||
wandb_project: "llm-finetuning"
|
||||
enable_tensorboard: false
|
||||
tensorboard_dir: "runs"
|
||||
log_frequency: 10
|
||||
|
||||
inference:
|
||||
trust_remote_code: false
|
||||
|
|
@ -0,0 +1,47 @@
# Model defaults for unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
# Based on Llama3.1_(8B)-Inference.ipynb
# Also applies to: "unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "meta-llama/Meta-Llama-3.1-8B-Instruct", "unsloth/Meta-Llama-3.1-8B-Instruct","RedHatAI/Llama-3.1-8B-Instruct-FP8","unsloth/Llama-3.1-8B-Instruct-FP8-Block","unsloth/Llama-3.1-8B-Instruct-FP8-Dynamic"

training:
  trust_remote_code: false
  max_seq_length: 8192
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@ -0,0 +1,47 @@
# Model defaults for unsloth/llama-3-8b-Instruct-bnb-4bit
# Based on Llama3_(8B)-Conversational.ipynb
# Also applies to: unsloth/llama-3-8b-Instruct, meta-llama/Meta-Llama-3-8B-Instruct

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@ -0,0 +1,47 @@
# Model defaults for unsloth/llama-3-8b-bnb-4bit
# Based on Llama3_(8B)-Alpaca.ipynb
# Also applies to: unsloth/llama-3-8b, meta-llama/Meta-Llama-3-8B

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,46 @@
# Model defaults for unsloth/Llasa-3B
# Based on Llasa_TTS_(3B).ipynb and Llasa_TTS_(1B).ipynb
# Also applies to: HKUSTAudio/Llasa-1B
# added inference parameters from unsloth notebook

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 5e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 128
  lora_alpha: 128
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "v_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.2
  top_p: 1.2
@@ -0,0 +1,56 @@
# Model defaults for unsloth/Magistral-Small-2509
# Based on Magistral_(24B)-Reasoning-Conversational.ipynb
# Also applies to: mistralai/Magistral-Small-2509, unsloth/Magistral-Small-2509-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.7
  min_p: 0.01
  top_p: 0.95
@@ -0,0 +1,55 @@
# Model defaults for unsloth/Ministral-3-3B-Instruct-2512
# Based on Ministral_3_VL_(3B)_Vision.ipynb
# Also applies to: unsloth/Ministral-3-3B-Instruct-2512
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 4
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.15
  top_p: default
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Mistral-Nemo-Base-2407-bnb-4bit
# Based on Mistral_Nemo_(12B)-Alpaca.ipynb
# Also applies to: "unsloth/Mistral-Nemo-Base-2407", "mistralai/Mistral-Nemo-Base-2407", "unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "unsloth/Mistral-Nemo-Instruct-2407", "mistralai/Mistral-Nemo-Instruct-2407"

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Mistral-Small-Instruct-2409
# Based on Mistral_Small_(22B)-Alpaca.ipynb
# Also applies to: unsloth/Mistral-Small-Instruct-2409-bnb-4bit, mistralai/Mistral-Small-Instruct-2409

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,49 @@
# Model defaults for unsloth/Pixtral-12B-2409
# Based on Pixtral_(12B)-Vision.ipynb
# Also applies to: unsloth/Pixtral-12B-2409-unsloth-bnb-4bit, mistralai/Pixtral-12B-2409, unsloth/Pixtral-12B-2409-bnb-4bit
# added inference parameters from unsloth notebook

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "paged_adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 8
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: false
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.5
  min_p: 0.1
@@ -0,0 +1,47 @@
# Model defaults for unsloth/mistral-7b-instruct-v0.3-bnb-4bit
# Based on Mistral_v0.3_(7B)-Conversational.ipynb
# Also applies to: unsloth/mistral-7b-instruct-v0.3, mistralai/Mistral-7B-Instruct-v0.3

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,46 @@
# Model defaults for unsloth/mistral-7b-v0.3-bnb-4bit
# Based on Mistral_v0.3_(7B)-Alpaca.ipynb
# Also applies to: "unsloth/mistral-7b-v0.3", "mistralai/Mistral-7B-v0.3"

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,51 @@
# Model defaults for OuteAI/Llama-OuteTTS-1.0-1B
# Based on Oute_TTS_(1B).ipynb
# Also applies to: OuteAI/Llama-OuteTTS-1.0-1B
# added inference parameters from unsloth notebook

audio_type: dac

training:
  trust_remote_code: false
  eval_steps: 0
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 128
  lora_alpha: 128
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "v_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.4
  top_k: 40
  top_p: 0.9
  min_p: 0.05
@@ -0,0 +1,55 @@
# Model defaults for Spark-TTS-0.5B/LLM
# Based on Spark_TTS_(0_5B).ipynb
# Also applies to: Spark-TTS-0.5B/LLM
# added inference parameters from unsloth notebook

audio_type: bicodec

training:
  trust_remote_code: false
  eval_steps: 0
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 128
  lora_alpha: 128
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.8
  top_k: 50
  top_p: 1.0
@@ -0,0 +1,50 @@
# Model defaults for sesame/csm-1b
# Based on Sesame_CSM_(1B)-TTS.ipynb
# Also applies to: sesame/csm-1b

audio_type: csm

training:
  trust_remote_code: false
  eval_steps: 0
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,52 @@
# Model defaults for unsloth/GLM-4.7-Flash
# Based on GLM_Flash_A100(80GB).py
# Also applies to: unsloth/GLM-4.7-Flash-unsloth-bnb-4bit, unsloth/GLM-4.7-Flash-bnb-4bit, THUDM/GLM-4.7-Flash

training:
  trust_remote_code: true
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 4
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 60
  save_steps: 60
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
    - "out_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: true
  temperature: 0.7
  top_p: 0.8
  top_k: 20
@@ -0,0 +1,45 @@
# Model defaults for unsloth/LFM2-1.2B
# Based on Liquid_LFM2_(1.2B)-Conversational.ipynb
# Also applies to: unsloth/LFM2-1.2B
# added inference parameters from unsloth notebook

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.3
  min_p: 0.15
@@ -0,0 +1,53 @@
# Model defaults for unsloth/Nemotron-3-Nano-30B-A3B
# Based on Nemotron-3-Nano-30B-A3B_A100.ipynb
# Also applies to: unsloth/Nemotron-3-Nano-30B-A3B
# added inference parameters from unsloth guides

training:
  trust_remote_code: true
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 4
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 8
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
    - "in_proj"
    - "out_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: true
  temperature: 1.0
  top_p: 1.0
@@ -0,0 +1,55 @@
# Model defaults for unsloth/PaddleOCR-VL
# Based on Paddle_OCR_(1B)_Vision.ipynb
# Also applies to: unsloth/PaddleOCR-VL
# added inference parameters from unsloth notebook

training:
  trust_remote_code: true
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 5e-5
  batch_size: 4
  gradient_accumulation_steps: 2
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 64
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: true
  temperature: 1.5
  min_p: 0.1
@@ -0,0 +1,46 @@
# Model defaults for answerdotai/ModernBERT-large
# Based on bert_classification.ipynb

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 1
  num_epochs: 0
  learning_rate: 5e-5
  batch_size: 32
  gradient_accumulation_steps: 1
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,54 @@
# Model defaults for unsloth/orpheus-3b-0.1-ft
# Based on Orpheus_(3B)-TTS.ipynb
# Also applies to: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit, canopylabs/orpheus-3b-0.1-ft, unsloth/orpheus-3b-0.1-ft-bnb-4bit
# added inference parameters from unsloth notebook

audio_type: snac

training:
  trust_remote_code: false
  eval_steps: 0
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 64
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_p: 0.95
@@ -0,0 +1,47 @@
# Model defaults for unsloth/tinyllama
# Based on TinyLlama_(1.1B)-Alpaca.ipynb
# Also applies to: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T

training:
  trust_remote_code: false
  max_seq_length: 4096
  # num_epochs: 1
  num_epochs: 0
  learning_rate: 2e-5
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_ratio: 0.1
  max_steps: 30
  save_steps: 30
  weight_decay: 0.1
  random_seed: 3407
  packing: true
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,46 @@
# Model defaults for unsloth/whisper-large-v3
# Based on Whisper.ipynb
# Also applies to: unsloth/whisper-large-v3, openai/whisper-large-v3

audio_type: whisper
audio_input: true

training:
  trust_remote_code: false
  eval_steps: 5
  max_seq_length: 448
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 1e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 64
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "v_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Phi-3-medium-4k-instruct
# Based on Phi_3_Medium-Conversational.ipynb
# Also applies to: "unsloth/Phi-3-medium-4k-instruct-bnb-4bit", "microsoft/Phi-3-medium-4k-instruct"

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Phi-3.5-mini-instruct
# Based on Phi_3.5_Mini-Conversational.ipynb
# Also applies to: "unsloth/Phi-3.5-mini-instruct-bnb-4bit", "microsoft/Phi-3.5-mini-instruct"

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,51 @@
# Model defaults for unsloth/Phi-4
# Based on Phi_4-Conversational.ipynb
# Also applies to: unsloth/phi-4-unsloth-bnb-4bit, microsoft/phi-4, unsloth/phi-4-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.8
  top_p: 0.95
@@ -0,0 +1,53 @@
# Model defaults for imdatta0/tiny_qwen3_moe_2.8B_0.7B
# Based on TinyQwen3_MoE.py
# Dummy model of qwen3moe architecture created to fit in T4
# MoE model - includes gate_up_proj for MoE layers

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 1
  warmup_steps: 5
  max_steps: 50
  save_steps: 50
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
    - "gate_up_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_k: 20
  top_p: 0.95
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Qwen2-7B
# Based on Qwen2_(7B)-Alpaca.ipynb
# Also applies to: unsloth/Qwen2-7B-bnb-4bit, Qwen/Qwen2-7B

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,49 @@
# Model defaults for unsloth/Qwen2-VL-7B-Instruct
# Based on Qwen2_VL_(7B)-Vision.ipynb
# Also applies to: unsloth/Qwen2-VL-7B-Instruct-unsloth-bnb-4bit, Qwen/Qwen2-VL-7B-Instruct, unsloth/Qwen2-VL-7B-Instruct-bnb-4bit
# added inference parameters from unsloth notebook

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.5
  min_p: 0.1
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Qwen2.5-1.5B-Instruct
# Based on nemo_gym_sudoku.ipynb
# Also applies to: unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit, Qwen/Qwen2.5-1.5B-Instruct, unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit

training:
  trust_remote_code: false
  max_seq_length: 4096
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 1e-5
  batch_size: 1
  gradient_accumulation_steps: 64
  warmup_ratio: 0.1
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 42
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 4
  lora_alpha: 8
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Qwen2.5-7B
# Based on Qwen2.5_(7B)-Alpaca.ipynb
# Also applies to: unsloth/Qwen2.5-7B-unsloth-bnb-4bit, Qwen/Qwen2.5-7B, unsloth/Qwen2.5-7B-bnb-4bit

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Qwen2.5-Coder-1.5B-Instruct
# Based on Qwen2.5_Coder_(1.5B)-Tool_Calling.ipynb
# Also applies to: unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit, Qwen/Qwen2.5-Coder-1.5B-Instruct

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,51 @@
# Model defaults for unsloth/Qwen2.5-Coder-14B-Instruct
# Based on Qwen2.5_Coder_(14B)-Conversational.ipynb
# Also applies to: unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit, Qwen/Qwen2.5-Coder-14B-Instruct
# added inference parameters from unsloth notebook

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "paged_adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.5
  min_p: 0.1
@@ -0,0 +1,47 @@
# Model defaults for unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit
# Based on CodeForces-cot-Finetune_for_Reasoning_on_CodeForces.ipynb
# Also applies to: unsloth/Qwen2.5-Coder-7B-Instruct, Qwen/Qwen2.5-Coder-7B-Instruct

training:
  trust_remote_code: false
  max_seq_length: 32768
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
@@ -0,0 +1,49 @@
# Model defaults for unsloth/Qwen2.5-VL-7B-Instruct-bnb-4bit
# Based on Qwen2.5_VL_(7B)-Vision.ipynb
# Also applies to: unsloth/Qwen2.5-VL-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct, unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit
# added inference parameters from unsloth notebook

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 1.5
  min_p: 0.1
@@ -0,0 +1,52 @@
# Model defaults for unsloth/Qwen3-0.6B
# Based on Qwen3_(0_6B)-Phone_Deployment.ipynb
# Also applies to: unsloth/Qwen3-0.6B-unsloth-bnb-4bit, Qwen/Qwen3-0.6B, unsloth/Qwen3-0.6B-bnb-4bit, Qwen/Qwen3-0.6B-FP8, unsloth/Qwen3-0.6B-FP8
# added inference parameters from Ollama

training:
  trust_remote_code: false
  max_seq_length: 1024
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 5e-5
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_k: 20
  top_p: 0.95
@@ -0,0 +1,51 @@
# Model defaults for unsloth/Qwen3-14B-Base
# Based on Qwen3_(14B)-Alpaca.ipynb
# Also applies to: unsloth/Qwen3-14B-Base, Qwen/Qwen3-14B-Base, unsloth/Qwen3-14B-Base-bnb-4bit
# added inference parameters from Ollama

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_k: 20
  top_p: 0.95
@@ -0,0 +1,52 @@
# Model defaults for unsloth/Qwen3-14B
# Based on Qwen3_(14B).ipynb
# Also applies to: unsloth/Qwen3-14B-unsloth-bnb-4bit, Qwen/Qwen3-14B, unsloth/Qwen3-14B-bnb-4bit, Qwen/Qwen3-14B-FP8, unsloth/Qwen3-14B-FP8
# added inference parameters from Ollama

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_k: 20
  top_p: 0.95
@@ -0,0 +1,53 @@
# Model defaults for unsloth/Qwen3-30B-A3B-Instruct-2507
# Based on Qwen3_MoE.py
# Also applies to: Qwen/Qwen3-30B-A3B-Instruct-2507, unsloth/Qwen3-30B-A3B-Instruct-2507-bnb-4bit
# MoE model - includes gate_up_proj for MoE layers

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 1
  gradient_accumulation_steps: 1
  warmup_steps: 5
  max_steps: 50
  save_steps: 50
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 64
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
    - "gate_up_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_k: 20
  top_p: 0.95
@@ -0,0 +1,52 @@
# Model defaults for unsloth/Qwen3-32B
# Based on Qwen3_(32B)_A100-Reasoning-Conversational.ipynb
# Also applies to: unsloth/Qwen3-32B-unsloth-bnb-4bit, Qwen/Qwen3-32B, unsloth/Qwen3-32B-bnb-4bit, Qwen/Qwen3-32B-FP8, unsloth/Qwen3-32B-FP8
# added inference parameters from Ollama

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_k: 20
  top_p: 0.95
@@ -0,0 +1,53 @@
# Model defaults for unsloth/Qwen3-4B-Instruct-2507
# Based on Qwen3_(4B)-Instruct.ipynb
# Also applies to: unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit, Qwen/Qwen3-4B-Instruct-2507, unsloth/Qwen3-4B-Instruct-2507-bnb-4bit, Qwen/Qwen3-4B-Instruct-2507-FP8, unsloth/Qwen3-4B-Instruct-2507-FP8
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.7
  top_p: 0.80
  top_k: 20
  min_p: 0.00
@@ -0,0 +1,53 @@
# Model defaults for unsloth/Qwen3-4B-Thinking-2507
# Based on Qwen3_(4B)-Thinking.ipynb
# Also applies to: unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit, Qwen/Qwen3-4B-Thinking-2507, unsloth/Qwen3-4B-Thinking-2507-bnb-4bit, Qwen/Qwen3-4B-Thinking-2507-FP8, unsloth/Qwen3-4B-Thinking-2507-FP8
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 32
  lora_alpha: 32
  lora_dropout: 0.0
  target_modules:
    - "q_proj"
    - "k_proj"
    - "v_proj"
    - "o_proj"
    - "gate_proj"
    - "up_proj"
    - "down_proj"
  use_rslora: false
  use_loftq: false

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.6
  top_p: 0.95
  top_k: 20
  min_p: 0.00
@@ -0,0 +1,50 @@
# Model defaults for unsloth/Qwen3-VL-8B-Instruct
# Based on Qwen3_VL_(8B)-Vision.ipynb
# Also applies to: Qwen/Qwen3-VL-8B-Instruct-FP8, unsloth/Qwen3-VL-8B-Instruct-FP8, unsloth/Qwen3-VL-8B-Instruct, Qwen/Qwen3-VL-8B-Instruct, unsloth/Qwen3-VL-8B-Instruct-bnb-4bit
# added inference parameters from unsloth guides

training:
  trust_remote_code: false
  max_seq_length: 2048
  # num_epochs: 4
  num_epochs: 0
  learning_rate: 2e-4
  batch_size: 2
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 30
  save_steps: 30
  weight_decay: 0.001
  random_seed: 3407
  packing: false
  train_on_completions: true
  gradient_checkpointing: "unsloth"
  optim: "adamw_8bit"
  lr_scheduler_type: "linear"

lora:
  lora_r: 16
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules:
    - "all-linear"
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: "llm-finetuning"
  enable_tensorboard: false
  tensorboard_dir: "runs"
  log_frequency: 10

inference:
  trust_remote_code: false
  temperature: 0.7
  top_p: 0.8
  top_k: 20
42 studio/backend/assets/configs/vision_lora.yaml Normal file
@@ -0,0 +1,42 @@
model: unsloth/Qwen2-VL-2B-Instruct-bnb-4bit

data:
  dataset: philschmid/amazon-product-descriptions-vlm
  format_type: auto

training:
  training_type: lora
  max_seq_length: 2048
  load_in_4bit: true
  output_dir: outputs
  num_epochs: 1
  learning_rate: 0.0002
  batch_size: 1
  gradient_accumulation_steps: 4
  warmup_steps: 5
  max_steps: 0
  save_steps: 0
  weight_decay: 0.01
  random_seed: 3407
  packing: false
  train_on_completions: false
  gradient_checkpointing: "unsloth"

lora:
  lora_r: 64
  lora_alpha: 16
  lora_dropout: 0.0
  target_modules: ""  # vision uses vision_all_linear by default
  vision_all_linear: true
  use_rslora: false
  use_loftq: false
  finetune_vision_layers: true
  finetune_language_layers: true
  finetune_attention_modules: true
  finetune_mlp_modules: true

logging:
  enable_wandb: false
  wandb_project: unsloth-training
  enable_tensorboard: false
  tensorboard_dir: runs
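For the vision config above, the lora fields line up with the keyword arguments unsloth's vision notebooks pass to FastVisionModel.get_peft_model. A rough sketch of that mapping, assuming the public unsloth API as used in the notebooks rather than Studio's actual wiring (which this diff does not show):

from unsloth import FastVisionModel

# training.load_in_4bit -> load_in_4bit
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Qwen2-VL-2B-Instruct-bnb-4bit",
    load_in_4bit = True,
)
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,   # lora.finetune_vision_layers
    finetune_language_layers   = True,   # lora.finetune_language_layers
    finetune_attention_modules = True,   # lora.finetune_attention_modules
    finetune_mlp_modules       = True,   # lora.finetune_mlp_modules
    r            = 64,                   # lora.lora_r
    lora_alpha   = 16,                   # lora.lora_alpha
    lora_dropout = 0.0,                  # lora.lora_dropout
    use_rslora   = False,                # lora.use_rslora
    random_state = 3407,                 # training.random_seed
)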
1288 studio/backend/assets/datasets/alpaca_unsloth.json Normal file
File diff suppressed because it is too large.
0 studio/backend/auth/.gitkeep Normal file
47 studio/backend/auth/__init__.py Normal file
@@ -0,0 +1,47 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

"""
Authentication module for JWT-based auth with SQLite storage.
"""

from .authentication import (
    create_access_token,
    create_refresh_token,
    refresh_access_token,
    get_current_subject,
    reload_secret,
)
from .storage import (
    is_initialized,
    create_initial_user,
    get_user_and_secret,
    load_jwt_secret,
    save_setup_token,
    consume_setup_token,
    has_pending_setup_token,
    save_refresh_token,
    verify_refresh_token,
    revoke_user_refresh_tokens,
)
from .hashing import hash_password, verify_password

__all__ = [
    "create_access_token",
    "create_refresh_token",
    "refresh_access_token",
    "get_current_subject",
    "reload_secret",
    "is_initialized",
    "create_initial_user",
    "get_user_and_secret",
    "load_jwt_secret",
    "save_setup_token",
    "consume_setup_token",
    "has_pending_setup_token",
    "save_refresh_token",
    "verify_refresh_token",
    "revoke_user_refresh_tokens",
    "hash_password",
    "verify_password",
]
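Downstream code would import these helpers from the package rather than its submodules. A tiny usage sketch; the import path assumes the backend is importable as studio.backend.auth, which this diff does not confirm:

# Import path is an assumption based on the file layout.
from studio.backend.auth import create_access_token, hash_password

salt, digest = hash_password("hunter2")   # returns (salt, hex_hash), per hashing.py
token = create_access_token("admin")      # signed HS256 JWT, 60-minute default expiry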
108 studio/backend/auth/authentication.py Normal file
|
|
@ -0,0 +1,108 @@
|
|||
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

import secrets
from datetime import datetime, timedelta, timezone
from typing import Optional

from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
import jwt

from .storage import load_jwt_secret, save_refresh_token, verify_refresh_token

ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 60
REFRESH_TOKEN_EXPIRE_DAYS = 7

# Load stable secret from SQLite (set during first-time setup)
# This will raise RuntimeError if auth hasn't been initialized yet
try:
    SECRET_KEY = load_jwt_secret()
except RuntimeError:
    # Fallback: use a temporary secret until setup is complete
    # This allows the app to start, but protected routes will fail until setup
    SECRET_KEY = secrets.token_urlsafe(64)

security = HTTPBearer()  # Reads Authorization: Bearer <token>


def create_access_token(
    subject: str,
    expires_delta: Optional[timedelta] = None,
) -> str:
    """
    Create a signed JWT for the given subject (e.g. username).

    Tokens are valid across restarts because SECRET_KEY is stored in SQLite.
    """
    to_encode = {"sub": subject}
    expire = datetime.now(timezone.utc) + (
        expires_delta or timedelta(minutes = ACCESS_TOKEN_EXPIRE_MINUTES)
    )
    to_encode.update({"exp": expire})
    return jwt.encode(to_encode, SECRET_KEY, algorithm = ALGORITHM)


def create_refresh_token(subject: str) -> str:
    """
    Create a random refresh token, store its hash in SQLite, and return it.

    Refresh tokens are opaque (not JWTs) and expire after REFRESH_TOKEN_EXPIRE_DAYS.
    """
    token = secrets.token_urlsafe(48)
    expires_at = datetime.now(timezone.utc) + timedelta(days = REFRESH_TOKEN_EXPIRE_DAYS)
    save_refresh_token(token, subject, expires_at.isoformat())
    return token


def refresh_access_token(refresh_token: str) -> Optional[str]:
    """
    Validate a refresh token and issue a new access token.

    The refresh token itself is NOT consumed — it stays valid until expiry.
    Returns a new access_token or None if the refresh token is invalid/expired.
    """
    username = verify_refresh_token(refresh_token)
    if username is None:
        return None
    return create_access_token(subject = username)


def reload_secret() -> None:
    """
    Reload the JWT secret from SQLite.

    Call this after setup to ensure new tokens use the persistent secret.
    """
    global SECRET_KEY
    SECRET_KEY = load_jwt_secret()


async def get_current_subject(
    credentials: HTTPAuthorizationCredentials = Depends(security),
) -> str:
    """
    FastAPI dependency to validate the JWT and return the subject.

    Use this as a dependency on routes that should be protected, e.g.:

        @router.get("/secure")
        async def secure_endpoint(current_subject: str = Depends(get_current_subject)):
            ...
    """
    token = credentials.credentials
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms = [ALGORITHM])
        subject: Optional[str] = payload.get("sub")
        if subject is None:
            raise HTTPException(
                status_code = status.HTTP_401_UNAUTHORIZED,
                detail = "Invalid token payload",
            )
        return subject
    except jwt.InvalidTokenError:
        raise HTTPException(
            status_code = status.HTTP_401_UNAUTHORIZED,
            detail = "Invalid or expired token",
        )
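Taken together, these helpers implement a two-token scheme: a 60-minute HS256 JWT plus a 7-day opaque refresh token whose hash lives in SQLite. A sketch of the lifecycle, assuming auth has already been initialized so load_jwt_secret() succeeds; "alice" is illustrative.

from auth.authentication import (
    create_access_token,
    create_refresh_token,
    refresh_access_token,
)

access = create_access_token("alice")    # short-lived JWT (60 min)
refresh = create_refresh_token("alice")  # opaque token, hashed into SQLite

# Later, when the access token expires, the client trades the refresh
# token for a new JWT; the refresh token itself stays valid for 7 days.
new_access = refresh_access_token(refresh)
assert new_access is not None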
43 studio/backend/auth/hashing.py Normal file
@@ -0,0 +1,43 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

"""
Password hashing utilities using PBKDF2.
"""

import hashlib
import hmac
import secrets
from typing import Tuple


def hash_password(password: str, salt: str | None = None) -> Tuple[str, str]:
    """
    Hash a password using PBKDF2-HMAC-SHA256.

    Returns (salt, hex_hash) tuple.
    """
    if salt is None:
        salt = secrets.token_hex(16)
    dk = hashlib.pbkdf2_hmac(
        "sha256",
        password.encode("utf-8"),
        salt.encode("utf-8"),
        100_000,  # 100k iterations
    )
    return salt, dk.hex()


def verify_password(password: str, salt: str, hashed: str) -> bool:
    """
    Verify a password against a stored salt and hash.

    Uses constant-time comparison to prevent timing attacks.
    """
    dk = hashlib.pbkdf2_hmac(
        "sha256",
        password.encode("utf-8"),
        salt.encode("utf-8"),
        100_000,
    )
    return hmac.compare_digest(dk.hex(), hashed)
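A round-trip sketch of this API: the (salt, hex_hash) pair returned by hash_password is what storage.py persists per user. The passwords here are illustrative.

from auth.hashing import hash_password, verify_password

salt, hashed = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, hashed)
assert not verify_password("wrong password", salt, hashed)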
263 studio/backend/auth/storage.py Normal file
@@ -0,0 +1,263 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0

"""
SQLite storage for authentication data (user credentials + JWT secret).
"""

import hashlib
import sqlite3
from datetime import datetime, timezone
from typing import Optional, Tuple

from utils.paths import auth_db_path, ensure_dir

DB_PATH = auth_db_path()


def _hash_token(token: str) -> str:
    """SHA-256 hash a setup token for safe storage."""
    return hashlib.sha256(token.encode("utf-8")).hexdigest()


def get_connection() -> sqlite3.Connection:
    """Get a connection to the auth database, creating tables if needed."""
    ensure_dir(DB_PATH.parent)
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS auth_user (
            id INTEGER PRIMARY KEY,
            username TEXT UNIQUE NOT NULL,
            password_salt TEXT NOT NULL,
            password_hash TEXT NOT NULL,
            jwt_secret TEXT NOT NULL
        );
        """
    )
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS setup_tokens (
            id INTEGER PRIMARY KEY,
            token_hash TEXT NOT NULL
        );
        """
    )
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS refresh_tokens (
            id INTEGER PRIMARY KEY,
            token_hash TEXT NOT NULL,
            username TEXT NOT NULL,
            expires_at TEXT NOT NULL
        );
        """
    )
    conn.commit()
    return conn


def is_initialized() -> bool:
    """Check if auth has been set up (user exists in DB)."""
    conn = get_connection()
    cur = conn.execute("SELECT COUNT(*) AS c FROM auth_user")
    row = cur.fetchone()
    conn.close()
    return bool(row["c"])


def create_initial_user(username: str, password: str, jwt_secret: str) -> None:
    """
    Create the initial admin user in the database.

    Raises sqlite3.IntegrityError if username already exists.
    """
    from .hashing import hash_password

    salt, pwd_hash = hash_password(password)
    conn = get_connection()
    try:
        conn.execute(
            """
            INSERT INTO auth_user (username, password_salt, password_hash, jwt_secret)
            VALUES (?, ?, ?, ?)
            """,
            (username, salt, pwd_hash, jwt_secret),
        )
        conn.commit()
    finally:
        conn.close()


def delete_user(username: str) -> None:
    """
    Delete a user from the database.

    Used for rollback when setup fails after user creation.
    """
    conn = get_connection()
    try:
        conn.execute("DELETE FROM auth_user WHERE username = ?", (username,))
        conn.commit()
    finally:
        conn.close()


def get_user_and_secret(username: str) -> Optional[Tuple[str, str, str]]:
    """
    Get user's password salt, hash, and JWT secret.

    Returns (password_salt, password_hash, jwt_secret) or None if user not found.
    """
    conn = get_connection()
    try:
        cur = conn.execute(
            """
            SELECT password_salt, password_hash, jwt_secret
            FROM auth_user
            WHERE username = ?
            """,
            (username,),
        )
        row = cur.fetchone()
        if not row:
            return None
        return row["password_salt"], row["password_hash"], row["jwt_secret"]
    finally:
        conn.close()


def load_jwt_secret() -> str:
    """
    Load the JWT secret from the database.

    Raises RuntimeError if auth is not initialized.
    """
    conn = get_connection()
    try:
        cur = conn.execute("SELECT jwt_secret FROM auth_user LIMIT 1")
        row = cur.fetchone()
        if not row:
            raise RuntimeError(
                "Auth is not initialized. Please set up a password first."
            )
        return row["jwt_secret"]
    finally:
        conn.close()


def save_setup_token(token: str) -> None:
    """
    Store a hashed setup token, replacing any existing one.
    """
    token_hash = _hash_token(token)
    conn = get_connection()
    try:
        conn.execute("DELETE FROM setup_tokens")
        conn.execute("INSERT INTO setup_tokens (token_hash) VALUES (?)", (token_hash,))
        conn.commit()
    finally:
        conn.close()


def consume_setup_token(token: str) -> bool:
    """
    Verify a setup token and delete it if valid.

    Returns True if the token was valid (and is now consumed), False otherwise.
    """
    token_hash = _hash_token(token)
    conn = get_connection()
    try:
        cur = conn.execute(
            "SELECT id FROM setup_tokens WHERE token_hash = ?", (token_hash,)
        )
        row = cur.fetchone()
        if row is None:
            return False
        conn.execute("DELETE FROM setup_tokens WHERE id = ?", (row["id"],))
        conn.commit()
        return True
    finally:
        conn.close()


def has_pending_setup_token() -> bool:
    """Check if a setup token is waiting to be consumed."""
    conn = get_connection()
    try:
        cur = conn.execute("SELECT COUNT(*) AS c FROM setup_tokens")
        row = cur.fetchone()
        return bool(row["c"])
    finally:
        conn.close()


def save_refresh_token(token: str, username: str, expires_at: str) -> None:
    """
    Store a hashed refresh token with its associated username and expiry.
    """
    token_hash = _hash_token(token)
    conn = get_connection()
    try:
        conn.execute(
            """
            INSERT INTO refresh_tokens (token_hash, username, expires_at)
            VALUES (?, ?, ?)
            """,
            (token_hash, username, expires_at),
        )
        conn.commit()
    finally:
        conn.close()


def verify_refresh_token(token: str) -> Optional[str]:
    """
    Verify a refresh token and return the username.

    Returns the username if valid and not expired, None otherwise.
    The token is NOT consumed — it stays valid until it expires.
    """
    token_hash = _hash_token(token)
    conn = get_connection()
    try:
        # Clean up any expired tokens while we're here
        conn.execute(
            "DELETE FROM refresh_tokens WHERE expires_at < ?",
            (datetime.now(timezone.utc).isoformat(),),
        )
        conn.commit()

        cur = conn.execute(
            """
            SELECT id, username, expires_at FROM refresh_tokens
            WHERE token_hash = ?
            """,
            (token_hash,),
        )
        row = cur.fetchone()
        if row is None:
            return None

        # Check expiry
        expires_at = datetime.fromisoformat(row["expires_at"])
        if datetime.now(timezone.utc) > expires_at:
            conn.execute("DELETE FROM refresh_tokens WHERE id = ?", (row["id"],))
            conn.commit()
            return None

        return row["username"]
    finally:
        conn.close()


def revoke_user_refresh_tokens(username: str) -> None:
    """Revoke all refresh tokens for a user (e.g. on logout)."""
    conn = get_connection()
    try:
        conn.execute("DELETE FROM refresh_tokens WHERE username = ?", (username,))
        conn.commit()
    finally:
        conn.close()
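A sketch of the one-shot setup-token flow these functions support: only the SHA-256 hash of the token ever touches the database, and consuming a token deletes it. Generating the token with secrets.token_urlsafe is an assumption here, chosen to match how the other tokens in this PR are made; the import path assumes studio/backend is on sys.path.

import secrets

from auth import storage

token = secrets.token_urlsafe(32)
storage.save_setup_token(token)                # only the SHA-256 hash is stored
assert storage.has_pending_setup_token()
assert storage.consume_setup_token(token)      # valid exactly once
assert not storage.consume_setup_token(token)  # already consumed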
Some files were not shown because too many files have changed in this diff