Sentiment Analysis Fine Tuner¶
Bases: OpenAIFineTuner
A bolt for fine-tuning OpenAI models on translation tasks, using the OpenAI API to adapt a pre-trained model for translation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | BatchInput | The batch input data. | required |
| output | BatchOutput | The output data. | required |
| state | State | The state manager. | required |
CLI Usage:

```bash
genius HuggingFaceCommonsenseReasoningFineTuner rise \
    batch \
        --input_s3_bucket geniusrise-test \
        --input_s3_folder train \
    batch \
        --output_s3_bucket geniusrise-test \
        --output_s3_folder model \
    fine_tune \
        --args model_name=my_model tokenizer_name=my_tokenizer num_train_epochs=3 per_device_train_batch_size=8
```
YAML Configuration:

```yaml
version: "1"
bolts:
  my_fine_tuner:
    name: "HuggingFaceCommonsenseReasoningFineTuner"
    method: "fine_tune"
    args:
      model_name: "my_model"
      tokenizer_name: "my_tokenizer"
      num_train_epochs: 3
      per_device_train_batch_size: 8
      data_max_length: 512
    input:
      type: "batch"
      args:
        bucket: "my_bucket"
        folder: "my_dataset"
    output:
      type: "batch"
      args:
        bucket: "my_bucket"
        folder: "my_model"
    deploy:
      type: k8s
      args:
        kind: deployment
        name: my_fine_tuner
        context_name: arn:aws:eks:us-east-1:genius-dev:cluster/geniusrise-dev
        namespace: geniusrise
        image: geniusrise/geniusrise
        kube_config_path: ~/.kube/config
```
Supported Data Formats
- JSONL
- CSV
- Parquet
- JSON
- XML
- YAML
- TSV
- Excel (.xls, .xlsx)
- SQLite (.db)
- Feather
load_dataset(dataset_path, origin='en', target='fr', **kwargs)¶

Load a dataset from a directory.
Supported Data Formats and Structures for Translation Tasks:¶
JSONL¶
Each line is a JSON object representing an example.
CSV¶
Should contain 'en' and 'fr' columns.
Parquet¶
Should contain 'en' and 'fr' columns.
JSON¶
An array of dictionaries with 'en' and 'fr' keys.
XML¶
Each 'record' element should contain 'en' and 'fr' child elements.
YAML¶
Each document should be a dictionary with 'en' and 'fr' keys.
TSV¶
Should contain 'en' and 'fr' columns separated by tabs.
Excel (.xls, .xlsx)¶
Should contain 'en' and 'fr' columns.
SQLite (.db)¶
Should contain a table with 'en' and 'fr' columns.
Feather¶
Should contain 'en' and 'fr' columns.
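As a concrete illustration of the JSONL layout described above, the sketch below writes and reads back a tiny translation file; the file name and example records are hypothetical, chosen only to show the expected `'en'`/`'fr'` structure.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical translation examples: one {"en": ..., "fr": ...} object per line.
records = [
    {"en": "Hello", "fr": "Bonjour"},
    {"en": "Thank you", "fr": "Merci"},
]

dataset_dir = Path(tempfile.mkdtemp())
jsonl_path = dataset_dir / "train.jsonl"

# JSONL: serialize each example as a standalone JSON object on its own line.
with jsonl_path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Reading back: each line parses independently of the others.
loaded = [
    json.loads(line)
    for line in jsonl_path.read_text(encoding="utf-8").splitlines()
]
```

The same origin/target keys apply across all the formats listed here; only the container changes.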
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dataset_path | str | The path to the directory containing the dataset files. | required |
| max_length | int | The maximum length for tokenization. Defaults to 512. | 512 |
| origin | str | The origin language. Defaults to 'en'. | 'en' |
| target | str | The target language. Defaults to 'fr'. | 'fr' |
| **kwargs | | Additional keyword arguments. | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| DatasetDict | Dataset \| DatasetDict \| Optional[Dataset] | The loaded dataset. |
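For the CSV layout, parsing the required `'en'` and `'fr'` columns can be sketched with the standard library; the content and variable names below are hypothetical and are not part of the actual load_dataset implementation.

```python
import csv
import io

# Hypothetical CSV content with the required 'en' and 'fr' columns.
csv_text = "en,fr\nGood morning,Bonjour\nGoodbye,Au revoir\n"

# DictReader maps each data row to a dict keyed by the header row.
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Collect (origin, target) pairs, the shape a translation loader consumes.
pairs = [(row["en"], row["fr"]) for row in rows]
```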
prepare_fine_tuning_data(data, data_type)¶

Prepare the given data for fine-tuning.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data | Dataset \| DatasetDict \| Optional[Dataset] | The dataset to prepare. | required |
| data_type | str | Either 'train' or 'eval' to specify the type of data. | required |
Raises:

| Type | Description |
|---|---|
| ValueError | If data_type is not 'train' or 'eval'. |
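The data_type contract above can be sketched as a simple guard; `check_data_type` is a hypothetical helper written only to illustrate the documented ValueError behavior, not a function of this class.

```python
def check_data_type(data_type: str) -> str:
    # Mirror the documented contract: only 'train' or 'eval' are accepted.
    if data_type not in ("train", "eval"):
        raise ValueError(
            f"data_type must be 'train' or 'eval', got {data_type!r}"
        )
    return data_type
```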