Merged
4 changes: 2 additions & 2 deletions Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb
@@ -82,7 +82,7 @@
"source": [
"### Configure your credentials\n",
"\n",
-"Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+"Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
"\n",
"1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
"2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
@@ -350,7 +350,7 @@
}
],
"source": [
-"prompt = \"\"\"Genereate a single line of hashtags for the given topic by in the same style as the following examples:\n",
+"prompt = \"\"\"Generate a single line of hashtags for the given topic by in the same style as the following examples:\n",
"\n",
"Topic: Books\n",
"#BooksLover #Books #MyBooks #BestBook #BookOfTheYear\n",
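The few-shot hashtag prompt fixed in the hunk above can be assembled programmatically; a minimal sketch in plain Python (the helper name and the Travel topic are illustrative, not from the notebook):

```python
def build_few_shot_prompt(examples, topic):
    """Build a few-shot prompt from (topic, hashtags) example pairs."""
    lines = [
        "Generate a single line of hashtags for the given topic "
        "in the same style as the following examples:",
        "",
    ]
    for example_topic, hashtags in examples:
        lines.append(f"Topic: {example_topic}")
        lines.append(hashtags)
        lines.append("")
    # End with the target topic so the model completes the hashtag line.
    lines.append(f"Topic: {topic}")
    return "\n".join(lines)


examples = [("Books", "#BooksLover #Books #MyBooks #BestBook #BookOfTheYear")]
prompt = build_few_shot_prompt(examples, "Travel")
print(prompt)
```

The resulting string would then be passed to the model's generate call in place of the hand-written literal.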
8 changes: 4 additions & 4 deletions Gemma/[Gemma_1]Basics_with_HF.ipynb
@@ -881,8 +881,8 @@
}
],
"source": [
-"# Note: The token needs to have \"write\" permisssion\n",
-"# You can chceck it here:\n",
+"# Note: The token needs to have \"write\" permission\n",
+"# You can check it here:\n",
"# https://huggingface.co/settings/tokens\n",
"model.push_to_hub(\"my-gemma-2-finetuned-model\")"
]
@@ -921,9 +921,9 @@
},
"outputs": [],
"source": [
-"!model=\"google/gemma-1.1-2b-it\" # ID of the model in Hugging Face hube\n",
+"!model=\"google/gemma-1.1-2b-it\" # ID of the model in Hugging Face hub\n",
 "# (you can use your own fine-tuned model from\n",
-"# the prevous step)\n",
+"# the previous step)\n",
"!volume=$PWD/data # Shared directory with the Docker container\n",
"# to avoid downloading weights every run\n",
"\n",
2 changes: 1 addition & 1 deletion Gemma/[Gemma_1]Common_use_cases.ipynb
@@ -81,7 +81,7 @@
"source": [
"### Configure your credentials\n",
"\n",
-"Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+"Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
"\n",
"1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
"2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
2 changes: 1 addition & 1 deletion Gemma/[Gemma_1]Finetune_distributed.ipynb
@@ -1260,7 +1260,7 @@
"source": [
"# What's next\n",
"\n",
-"In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, uising model parallelism.\n",
+"In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, using model parallelism.\n",
"\n",
"Here are a few suggestions for what else to learn, about Keras and JAX:\n",
"* [Distributed training with Keras 3](https://keras.io/guides/distribution/).\n",
6 changes: 3 additions & 3 deletions Gemma/[Gemma_1]Minimal_RAG.ipynb
@@ -85,9 +85,9 @@
"\n",
"### Chunking the data\n",
"\n",
 "To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
 "\n",
-"In this cookcook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
+"In this cookbook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
]
},
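For context, the chunking step this hunk describes can be approximated without any dependencies; a hedged sketch that splits plain text into overlapping word windows (`chunk_text` is a hypothetical stand-in for HtmlChunker, which works on HTML structure rather than raw words):

```python
def chunk_text(text, max_words=50, overlap=10):
    """Split text into overlapping word-window passages for retrieval."""
    words = text.split()
    step = max_words - overlap  # advance by window size minus overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + max_words])
        if chunk:
            chunks.append(chunk)
        if start + max_words >= len(words):
            break  # final window already covered the tail of the text
    return chunks


doc = ("word " * 120).strip()  # a toy 120-word document
passages = chunk_text(doc, max_words=50, overlap=10)
print(len(passages))
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one passage, which is the main reason chunkers overlap windows at all.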
{
@@ -828,7 +828,7 @@
"id": "uXLpmtoeU0gx"
},
"source": [
-"Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
+"Now load the Gemma model in quantized 4-bit mode using Hugging Face."
]
},
{
4 changes: 2 additions & 2 deletions Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb
@@ -87,7 +87,7 @@
"\n",
"To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
"\n",
-"In this cookcook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
+"In this cookbook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
]
},
{
@@ -400,7 +400,7 @@
"source": [
"### Generate the answer\n",
"\n",
-"Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
+"Now load the Gemma model in quantized 4-bit mode using Hugging Face."
]
},
{
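The retrieve-then-generate flow this notebook builds with ChromaDB and Gemma can be sketched end to end with toy components; in this hedged stand-in, bag-of-words counts replace real embeddings and a sorted list replaces the vector database (all names and sample passages are illustrative):

```python
import math
import re
from collections import Counter


def embed(text):
    """Toy 'embedding': a bag-of-words count vector over lowercase tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, passages, k=1):
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]


passages = [
    "Gemma is a family of lightweight open models from Google.",
    "Docker shares a volume to avoid re-downloading weights.",
]
best = retrieve("What is Gemma?", passages)[0]
# The retrieved passage is stuffed into the prompt before generation.
prompt = f"Answer using this context:\n{best}\n\nQuestion: What is Gemma?"
print(best)
```

In the notebook, `embed` corresponds to a real embedding model, `retrieve` to a ChromaDB collection query, and the final prompt is what gets handed to the quantized Gemma model.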
2 changes: 1 addition & 1 deletion Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb
@@ -217,7 +217,7 @@
"## Load the Model\n",
"You will use the latest [Gemma-2B](https://huggingface.co/google/gemma-1.1-2b-it), this model offers 2 billion parameters, ensuring a lightweight footprint.\n",
"\n",
-"The Gemma model can be loaded using the familiar [`from_pretrained`](https://huggingface.co/docs/transformers/v4.38.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method in Transformers. This method downloads the model weights from the Hugging Face Hub the first time it is called, and subsequently intialises the Gemma model using these weights.\n"
+"The Gemma model can be loaded using the familiar [`from_pretrained`](https://huggingface.co/docs/transformers/v4.38.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method in Transformers. This method downloads the model weights from the Hugging Face Hub the first time it is called, and subsequently initializes the Gemma model using these weights.\n"
]
},
{