
Commit de2cbef

Merge pull request #177 from Rahul-Lashkari/fix/typos-gemma1-notebooks
Fix: Correct typos in Gemma_1 notebooks

2 parents ff52a09 + 4c5ec4a commit de2cbef

7 files changed: +14 −14 lines changed

Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb

Lines changed: 2 additions & 2 deletions
@@ -82,7 +82,7 @@
   "source": [
   "### Configure your credentials\n",
   "\n",
-  "Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+  "Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
   "\n",
   "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
   "2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
@@ -350,7 +350,7 @@
   }
   ],
   "source": [
-  "prompt = \"\"\"Genereate a single line of hashtags for the given topic by in the same style as the following examples:\n",
+  "prompt = \"\"\"Generate a single line of hashtags for the given topic in the same style as the following examples:\n",
   "\n",
   "Topic: Books\n",
   "#BooksLover #Books #MyBooks #BestBook #BookOfTheYear\n",
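The corrected cell above is part of a few-shot prompt. As a minimal sketch, such a prompt can be assembled from (topic, hashtags) example pairs; the "Travel" pair and the `build_prompt` helper below are illustrative assumptions, not taken from the notebook:

```python
# Build a few-shot hashtag prompt from (topic, hashtags) example pairs.
examples = [
    ("Books", "#BooksLover #Books #MyBooks #BestBook #BookOfTheYear"),
    ("Travel", "#Wanderlust #TravelDiaries #ExploreMore"),  # illustrative pair
]

def build_prompt(new_topic: str) -> str:
    """Assemble instruction + worked examples + the new topic to complete."""
    lines = ["Generate a single line of hashtags for the given topic "
             "in the same style as the following examples:", ""]
    for topic, tags in examples:
        lines += [f"Topic: {topic}", tags, ""]
    lines.append(f"Topic: {new_topic}")
    return "\n".join(lines)

print(build_prompt("Cooking"))
```

The model is then expected to continue the pattern with one hashtag line for the final topic.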

Gemma/[Gemma_1]Basics_with_HF.ipynb

Lines changed: 4 additions & 4 deletions
@@ -881,8 +881,8 @@
   }
   ],
   "source": [
-  "# Note: The token needs to have \"write\" permisssion\n",
-  "# You can chceck it here:\n",
+  "# Note: The token needs to have \"write\" permission\n",
+  "# You can check it here:\n",
   "# https://huggingface.co/settings/tokens\n",
   "model.push_to_hub(\"my-gemma-2-finetuned-model\")"
   ]
@@ -921,9 +921,9 @@
   },
   "outputs": [],
   "source": [
-  "!model=\"google/gemma-1.1-2b-it\" # ID of the model in Hugging Face hube\n",
+  "!model=\"google/gemma-1.1-2b-it\" # ID of the model in Hugging Face hub\n",
   "# (you can use your own fine-tuned model from\n",
-  "# the prevous step)\n",
+  "# the previous step)\n",
   "!volume=$PWD/data # Shared directory with the Docker container\n",
   "# to avoid downloading weights every run\n",
   "\n",
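The corrected cell above sets shell variables used to serve the model from a Docker container. As a hedged sketch, the equivalent command string could be composed in Python like this; the `text-generation-inference` image name, tag, and port mapping are assumptions based on Hugging Face's TGI documentation, not shown in this diff:

```python
import os

# Mirror the notebook's shell variables.
model = "google/gemma-1.1-2b-it"            # ID of the model in Hugging Face hub
volume = os.path.join(os.getcwd(), "data")  # shared dir; avoids re-downloading weights

# Assumed TGI invocation; image and port are illustrative, check the TGI docs.
command = (
    "docker run --gpus all --shm-size 1g -p 8080:80 "
    f"-v {volume}:/data ghcr.io/huggingface/text-generation-inference:latest "
    f"--model-id {model}"
)
print(command)
```

Mounting `volume` at `/data` is what lets the container reuse previously downloaded weights across runs, as the notebook comment notes.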

Gemma/[Gemma_1]Common_use_cases.ipynb

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@
   "source": [
   "### Configure your credentials\n",
   "\n",
-  "Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+  "Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
   "\n",
   "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
   "2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
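The steps above store `KAGGLE_USERNAME` and `KAGGLE_KEY` in Colab Secrets. A minimal sketch of reading them into environment variables, where Kaggle tooling looks for them; the fallback branch is an assumption for running outside Colab, since `google.colab` only exists inside a Colab runtime:

```python
import os

def load_kaggle_credentials() -> None:
    """Copy Kaggle credentials from Colab Secrets into environment variables."""
    try:
        from google.colab import userdata  # available only inside Colab
        os.environ["KAGGLE_USERNAME"] = userdata.get("KAGGLE_USERNAME")
        os.environ["KAGGLE_KEY"] = userdata.get("KAGGLE_KEY")
    except ImportError:
        # Outside Colab: assume the variables are already set in the shell.
        pass

load_kaggle_credentials()
```

Remember to toggle "Notebook access" on for both secrets, or `userdata.get` will raise.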

Gemma/[Gemma_1]Finetune_distributed.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1260,7 +1260,7 @@
   "source": [
   "# What's next\n",
   "\n",
-  "In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, uising model parallelism.\n",
+  "In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, using model parallelism.\n",
   "\n",
   "Here are a few suggestions for what else to learn, about Keras and JAX:\n",
   "* [Distributed training with Keras 3](https://keras.io/guides/distribution/).\n",

Gemma/[Gemma_1]Minimal_RAG.ipynb

Lines changed: 3 additions & 3 deletions
@@ -85,9 +85,9 @@
   "\n",
   "### Chunking the data\n",
   "\n",
-  "To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
+  "To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
   "\n",
-  "In this cookcook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
+  "In this cookbook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
   ]
   },
   {
@@ -828,7 +828,7 @@
   "id": "uXLpmtoeU0gx"
   },
   "source": [
-  "Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
-  "Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
+  "Now load the Gemma model in quantized 4-bit mode using Hugging Face."
   ]
   },
   {
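The corrected text above describes chunking documents into passages before ingestion. The notebook uses Google's HtmlChunker for this; as an illustrative sketch only, the idea can be shown with a simple word-window chunker (the 50-word window is an arbitrary choice, not HtmlChunker's behavior):

```python
def chunk_text(text: str, max_words: int = 50) -> list[str]:
    """Split a document into passages of at most max_words words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

doc = "word " * 120            # stand-in for a large ingested document
passages = chunk_text(doc)     # 120 words -> chunks of 50, 50, and 20 words
print(len(passages))
```

Smaller passages tend to embed more precisely, which is why retrieval relevance improves when large documents are split before being added to the vector database.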

Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb

Lines changed: 2 additions & 2 deletions
@@ -87,7 +87,7 @@
   "\n",
   "To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
   "\n",
-  "In this cookcook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
+  "In this cookbook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
   ]
   },
   {
@@ -400,7 +400,7 @@
   "source": [
   "### Generate the answer\n",
   "\n",
-  "Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
+  "Now load the Gemma model in quantized 4-bit mode using Hugging Face."
   ]
   },
   {
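The corrected sentence above refers to loading Gemma in quantized 4-bit mode. A hedged sketch of what that typically looks like with Transformers and bitsandbytes; the model ID matches the notebook, but the specific quantization settings below are common defaults assumed for illustration, not taken from this diff:

```python
# Assumed 4-bit quantization settings (common defaults, not from the notebook).
quant_kwargs = {
    "load_in_4bit": True,             # quantize weights to 4-bit on load
    "bnb_4bit_quant_type": "nf4",     # NormalFloat4, a frequently used choice
    "bnb_4bit_compute_dtype": "bfloat16",
}

# With transformers and bitsandbytes installed, the load would look like:
# from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     "google/gemma-1.1-2b-it",
#     quantization_config=BitsAndBytesConfig(**quant_kwargs),
# )
print(quant_kwargs["load_in_4bit"])
```

Loading in 4-bit cuts the model's memory footprint to roughly a quarter of full precision, which is what makes it practical on a free Colab GPU.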

Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb

Lines changed: 1 addition & 1 deletion
@@ -217,7 +217,7 @@
   "## Load the Model\n",
   "You will use the latest [Gemma-2B](https://huggingface.co/google/gemma-1.1-2b-it), this model offers 2 billion parameters, ensuring a lightweight footprint.\n",
   "\n",
-  "The Gemma model can be loaded using the familiar [`from_pretrained`](https://huggingface.co/docs/transformers/v4.38.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method in Transformers. This method downloads the model weights from the Hugging Face Hub the first time it is called, and subsequently intialises the Gemma model using these weights.\n"
+  "The Gemma model can be loaded using the familiar [`from_pretrained`](https://huggingface.co/docs/transformers/v4.38.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method in Transformers. This method downloads the model weights from the Hugging Face Hub the first time it is called, and subsequently initializes the Gemma model using these weights.\n"
   ]
   },
   {
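The corrected sentence above describes `from_pretrained`'s download-once, initialize-from-cache behavior. The toy sketch below mimics that pattern only; the `fetch_weights` stand-in and cache path are illustrative, not the actual Transformers implementation:

```python
import os
import tempfile

CACHE_DIR = os.path.join(tempfile.gettempdir(), "weights_cache_demo")

def fetch_weights(model_id: str) -> bytes:
    """Stand-in for a network download of model weights."""
    return f"weights-for-{model_id}".encode()

def from_pretrained_demo(model_id: str) -> bytes:
    """Download weights on the first call; later calls reuse the local cache."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, model_id.replace("/", "--"))
    if not os.path.exists(path):          # first call: download and cache
        with open(path, "wb") as f:
            f.write(fetch_weights(model_id))
    with open(path, "rb") as f:           # every call: initialize from cache
        return f.read()

weights = from_pretrained_demo("google/gemma-1.1-2b-it")
print(len(weights))
```

This is why only the first run of the notebook's load cell is slow: subsequent calls read the already-downloaded weights from the local Hub cache.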
