Commit 212752f

Merge branch 'main' into fix/typos-gemma2-notebooks
2 parents e26db01 + de2cbef commit 212752f

9 files changed: +22 −22 lines changed

CodeGemma/[CodeGemma_1]Common_use_cases.ipynb

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@
 "source": [
 "### Configure your credentials\n",
 "\n",
-"Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+"Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
 "\n",
 "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
 "2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
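The hunk above fixes the doubled "your your" in the Colab Secrets setup prose. As a generic illustration of how a notebook can read those two secrets back and export them for the Kaggle API — the `google.colab.userdata` import only resolves inside a Colab runtime, and the helper name plus the non-Colab fallback are illustrative, not from the notebook:

```python
import os

def load_kaggle_credentials():
    """Export Kaggle credentials from Colab Secrets into the environment."""
    try:
        # Only importable inside a Colab runtime.
        from google.colab import userdata
        os.environ["KAGGLE_USERNAME"] = userdata.get("KAGGLE_USERNAME")
        os.environ["KAGGLE_KEY"] = userdata.get("KAGGLE_KEY")
    except ImportError:
        # Outside Colab: assume the variables were already set another way.
        pass
    return os.environ.get("KAGGLE_USERNAME"), os.environ.get("KAGGLE_KEY")
```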

CodeGemma/[CodeGemma_1]Finetune_with_SQL.ipynb

Lines changed: 7 additions & 7 deletions
@@ -401,7 +401,7 @@
 "* a python script producing a SQL query\n",
 "* two separate scripts producing respectively, python and SQL code. \n",
 "\n",
-"CodeGemma picked the the first option. Bear it in mind!"
+"CodeGemma picked the first option. Bear it in mind!"
 ]
 },
 {
@@ -412,7 +412,7 @@
 "source": [
 "## Fine-tuning the model with LoRA\n",
 "\n",
-"This section of the guide focuses on training your Large Language Model (LLM) to generate SQL code fron natural language. Here, we will explore the process of fine-tuning your model to enable it to produce high quality SQL queries."
+"This section of the guide focuses on training your Large Language Model (LLM) to generate SQL code from natural language. Here, we will explore the process of fine-tuning your model to enable it to produce high quality SQL queries."
 ]
 },
 {
@@ -943,7 +943,7 @@
 "source": [
 "This time the model picked the second option providing two separate scripts producing respectively, python and SQL code! \n",
 "\n",
-"The model knows we 'prefer' to get a SQL query now but it didn't forget the other porgramming languages it's been trained on."
+"The model knows we 'prefer' to get a SQL query now but it didn't forget the other programming languages it's been trained on."
 ]
 },
 {
@@ -970,8 +970,8 @@
 "id": "HIDWBva0_SX4"
 },
 "source": [
-"# Note: The token needs to have \"write\" permisssion\n",
-"# You can chceck it here:\n",
+"# Note: The token needs to have \"write\" permission\n",
+"# You can check it here:\n",
 "# https://huggingface.co/settings/tokens\n",
 "model.push_to_hub(\"my-codegemma-7-finetuned-model\")"
 ]
@@ -1008,9 +1008,9 @@
 "id": "0wEjhtJawvSr"
 },
 "source": [
-"!model=\"google/codegemma-7b-it\" # ID of the model in Hugging Face hube\n",
+"!model=\"google/codegemma-7b-it\" # ID of the model in Hugging Face hub\n",
 "# (you can use your own fine-tuned model from\n",
-"# the prevous step)\n",
+"# the previous step)\n",
 "!volume=$PWD/data # Shared directory with the Docker container\n",
 "# to avoid downloading weights every run\n",
 "\n",
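The last hunk in this file fixes comments on shell variables (`model`, `volume`) used to serve the model with Hugging Face's text-generation-inference Docker image. A hedged sketch of how those variables are typically consumed — this is a config fragment, and the image tag, port mapping, and extra flags below are assumptions, not part of the diff:

```shell
model="google/codegemma-7b-it"   # or your own fine-tuned model ID
volume=$PWD/data                 # shared with the container so weights are cached

# Illustrative invocation; image tag and host port are assumptions.
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v "$volume":/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id "$model"
```

Mounting `$volume` at `/data` is what avoids re-downloading the weights on every run, as the notebook comment notes.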

Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb

Lines changed: 2 additions & 2 deletions
@@ -82,7 +82,7 @@
 "source": [
 "### Configure your credentials\n",
 "\n",
-"Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+"Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
 "\n",
 "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
 "2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
@@ -350,7 +350,7 @@
 }
 ],
 "source": [
-"prompt = \"\"\"Genereate a single line of hashtags for the given topic by in the same style as the following examples:\n",
+"prompt = \"\"\"Generate a single line of hashtags for the given topic by in the same style as the following examples:\n",
 "\n",
 "Topic: Books\n",
 "#BooksLover #Books #MyBooks #BestBook #BookOfTheYear\n",
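The second hunk fixes "Genereate" in a few-shot hashtag prompt. As a small sketch of assembling that kind of few-shot prompt programmatically — the helper name and the example pairs beyond the "Books" one shown in the diff are illustrative:

```python
def build_fewshot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [instruction, ""]
    for topic, hashtags in examples:
        lines += [f"Topic: {topic}", hashtags, ""]
    # Leave the final hashtag line empty for the model to complete.
    lines.append(f"Topic: {query}")
    return "\n".join(lines)

prompt = build_fewshot_prompt(
    "Generate a single line of hashtags for the given topic in the same style as the following examples:",
    [("Books", "#BooksLover #Books #MyBooks #BestBook #BookOfTheYear")],
    "Travel",
)
```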

Gemma/[Gemma_1]Basics_with_HF.ipynb

Lines changed: 4 additions & 4 deletions
@@ -881,8 +881,8 @@
 }
 ],
 "source": [
-"# Note: The token needs to have \"write\" permisssion\n",
-"# You can chceck it here:\n",
+"# Note: The token needs to have \"write\" permission\n",
+"# You can check it here:\n",
 "# https://huggingface.co/settings/tokens\n",
 "model.push_to_hub(\"my-gemma-2-finetuned-model\")"
 ]
@@ -921,9 +921,9 @@
 },
 "outputs": [],
 "source": [
-"!model=\"google/gemma-1.1-2b-it\" # ID of the model in Hugging Face hube\n",
+"!model=\"google/gemma-1.1-2b-it\" # ID of the model in Hugging Face hub\n",
 "# (you can use your own fine-tuned model from\n",
-"# the prevous step)\n",
+"# the previous step)\n",
 "!volume=$PWD/data # Shared directory with the Docker container\n",
 "# to avoid downloading weights every run\n",
 "\n",

Gemma/[Gemma_1]Common_use_cases.ipynb

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@
 "source": [
 "### Configure your credentials\n",
 "\n",
-"Add your your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
+"Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
 "\n",
 "1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
 "2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",

Gemma/[Gemma_1]Finetune_distributed.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1260,7 +1260,7 @@
 "source": [
 "# What's next\n",
 "\n",
-"In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, uising model parallelism.\n",
+"In this tutorial, you learned how to chat with the Gemma 7B model and fine-tune it to speak like a pirate, using Keras on JAX. You also learned how to load and train the large model in a distributed manner, on powerful TPUs, using model parallelism.\n",
 "\n",
 "Here are a few suggestions for what else to learn, about Keras and JAX:\n",
 "* [Distributed training with Keras 3](https://keras.io/guides/distribution/).\n",

Gemma/[Gemma_1]Minimal_RAG.ipynb

Lines changed: 3 additions & 3 deletions
@@ -85,9 +85,9 @@
 "\n",
 "### Chunking the data\n",
 "\n",
-"To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
+"To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
 "\n",
-"In this cookcook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
+"In this cookbook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
 ]
 },
 {
@@ -828,7 +828,7 @@
 "id": "uXLpmtoeU0gx"
 },
 "source": [
-"Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
+"Now load the Gemma model in quantized 4-bit mode using Hugging Face."
 ]
 },
 {
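The first hunk above fixes "cookcook" in the prose describing document chunking for retrieval. As a generic illustration of the chunking idea that prose describes — this is not the HtmlChunker API the notebook uses; the naive overlapping word-window splitter below is a sketch:

```python
def chunk_text(text, max_words=64, overlap=8):
    """Split text into overlapping word-window passages for retrieval."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + max_words])
        if chunk:
            chunks.append(chunk)
        if start + max_words >= len(words):
            break  # the remaining words are already covered by this window
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one passage, at the cost of some duplicated index entries.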

Gemma/[Gemma_1]RAG_with_ChromaDB.ipynb

Lines changed: 2 additions & 2 deletions
@@ -87,7 +87,7 @@
 "\n",
 "To improve the relevance of content returned by the vector database during retrieval, break down large documents into smaller pieces or chunks while ingesting the document.\n",
 "\n",
-"In this cookcook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
+"In this cookbook, you will use the [Google I/O 2024 Gemma family expansion launch blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/) as the sample document and Google's [Open Source HtmlChunker](https://github.com/google/labs-prototypes/tree/main/seeds/chunker-python) to chunk it up into passages."
 ]
 },
 {
@@ -400,7 +400,7 @@
 "source": [
 "### Generate the answer\n",
 "\n",
-"Now load the Gemma model in quanzied 4-bit mode using Hugging Face."
+"Now load the Gemma model in quantized 4-bit mode using Hugging Face."
 ]
 },
 {

Gemma/[Gemma_1]data_parallel_inference_in_jax_tpu.ipynb

Lines changed: 1 addition & 1 deletion
@@ -217,7 +217,7 @@
 "## Load the Model\n",
 "You will use the latest [Gemma-2B](https://huggingface.co/google/gemma-1.1-2b-it), this model offers 2 billion parameters, ensuring a lightweight footprint.\n",
 "\n",
-"The Gemma model can be loaded using the familiar [`from_pretrained`](https://huggingface.co/docs/transformers/v4.38.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method in Transformers. This method downloads the model weights from the Hugging Face Hub the first time it is called, and subsequently intialises the Gemma model using these weights.\n"
+"The Gemma model can be loaded using the familiar [`from_pretrained`](https://huggingface.co/docs/transformers/v4.38.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method in Transformers. This method downloads the model weights from the Hugging Face Hub the first time it is called, and subsequently initializes the Gemma model using these weights.\n"
 ]
 },
 {
