
Commit 89cf371

Merge branch 'master' into feature/remove_ov_affinity
2 parents ec61fd1 + 2bdaf5a commit 89cf371

10 files changed (+73 lines added, -72 lines removed)


docs/articles_en/about-openvino/performance-benchmarks.rst

Lines changed: 1 addition & 2 deletions
@@ -56,8 +56,7 @@ implemented in your solutions. Click the buttons below to see the chosen benchma

          :material-regular:`table_view;1.4em` LLM performance for AI PC

-   .. uncomment under
-   .. .. grid-item::
+   .. grid-item::

      .. button-link:: #
         :class: ovms-toolkit-benchmark-llm-result

docs/articles_en/about-openvino/release-notes-openvino.rst

Lines changed: 1 addition & 1 deletion
@@ -641,7 +641,7 @@ Previous 2024 releases
 * New samples and pipelines are now available:

   * An example IterableStreamer implementation in
-    `multinomial_causal_lm/python sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/multinomial_causal_lm>`__
+    `multinomial_causal_lm/python sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/text_generation/multinomial_causal_lm>`__

 * GenAI compilation is now available as part of OpenVINO via the -DOPENVINO_EXTRA_MODULES CMake
   option.
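
For orientation, a minimal sketch of the kind of custom streamer that the linked sample demonstrates, assuming the ``openvino_genai.StreamerBase`` interface and a hypothetical model directory:

```py
import openvino_genai

class PrintStreamer(openvino_genai.StreamerBase):
    """Prints tokens to stdout as they are generated."""
    def __init__(self, tokenizer):
        super().__init__()
        self.tokenizer = tokenizer

    def put(self, token_id) -> bool:
        # Decode and print each token; returning True would stop generation early.
        print(self.tokenizer.decode([token_id]), end="", flush=True)
        return False

    def end(self):
        print()

pipe = openvino_genai.LLMPipeline("model_dir", "CPU")  # placeholder path
pipe.generate("Why is the sky blue?", max_new_tokens=64,
              streamer=PrintStreamer(pipe.get_tokenizer()))
```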

docs/articles_en/learn-openvino/llm_inference_guide/genai-guide.rst

Lines changed: 4 additions & 4 deletions
@@ -367,7 +367,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi


       For more information, refer to the
-      `Python sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/chat_sample/>`__.
+      `Python sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/text_generation/chat_sample/>`__.

    .. tab-item:: C++
       :sync: cpp
@@ -415,7 +415,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi


       For more information, refer to the
-      `C++ sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/cpp/chat_sample/>`__
+      `C++ sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/cpp/text_generation/chat_sample/>`__


 .. dropdown:: Using GenAI with Vision Language Models
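
As a rough illustration of what the relocated chat samples do, here is a minimal sketch using the ``openvino_genai`` Python API; the model directory name is a placeholder:

```py
import openvino_genai

# Placeholder directory containing an OpenVINO-converted chat model
pipe = openvino_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0", "CPU")

pipe.start_chat()  # keeps the KV-cache and history between turns
while True:
    prompt = input("question:\n")
    if not prompt:
        break
    print(pipe.generate(prompt, max_new_tokens=100))
pipe.finish_chat()
```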
@@ -803,7 +803,7 @@ runs prediction of the next K tokens, thus repeating the cycle.


       For more information, refer to the
-      `Python sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/speculative_decoding_lm/>`__.
+      `Python sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/python/text_generation/speculative_decoding_lm/>`__.


    .. tab-item:: C++
@@ -859,7 +859,7 @@ runs prediction of the next K tokens, thus repeating the cycle.


       For more information, refer to the
-      `C++ sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/cpp/speculative_decoding_lm/>`__
+      `C++ sample <https://github.com/openvinotoolkit/openvino.genai/tree/master/samples/cpp/text_generation/speculative_decoding_lm/>`__

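For context, a minimal sketch of the speculative-decoding setup these samples implement, with placeholder model directories; the draft model proposes K candidate tokens that the main model then validates:

```py
import openvino_genai

# Placeholder paths to the main and draft models, both converted to OpenVINO format
draft = openvino_genai.draft_model("draft_model_dir", "CPU")
pipe = openvino_genai.LLMPipeline("main_model_dir", "CPU", draft_model=draft)

config = openvino_genai.GenerationConfig()
config.max_new_tokens = 100
config.num_assistant_tokens = 5  # K tokens proposed by the draft model per cycle

print(pipe.generate("The Sun is yellow because", config))
```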

docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst

Lines changed: 15 additions & 12 deletions
@@ -206,14 +206,16 @@ Here is an example of how to convert a model obtained with ``torch.export``:
 Converting a PyTorch Model from Disk
 ####################################

-PyTorch provides the capability to save models in two distinct formats: ``torch.jit.ScriptModule`` and ``torch.export.ExportedProgram``.
-Both formats can be saved to disk as standalone files, enabling them to be reloaded independently of the original Python code.
+PyTorch can save models in two formats: ``torch.jit.ScriptModule`` and ``torch.export.ExportedProgram``.
+Both formats may be saved to drive as standalone files and reloaded later, independently of the
+original Python code.

 ExportedProgram Format
 ++++++++++++++++++++++

-The ``ExportedProgram`` format is saved on disk using `torch.export.save() <https://pytorch.org/docs/stable/export.html#serialization>`__.
-Below is an example of how to convert an ``ExportedProgram`` from disk:
+You can save the ``ExportedProgram`` format using
+`torch.export.save() <https://pytorch.org/docs/stable/export.html#serialization>`__.
+Here is an example of how to convert it:

 .. tab-set::

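A minimal standalone sketch of that conversion, with a hypothetical ``.pt2`` file name:

```py
import openvino as ov

# "exported_program.pt2" is a placeholder for a file produced by torch.export.save()
ov_model = ov.convert_model("exported_program.pt2")
ov.save_model(ov_model, "model.xml")
```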

@@ -236,8 +238,9 @@ Below is an example of how to convert an ``ExportedProgram`` from disk:
 ScriptModule Format
 +++++++++++++++++++

-`torch.jit.save() <https://pytorch.org/docs/stable/generated/torch.jit.save.html>`__ serializes ``ScriptModule`` object on disk.
-To convert the serialized ``ScriptModule`` format, run ``convert_model`` function with ``example_input`` parameter as follows:
+`torch.jit.save() <https://pytorch.org/docs/stable/generated/torch.jit.save.html>`__ serializes
+the ``ScriptModule`` object on a drive. To convert the serialized ``ScriptModule`` format, run
+the ``convert_model`` function with ``example_input`` parameter as follows:

 .. code-block:: py
    :force:
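
The truncated context above opens the document's own example; as a standalone sketch, with a placeholder file name and input shape:

```py
import torch
import openvino as ov

# "scripted_model.pt" is a placeholder for a file produced by torch.jit.save()
ov_model = ov.convert_model("scripted_model.pt",
                            example_input=torch.rand(1, 3, 224, 224))
ov.save_model(ov_model, "model.xml")
```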
@@ -252,15 +255,15 @@ To convert the serialized ``ScriptModule`` format, run ``convert_model`` functio
 Exporting a PyTorch Model to ONNX Format
 ########################################

-An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with
-``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to OpenVINO Model
-with ``openvino.convert_model``. It can be considered as a backup solution if a model cannot be
-converted directly from PyTorch to OpenVINO as described in the above chapters. Converting through
-ONNX can be more expensive in terms of code, conversion time, and allocated memory.
+An alternative method of converting a PyTorch model is to export it to ONNX first
+(with ``torch.onnx.export``) and then convert the resulting ``.onnx`` file to the OpenVINO IR
+model (with ``openvino.convert_model``). It should be considered a backup solution if a model
+cannot be converted directly, as described previously. Converting through ONNX can be more
+expensive in terms of code overhead, conversion time, and allocated memory.

 1. Refer to the `Exporting PyTorch models to ONNX format <https://pytorch.org/docs/stable/onnx.html>`__
    guide to learn how to export models from PyTorch to ONNX.
-2. Follow :doc:`Convert an ONNX model <convert-model-onnx>` chapter to produce OpenVINO model.
+2. Follow the :doc:`Convert an ONNX model <convert-model-onnx>` guide to produce OpenVINO IR.

 Here is an illustration of using these two steps together:

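A sketch of the two-step flow, using a stand-in model and a hypothetical input shape:

```py
import torch
import openvino as ov

class SmallNet(torch.nn.Module):  # stand-in for a real model
    def forward(self, x):
        return torch.relu(x)

model = SmallNet()
dummy_input = torch.rand(1, 3, 224, 224)

# Step 1: export the PyTorch model to an ONNX file
torch.onnx.export(model, dummy_input, "model.onnx")

# Step 2: convert the ONNX file to OpenVINO IR and save it
ov_model = ov.convert_model("model.onnx")
ov.save_model(ov_model, "model.xml")
```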

docs/articles_en/openvino-workflow/torch-compile.rst

Lines changed: 2 additions & 1 deletion
@@ -5,7 +5,8 @@ PyTorch Deployment via "torch.compile"


 The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
 It speeds up PyTorch code by JIT-compiling it into optimized kernels.
-By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes through the following steps:
+By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes
+through the following steps:

 1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:

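As a sketch of how this looks in code (the model and input are placeholders; the ``openvino`` backend is what routes compilation through OpenVINO):

```py
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.ReLU(),
)  # placeholder model

# Compile with the OpenVINO backend instead of running eagerly
compiled = torch.compile(model, backend="openvino")
output = compiled(torch.rand(1, 8))
```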

docs/dev/pypi_publish/pypi-openvino-rt.md

Lines changed: 26 additions & 27 deletions
@@ -6,8 +6,8 @@
 Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying
 AI inference. It can be used to develop applications and solutions based on deep learning tasks,
 such as: emulation of human vision, automatic speech recognition, natural language processing,
-recommendation systems, etc. It provides high-performance and rich deployment options, from
-edge to cloud.
+recommendation systems, image generation, etc. It provides high-performance and rich deployment
+options, from edge to cloud.

 If you have chosen a model, you can integrate it with your application through OpenVINO™ and
 deploy it on various devices. The OpenVINO™ Python package includes a set of libraries for easy
@@ -26,7 +26,7 @@ versions. The complete list of supported hardware is available on the

 ## Install OpenVINO™

-### Step 1. Set Up Python Virtual Environment
+### Step 1. Set up Python virtual environment

 Use a virtual environment to avoid dependency conflicts. To create a virtual environment, use
 the following commands:
@@ -43,7 +43,7 @@ python3 -m venv openvino_env

 > **NOTE**: On Linux and macOS, you may need to [install pip](https://pip.pypa.io/en/stable/installation/).

-### Step 2. Activate the Virtual Environment
+### Step 2. Activate the virtual environment

 On Windows:
 ```sh
@@ -55,24 +55,23 @@ On Linux and macOS:
 source openvino_env/bin/activate
 ```

-### Step 3. Set Up and Update PIP to the Highest Version
+### Step 3. Set up PIP and update it to the highest version

-Run the command below:
+Run the command:
 ```sh
 python -m pip install --upgrade pip
 ```

-### Step 4. Install the Package
+### Step 4. Install the package

-Run the command below: <br>
-
-```sh
-pip install openvino
-```
+Run the command:
+```sh
+pip install openvino
+```

-### Step 5. Verify that the Package Is Installed
+### Step 5. Verify that the package is installed

-Run the command below:
+Run the command:
 ```sh
 python -c "from openvino import Core; print(Core().available_devices)"
 ```
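
Beyond that one-liner check, a minimal sketch of using the installed package, assuming a hypothetical IR file produced earlier by model conversion:

```py
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']

# "model.xml" is a placeholder for an OpenVINO IR file (e.g. produced by ovc)
compiled = core.compile_model("model.xml", "CPU")
result = compiled(np.zeros((1, 3, 224, 224), dtype=np.float32))
```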
@@ -88,30 +87,30 @@ If installation was successful, you will see the list of available devices.
     <th>Description</th>
   </tr>
   <tr>
-    <td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference.html">OpenVINO Runtime</a></td>
+    <td><a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference.html">OpenVINO Runtime</a></td>
     <td>`openvino package`</td>
     <td>OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common
         API to deliver inference solutions on the platform of your choice. Use the OpenVINO
         Runtime API to read PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle models
         and execute them on preferred devices. OpenVINO Runtime uses a plugin architecture and
         includes the following plugins:
-        <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html">CPU</a>,
-        <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html">GPU</a>,
-        <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching.html">Auto Batch</a>,
-        <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.html">Auto</a>,
-        <a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html">Hetero</a>,
+        <a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html">CPU</a>,
+        <a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html">GPU</a>,
+        <a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching.html">Auto Batch</a>,
+        <a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.html">Auto</a>,
+        <a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html">Hetero</a>,
     </td>
   </tr>
   <tr>
-    <td><a href="https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html#convert-a-model-in-cli-ovc">OpenVINO Model Converter (OVC)</a></td>
+    <td><a href="https://docs.openvino.ai/2025/openvino-workflow/model-preparation.html#convert-a-model-in-cli-ovc">OpenVINO Model Converter (OVC)</a></td>
     <td>`ovc`</td>
     <td>OpenVINO Model Converter converts models that were trained in popular frameworks to a
         format usable by OpenVINO components. </br>Supported frameworks include ONNX, TensorFlow,
         TensorFlow Lite, and PaddlePaddle.
     </td>
   </tr>
   <tr>
-    <td><a href="https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html">Benchmark Tool</a></td>
+    <td><a href="https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html">Benchmark Tool</a></td>
     <td>`benchmark_app`</td>
     <td>Benchmark Application** allows you to estimate deep learning inference performance on
         supported devices for synchronous and asynchronous modes.
@@ -122,8 +121,8 @@ If installation was successful, you will see the list of available devices.

 ## Troubleshooting

-For general troubleshooting steps and issues, see
-[Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2024/get-started/troubleshooting-install-config.html).
+For general troubleshooting, see the
+[Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2025/get-started/troubleshooting-install-config.html).
 The following sections also provide explanations to several error messages.

 ### Errors with Installing via PIP for Users in China
@@ -145,11 +144,11 @@ the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe)
 You can also view a full download list on the
 [official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist).

-### ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
+### ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory

 To resolve missing external dependency on Ubuntu*, execute the following command:
 ```sh
-sudo apt-get install libpython3.8
+sudo apt-get install libpython3.10
 ```

 ## Additional Resources
@@ -159,7 +158,7 @@ sudo apt-get install libpython3.10
 - [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
 - [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)

-Copyright © 2018-2024 Intel Corporation
+Copyright © 2018-2025 Intel Corporation
 > **LEGAL NOTICE**: Your use of this software and any required dependent software (the
 “Software Package”) is subject to the terms and conditions of the
 [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package,
