📋 Structured Content
📖 Detailed Educational Content
Code Example: Text Generation
Type: Educational content
# assumed setup from earlier in the chapter: a GPT-2 model and tokenizer
# loaded from the Hugging Face transformers library
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
generator = GPT2LMHeadModel.from_pretrained('gpt2')

text = ('We had dinner at this restaurant yesterday. It is very close to my house. '
        'All my friends were there, we had a great time. The location is excellent '
        'and the steaks were delicious. I will definitely return soon, highly '
        'recommended!')
# encode the given text into tokens
encoded_text = tokenizer.encode(text, return_tensors='pt')
# use the generator to generate more tokens (greedy decoding by default,
# i.e. the most likely word is picked at every step)
generated_tokens = generator.generate(encoded_text,
                                      max_length=200)  # max total length: prompt + generated tokens
# decode the generated tokens to convert them back to words;
# skip_special_tokens=True drops special tokens such as '<|endoftext|>'
print(tokenizer.decode(generated_tokens[0], skip_special_tokens=True))
Generated Text Output 1
Type: Educational content
We had dinner at this restaurant yesterday. It is very close to my house.
All my friends were there, we had a great time. The location is excellent
and the steaks were delicious. I will definitely return soon, highly
recommended!
I've been coming here for a while now and I've been coming here for a while
now and I've been coming here for a while now and I've been coming here for
a while now and I've been coming here for a while now and I've been coming
here for a while now and I've been coming here for a while now and I've been
coming here for a while now and I've been coming here for a while now and
I've been coming here for a while now and I've been coming here for a while
now and I've been coming here for a while now and I've been coming here for
a while now and I've been coming here for a while now and I've been coming
here for a while now and I've been coming here for a while now and I've been
coming here for a while now and
Code Example: Text Generation (do_sample=True)
Type: Educational content
# use the generator to generate more tokens;
# do_sample=True prevents GPT-2 from just predicting the most likely word at every step
generated_tokens = generator.generate(encoded_text,
                                      max_length=200,
                                      do_sample=True)
print(tokenizer.decode(generated_tokens[0], skip_special_tokens=True))
Generated Text Output 2
Type: Educational content
We had dinner at this restaurant yesterday. It is very close to my house.
All my friends were there, we had a great time. The location is excellent
and the steaks were delicious. I will definitely return soon, highly
recommended!
If you just found this place helpful. If you like to watch videos or
go to the pool while you're there, go for it! Good service - I'm from
Colorado and love to get in and out of this place. The food was amazing!
Also, we were happy to see the waitstaff with their great hands - I went
for dinner. I ordered a small side salad (with garlic on top), and had a
slice of tuna instead. When I was eating, I was able to get up and eat my
salad while waiting for my friend to pick up the plate, so I had a great
time too. Staff was welcoming and accommodating. Parking is cheap in this
neighborhood, and it is in the neighborhood that it needs to
Ministry of Education Footer
Type: METADATA
وزارة التعليم
187
Ministry of Education
2023 - 1447
🔍 Visual Elements
Ministry of Education Logo
A green, abstract, geometric logo with a stylized Arabic script, representing the Ministry of Education. Below the Arabic text, 'Ministry of Education' is written in English, along with the number '187' and the years '2023 - 1447'. The logo serves as institutional branding in the page footer.
🎴 Review Flashcards
Card count: 4 cards for this page
What is the main purpose of using `do_sample=True` in the `generator.generate` function when generating text with models such as GPT-2?
Answer: `do_sample=True` prevents GPT-2 from strictly picking the most likely word at every step, which yields text that is more varied, more creative, and less repetitive.
Explanation: When `do_sample=False` (the default), the model always chooses the word it considers most likely to follow the preceding words, which can produce predictable or repetitive text. Setting `do_sample=True` switches to sampling from the word-probability distribution, allowing less likely words to be chosen and making the output livelier and more varied (see the sketch after this card).
Hint: Think about the nature of the choice made at each generation step, and what 'sampling' means as opposed to 'deterministic selection'.
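Sampling can also be tuned. Besides `do_sample`, the `generate` method accepts parameters such as `temperature` and `top_k` that reshape the distribution being sampled from; the sketch below reuses `generator`, `tokenizer`, and `encoded_text` from the code examples above, and the specific values are illustrative assumptions, not values from this page.

# do_sample=True draws the next token from the probability distribution
# instead of always taking the single most likely token;
# temperature and top_k reshape that distribution before sampling
sampled = generator.generate(encoded_text,
                             max_length=200,
                             do_sample=True,
                             temperature=0.8,  # values below 1.0 sharpen the distribution
                             top_k=50)         # sample only among the 50 most likely tokens
print(tokenizer.decode(sampled[0], skip_special_tokens=True))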
What does `return_tensors='pt'` mean in the `tokenizer.encode` function?
Answer: It means the encoded text is returned as PyTorch tensors.
Explanation: When encoding text, the data usually needs to be in a format that machine learning models understand. `'pt'` is short for PyTorch, a popular deep learning library; passing this parameter ensures the output is ready to feed directly into PyTorch models (see the sketch after this card).
Hint: Which popular deep learning library do the letters 'pt' refer to?
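A minimal sketch of the difference, using an arbitrary short prompt: without the parameter, `encode` returns a plain Python list of token ids; with `return_tensors='pt'` it returns a PyTorch tensor of shape (1, sequence length).

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
as_list = tokenizer.encode('Hello world')                         # a plain Python list of token ids
as_tensor = tokenizer.encode('Hello world', return_tensors='pt')  # a 2-D PyTorch tensor
print(type(as_list), type(as_tensor))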
Why is `skip_special_tokens=True` used when decoding the text produced by a text generation model?
Answer: To keep special tokens (such as end-of-text markers) out of the final text, making it more readable.
Explanation: Language models process text by converting it into numbers (tokens). Along the way, special tokens may be used to mark the start of the text, its end, or other internal functions. When decoding, we want clean human-readable text, so `skip_special_tokens=True` strips these internal tokens (see the sketch after this card).
Hint: Think about the kinds of tokens that models use internally but that are not part of natural human language.
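A minimal sketch, reusing the GPT-2 tokenizer from the examples above; GPT-2's main special token is the end-of-text marker '<|endoftext|>'.

# encode a string that contains GPT-2's end-of-text special token
ids = tokenizer.encode('Hello world<|endoftext|>')
print(tokenizer.decode(ids))                            # keeps '<|endoftext|>' in the output
print(tokenizer.decode(ids, skip_special_tokens=True))  # prints just 'Hello world'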
In the first text generation example, we saw heavy repetition of the phrase 'I've been coming here for a while now and'. What is the likely explanation for this behavior under the default generation settings?
Answer: With the default settings (`do_sample=False`), the model tends to repeat the most likely words or phrases, producing repetitive, uncreative text.
Explanation: The first example did not use `do_sample=True`, so at every step the model picked the single word that was most likely given the preceding context. Such deterministic choices can fall into 'loops' in which the same words or phrases remain the most likely continuation, producing repetition (see the sketch after this card).
Hint: Recall the purpose of `do_sample=True` introduced in the second example. What behavior reappears when that option is not enabled?
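Sampling is not the only remedy for such loops. The `generate` method also accepts a `no_repeat_ngram_size` parameter that forbids the model from emitting the same n-gram twice, even under greedy decoding. A minimal sketch, reusing the variables from the code examples above (the value 3 is an illustrative choice):

# block any 3-token sequence from being generated twice,
# which breaks exact repetition loops even without sampling
generated_tokens = generator.generate(encoded_text,
                                      max_length=200,
                                      no_repeat_ngram_size=3)
print(tokenizer.decode(generated_tokens[0], skip_special_tokens=True))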