Real world examples of projects /u/RockPaperOctopus Python Education

Hi folks, this is perhaps a stupid question, but I was wondering if there are any examples of real-world problems/projects that I could look at, the kind of things a beginner developer might face day to day. I’ve been beating my head against coding, specifically Python, for a while now, but being taught how to flip a coin 10 times or build a Mad Libs generator isn’t really cutting it in terms of knowing how to actually apply coding in the real world. There seems to be an intuitive step I can’t grasp without real-world examples, and the various tutorials aren’t really that helpful in making that jump.

submitted by /u/RockPaperOctopus

Having a hard time with cron-job.org API /u/LionHeart_soul Python Education

I want to create a cron job on cron-job.org through their API using Python.

As a result, I get a 404.

I don’t know exactly what is not found.

Here’s the full code:

from datetime import datetime
import requests, json

# Cron-job.org API URL
api_url = "https://cron-job.org/api/1.0/user/cronjobs"
# api_url = "https://api.cron-job.org"

# Your cron-job.org API key
api_key = "*******************************************"

# Cron job details
command_url = "https://httpbin.org/json"

# Extract hour and minute
try:
    hour = 10
    minute = 30

    # Prepare cron job schedule (cron-job.org takes time in hour and minute,
    # not the standard cron format)
    schedule = {
        "minute": minute,
        "hour": hour,
        "mdays": ["*"],   # Every day of the month
        "months": ["*"],  # Every month
        "wdays": ["*"],   # Every day of the week
        "enabled": True,
        "url": command_url
    }

    # Create headers with API key for authentication
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    # Make the API request to create the cron job
    response = requests.post(api_url, headers=headers, data=json.dumps(schedule))

    # Check if the cron job was created successfully
    if response.status_code == 201:
        print("Cron job created successfully.")
    else:
        print(f"Failed to create cron job: {response}")
        print(response.text)
except Exception as e:
    print(f'Errrrrrror: {e}')
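For anyone debugging the same thing: a 404 from requests usually means the path does not exist on that host, not that the key is wrong, so the first step is to print exactly which URL was hit and what the server sent back. The sketch below also tries the host from the commented-out line; it assumes, based on a reading of docs.cron-job.org that should be double-checked, that the current API lives at https://api.cron-job.org, that jobs are created with PUT /jobs, and that the job fields are nested under a "job" key with an array-based schedule. Treat the endpoint and the payload shape as assumptions to verify against the docs, not as the confirmed API.

import requests

API_BASE = "https://api.cron-job.org"  # host from the commented-out line; assumed current API base
API_KEY = "***"                        # your cron-job.org API key

# Assumed payload shape: job fields nested under a "job" key -- confirm against docs.cron-job.org
payload = {
    "job": {
        "url": "https://httpbin.org/json",
        "enabled": True,
        "schedule": {
            "timezone": "UTC",
            "hours": [10],
            "minutes": [30],
            "mdays": [-1],    # -1 appears to mean "every" in the documented schedule format
            "months": [-1],
            "wdays": [-1],
        },
    }
}

resp = requests.put(
    f"{API_BASE}/jobs",                # assumed job-creation endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,                      # json= serialises the dict and sets Content-Type for you
    timeout=30,
)

# A 404 usually points at a wrong path rather than a bad key,
# so print exactly what was requested and what came back.
print(resp.request.method, resp.request.url)
print(resp.status_code, resp.text)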

submitted by /u/LionHeart_soul

Looking for people to learn programming/python with /u/Teranmix Python Education

I know some Python and want to have some friends to work on programming problems and projects with. DM me if you’re interested.

submitted by /u/Teranmix

Which is the best resource to learn Python programming? /u/AdTemporary6204 Python Education

I have found three resources:

1. Corey Schafer’s Python tutorials playlist
2. Telusko (Navin Reddy) Python for Beginners playlist
3. Python Programming by Mooc.fi

Out of these three, which is the most effective for a thorough understanding of Python?

If you have learned Python from any of the above sources, please share your experience.

submitted by /u/AdTemporary6204

binary data to string /u/Right-Key-4310 Python Education

Hi, how can I convert binary data (read from any file) to a string that contains the exact same binary data, so that I can manipulate it?
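As a sketch of the usual options (the file path below is just a placeholder, since none was given): a file opened in binary mode already gives you a bytes object, and bytes supports slicing, searching, and replacing much like str, so a separate string is often not needed at all. When a real str is required, the 'latin-1' codec round-trips every byte value losslessly, and hex or base64 give printable representations.

# Read raw bytes from any file
with open("example.bin", "rb") as f:   # "example.bin" is a placeholder path
    data = f.read()                    # -> bytes

# bytes can often be manipulated directly, no string needed
swapped = data.replace(b"\x00", b"\xff")

# Lossless bytes -> str -> bytes round-trip via latin-1
# (each of the 256 byte values maps to exactly one character)
text = data.decode("latin-1")
assert text.encode("latin-1") == data

# Alternatively, a printable representation
hex_text = data.hex()                  # e.g. "48656c6c6f..."
round_tripped = bytes.fromhex(hex_text)
assert round_tripped == data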

submitted by /u/Right-Key-4310

Connection with a MariaDB Server /u/ziSamyy Python Education

I’m making a piece of software where I need to connect my app to a MariaDB database. At first I used the MariaDB library, but when I try to build my app with a Dockerfile, “pip install mariadb” gives me an error about a config file or something similar. I don’t know how to fix it. Help please.
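For context, a failure during “pip install mariadb” inside a Docker build is usually the connector complaining that it cannot find mariadb_config, meaning the MariaDB Connector/C development files (on Debian-based images, roughly the libmariadb-dev package plus a C compiler) are not in the image; installing those system packages before running pip, or switching to a pure-Python driver such as PyMySQL, typically gets past it. Assuming the install succeeds, a minimal connection sketch with the mariadb module looks roughly like this; the host, credentials, and database name are placeholders:

import mariadb  # pip install mariadb (needs MariaDB Connector/C available at build time)

# All connection details below are placeholders -- adjust to your setup.
conn = mariadb.connect(
    host="db",            # e.g. the database service name from docker-compose
    port=3306,
    user="app_user",
    password="app_password",
    database="app_db",
)

cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone()[0])   # prints the server version if the connection works

conn.close()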

submitted by /u/ziSamyy

Extracting text from PDFs without missing spaces. /u/blademan9999 Python Education

I’ve been trying to create some code to extract text from PDFs and put it into a database; I’ve been using extract_text() for this.

However, for some reason some of the spaces between words are disappearing. How do I deal with this, or what alternative method/built-in function should I use instead?
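For what it’s worth, missing spaces are a common artifact of PDF extraction: many PDFs position words or glyph runs by coordinates instead of storing literal space characters, so the extractor has to infer where the gaps are. If this extract_text() comes from pypdf/PyPDF2 or pdfminer, it may be worth comparing the output of a layout-aware library such as pdfplumber, which exposes a tolerance for how wide a gap must be before it becomes a space. A rough sketch, with the file path and tolerance value as placeholders:

import pdfplumber  # pip install pdfplumber

with pdfplumber.open("report.pdf") as pdf:   # placeholder path
    for page in pdf.pages:
        # x_tolerance controls how large a horizontal gap must be
        # before pdfplumber inserts a space between characters.
        text = page.extract_text(x_tolerance=1.5)
        print(text)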

submitted by /u/blademan9999

this is my language translation code /u/Longjumping-Class420 Python Education

import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization, Embedding, Dense, Input, LayerNormalization, MultiHeadAttention, Dropout
from tensorflow.keras.models import Model
import numpy as np

# STEP 1: DATA LOADING
data = pd.read_csv('eng_-french.csv')  # Ensure this file exists with correct columns
source_texts = data['English words/sentences'].tolist()
target_texts = data['French words/sentences'].tolist()

# STEP 2: DATA PARSING
start_token = '[start]'
end_token = '[end]'
target_texts = [f"{start_token} {sentence} {end_token}" for sentence in target_texts]

# Text cleaning function
def clean_text(text):
    text = text.lower()
    text = text.replace('.', '').replace(',', '').replace('?', '').replace('!', '')
    return text

source_texts = [clean_text(sentence) for sentence in source_texts]
target_texts = [clean_text(sentence) for sentence in target_texts]

# STEP 3: TEXT VECTORIZATION
vocab_size = 10000     # Vocabulary size
sequence_length = 50   # Max sequence length

# Vectorization for source (English)
source_vectorizer = TextVectorization(max_tokens=vocab_size, output_sequence_length=sequence_length)
source_vectorizer.adapt(source_texts)

# Vectorization for target (French)
target_vectorizer = TextVectorization(max_tokens=vocab_size, output_sequence_length=sequence_length)
target_vectorizer.adapt(target_texts)

# STEP 4: BUILD TRANSFORMER MODEL
# Encoder Layer
class TransformerEncoder(tf.keras.layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, dropout=0.1):
        super().__init__()
        self.attention = MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = tf.keras.Sequential([Dense(ff_dim, activation="relu"), Dense(embed_dim)])
        self.layernorm1 = LayerNormalization(epsilon=1e-6)
        self.layernorm2 = LayerNormalization(epsilon=1e-6)
        self.dropout1 = Dropout(dropout)
        self.dropout2 = Dropout(dropout)

    def call(self, x, training):
        attn_output = self.attention(x, x)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(x + attn_output)
        ffn_output = self.ffn(out1)
        ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)

# Decoder Layer
class TransformerDecoder(tf.keras.layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, dropout=0.1):
        super().__init__()
        self.attention1 = MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.attention2 = MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = tf.keras.Sequential([Dense(ff_dim, activation="relu"), Dense(embed_dim)])
        self.layernorm1 = LayerNormalization(epsilon=1e-6)
        self.layernorm2 = LayerNormalization(epsilon=1e-6)
        self.layernorm3 = LayerNormalization(epsilon=1e-6)
        self.dropout1 = Dropout(dropout)
        self.dropout2 = Dropout(dropout)
        self.dropout3 = Dropout(dropout)

    def call(self, x, enc_output, training):
        attn1 = self.attention1(x, x)
        attn1 = self.dropout1(attn1, training=training)
        out1 = self.layernorm1(x + attn1)
        attn2 = self.attention2(out1, enc_output)
        attn2 = self.dropout2(attn2, training=training)
        out2 = self.layernorm2(out1 + attn2)
        ffn_output = self.ffn(out2)
        ffn_output = self.dropout3(ffn_output, training=training)
        return self.layernorm3(out2 + ffn_output)

# Model Hyperparameters
embed_dim = 256   # Embedding dimension
num_heads = 4     # Number of attention heads
ff_dim = 512      # Feedforward network dimension

# Encoder and Decoder inputs
encoder_inputs = Input(shape=(sequence_length,))
decoder_inputs = Input(shape=(sequence_length,))

# Embedding layers
encoder_embedding = Embedding(input_dim=vocab_size, output_dim=embed_dim)(encoder_inputs)
decoder_embedding = Embedding(input_dim=vocab_size, output_dim=embed_dim)(decoder_inputs)

# Transformer Encoder and Decoder
encoder_output = TransformerEncoder(embed_dim, num_heads, ff_dim)(encoder_embedding, training=True)
decoder_output = TransformerDecoder(embed_dim, num_heads, ff_dim)(decoder_embedding, encoder_output, training=True)

# Output layer
output = Dense(vocab_size, activation="softmax")(decoder_output)

# Compile the model
transformer = Model([encoder_inputs, decoder_inputs], output)
transformer.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
transformer.summary()

# STEP 5: PREPARE DATA FOR TRAINING
# Vectorize the data
source_sequences = source_vectorizer(source_texts)
target_sequences = target_vectorizer(target_texts)

# Shift target sequences for decoder input and output
decoder_input_sequences = target_sequences[:, :-1]                             # Remove last token
decoder_input_sequences = tf.pad(decoder_input_sequences, [[0, 0], [0, 1]])    # Pad to match sequence length
decoder_output_sequences = target_sequences[:, 1:]                             # Remove first token
decoder_output_sequences = tf.pad(decoder_output_sequences, [[0, 0], [0, 1]])  # Pad to match sequence length

# STEP 6: TRAIN THE MODEL
transformer.fit(
    [source_sequences, decoder_input_sequences],
    np.expand_dims(decoder_output_sequences, -1),
    batch_size=32,
    epochs=30,  # Change to 30 for full training
    validation_split=0.2
)

# STEP 7: TRANSLATION FUNCTION
def translate(sentence):
    sentence_vector = source_vectorizer([clean_text(sentence)])
    output_sentence = "[start]"
    for _ in range(sequence_length):
        # Prepare decoder input
        target_vector = target_vectorizer([output_sentence])
        # Predict next token
        prediction = transformer.predict([sentence_vector, target_vector], verbose=0)
        predicted_token = np.argmax(prediction[0, -1, :])
        predicted_word = target_vectorizer.get_vocabulary()[predicted_token]
        # Break if end token is reached
        if predicted_word == "[end]" or predicted_word == "":
            break
        output_sentence += " " + predicted_word
    # Return cleaned-up sentence
    return output_sentence.replace("[start]", "").replace("[end]", "").strip()

# Test the translation
test_sentence = "Hi."
print("English:", test_sentence)
print("french:", translate(test_sentence))

This code just gives me a blank French output: nothing at all, no error, just blank.
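A likely culprit, offered as an educated guess rather than a verified fix: in the translate loop, target_vectorizer pads the partial sentence out to sequence_length, but the prediction is read at index -1, the last padded position, rather than at the position of the last real token; the model therefore tends to predict the empty padding token straight away, the loop breaks, and the result is blank. (The decoder also has no causal mask, which hurts quality, but the indexing alone would explain the blank output.) A minimal adjustment to the loop, reusing the names from the script above, might look like this:

# Hypothetical adjustment to the decoding loop -- an educated guess, not verified
# against this exact model. Variable names match the script above.
def translate(sentence):
    sentence_vector = source_vectorizer([clean_text(sentence)])
    vocab = target_vectorizer.get_vocabulary()
    decoded_tokens = ["[start]"]
    for _ in range(sequence_length - 1):
        target_vector = target_vectorizer([" ".join(decoded_tokens)])
        prediction = transformer.predict([sentence_vector, target_vector], verbose=0)
        # Read the prediction at the position of the last real token,
        # not at the final (padded) position of the 50-token sequence.
        next_position = len(decoded_tokens) - 1
        predicted_token = np.argmax(prediction[0, next_position, :])
        predicted_word = vocab[predicted_token]
        # Note: the default TextVectorization standardization strips punctuation,
        # so the end marker is likely stored as "end" rather than "[end]".
        if predicted_word in ("[end]", "end", ""):
            break
        decoded_tokens.append(predicted_word)
    return " ".join(decoded_tokens[1:])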

submitted by /u/Longjumping-Class420