Learning Python /u/AcrobaticStudy1708 Python Education

I am extremely interested in learning Python, but I just can’t find a good course or book/PDF. Any recommendations would be appreciated.

submitted by /u/AcrobaticStudy1708

Analyzing Custody Data from Facebook Messenger Chat Logs /u/akolozvary Python Education

Hi everyone,

I’m reaching out for advice on how to analyze a large dataset of chat logs between my ex and me. About 99% of our custody-related communication happened over Facebook Messenger, and luckily Facebook lets you download the entire history. Here’s some context:

  • Background: My ex and I have been separated since 2013 and co-parented without court involvement until March of this year. Unfortunately, she broke off all communication, removed me as the father at my daughter’s school (and possibly moved her to a different school or to homeschooling), and has prevented me from seeing my daughter. She’s doing it out of spite, and it’s working well for her so far, since the court system is a bit skewed.
  • Legal Situation: A few years ago, my ex pursued child support, so I opened a paternity rights case. She later backed out, but I kept my case open because I felt excluded from important parenting decisions, and I started it up again after she cut me off. Recently, I hired a more aggressive lawyer to prepare for mediation after delays with my previous lawyer.
  • Current Problem: To prepare for mediation/court, I need to compile evidence of custody arrangements. Unfortunately, I never logged my time or had formal agreements signed. Now I’m scrambling to organize this data to lessen the financial blow.

What I’ve Done So Far:

  1. Exported all chat logs and used AI tools to convert the data into CSV format (sample code is included further below).
  2. The CSV includes columns like `Sender`, `Timestamp`, `Message`, and `Action` (e.g., Pickup/Drop-Off/Other).
  3. I’ve identified some common keywords like “ready,” “meet,” “leaving,” “driving,” etc., which are often used in discussions about custody exchanges.

Challenges:

  • I have almost no programming experience and am struggling with analyzing the data at a granular level.
  • I need help identifying and flagging messages related to custody arrangements (e.g., pickups/drop-offs) and discussions about money.
  • My goal is to calculate overnight stays and create a clear timeline of custody exchanges.

What I’m Looking For:

  1. Tips or Python scripts that can help me filter messages by keywords (e.g., “ready,” “meet”) and flag relevant rows in the CSV.
  2. Guidance on how to calculate overnight stays based on timestamps (e.g., pickups after 5 PM or drop-offs before 10 AM).
  3. Suggestions for visualizing this data (e.g., timelines or charts) to present in court.

Here’s an example of what my CSV looks like:

```
Sender,Timestamp,Message,Action
Alex (full name hidden),2021-03-04 20:45:33,"So she should have told you we already did the reading assignment. Lol",Pickup
Brandy (full name hidden),2021-03-05 10:18:43,"Shes already forgot about it as of now",Pickup
Brandy (full name hidden),2021-03-06 09:07:52,"Hey. Can you meet around noon... I'm going to be at 436 and palm springs by Publix.",Pickup
```
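
For item 1 above, a minimal pandas sketch of the kind of keyword flagging I have in mind (the filename and keyword list are placeholders; the column names match the sample CSV):

```python
import pandas as pd

# Placeholder filename; columns match the sample CSV above
df = pd.read_csv("custody_messages.csv", parse_dates=["Timestamp"])

# Illustrative keyword list; matched case-insensitively anywhere in the text
keywords = ["ready", "meet", "leaving", "driving", "pick up", "drop off"]
pattern = "|".join(keywords)
df["CustodyRelated"] = df["Message"].str.contains(pattern, case=False, na=False)

# Keep only the flagged rows for manual review
df[df["CustodyRelated"]].to_csv("custody_flagged.csv", index=False)
```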

I’ve tried using AI tools like Perplexity.ai for analysis, but it didn’t fully analyze the file as needed. I’m open to hiring someone if necessary but would love any tips or pointers from this community first.

Thanks in advance for any help or advice you can provide!

```python
from bs4 import BeautifulSoup
import pandas as pd

# Load your Facebook HTML file
html_file = r"message_1.html"
with open(html_file, "r", encoding="utf-8") as file:
    soup = BeautifulSoup(file, "html.parser")

# Extract message threads
messages = []
message_blocks = soup.find_all('div', class_='_a6-g')  # Main container for each message

for message in message_blocks:
    try:
        # Extract sender name
        sender_tag = message.find('div', class_='_2ph_ _a6-h _a6-i')
        sender = sender_tag.text.strip() if sender_tag else "Unknown"

        # Extract timestamp
        timestamp_tag = message.find('div', class_='_a72d')
        timestamp = timestamp_tag.text.strip() if timestamp_tag else "Unknown"

        # Extract message content
        content_tag = message.find('div', class_='_2ph_ _a6-p')
        if content_tag:
            content = content_tag.get_text(separator=" ").strip()
        else:
            content = "Unknown"

        # Append extracted data to list
        messages.append({'Sender': sender, 'Timestamp': timestamp, 'Message': content})
    except AttributeError as e:
        print(f"Error parsing message: {e}")

# Convert to Pandas DataFrame
df = pd.DataFrame(messages)

# Debugging: Print the first few rows of the DataFrame and check for missing columns
print("DataFrame contents:")
print(df.head())

# Remove duplicates and empty messages
df = df.drop_duplicates()
df = df[df['Message'].str.strip() != ""]

# Parse and clean Timestamp column
df['Timestamp'] = pd.to_datetime(df['Timestamp'], format='%b %d, %Y %I:%M:%S %p', errors='coerce')

# Drop rows with invalid timestamps (optional)
df = df.dropna(subset=['Timestamp'])

# Sort DataFrame by Timestamp
df = df.sort_values(by='Timestamp')

# Save sorted DataFrame to CSV (ensure no PermissionError)
csv_file_path = r'C:\temp\sorted_custody_schedule_new.csv'
try:
    df.to_csv(csv_file_path, index=False)
    print(f"Sorted custody-related messages saved to {csv_file_path}.")
except PermissionError as e:
    print(f"PermissionError: {e}. Please close any program using the file or save with a different name.")
```

submitted by /u/akolozvary

Issue Importing Dynamically Installed Module at Runtime /u/No_Increase_7127 Python Education

Hey, everyone! I’ve been having a lot of trouble installing and importing a module at runtime. I have no other option, since my app is running on a cluster and cannot be restarted, and the module I want to import is custom and stored in the cloud. Installing it beforehand is also not an option.
Basically, this is what I do:

```python
import sys
import os
import importlib
import subprocess

# Install
subprocess.run("cd /my_libs/FOO/ && pip3 install -e .", check=True, shell=True)

# Add to system path
sys.path.insert(0, os.path.abspath("/my_libs/FOO/foo"))

# Import
utils = importlib.import_module("foo")
```

When I run this test code, the pip installation goes fine, but I get a “No module named foo” error on the importlib line. However, if I run the code a second time, it works as expected. This leads me to believe that Python only picks up newly installed modules at startup.
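
(One avenue based on how import caching works: the importlib docs note that path finders cache directory contents, so a package installed mid-run can stay invisible until those caches are cleared. A sketch, using the same paths as the snippet above:)

```python
import importlib
import os
import subprocess
import sys

# Editable install, as in the original snippet
subprocess.run("cd /my_libs/FOO/ && pip3 install -e .", check=True, shell=True)
sys.path.insert(0, os.path.abspath("/my_libs/FOO/foo"))

# Path finders cache directory listings; clear those caches so modules
# installed after interpreter startup become importable without a restart
importlib.invalidate_caches()

utils = importlib.import_module("foo")
```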

Has anyone encountered this issue before or have suggestions on how I can make this work? I’ve tried several approaches, but nothing seems to fix it.

Thanks in advance for any help!

submitted by /u/No_Increase_7127

NBA Betting Prediction Model /u/ZealousidealGuest276 Python Education

Hello! 👋

I’ve been working on a script to help me analyze NBA stats for sports bets and research. My goal is to build a strong foundation using Python and tools like the nba_api library. For context, I use data apps like Hall of Fame Bets and Outlier Pro, but I wanted to create something of my own to start learning scripting and stat analysis.

The script fetches player game logs, projects key averages (Points, Rebounds, Assists, etc.), and exports the results to a CSV file. It even supports partial player name searches (like ‘Tatum’ for Jayson Tatum).
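
A condensed sketch of that flow (hedged: the endpoint, column names, and season string reflect my reading of nba_api and may need adjusting):

```python
from nba_api.stats.static import players
from nba_api.stats.endpoints import playergamelog

# Partial-name search, e.g. "Tatum" matches Jayson Tatum
match = players.find_players_by_full_name("Tatum")[0]

# Game logs come back most-recent-first as a DataFrame
log = playergamelog.PlayerGameLog(player_id=match["id"], season="2024-25")
games = log.get_data_frames()[0]

# Project key averages over the last N games (default 5)
n = 5
projection = games.head(n)[["PTS", "REB", "AST"]].mean()
print(projection)
projection.to_csv("projection.csv", header=False)
```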

🔧 What I’ve Done So Far:

  1. Fetch NBA player stats using the nba_api library.
  2. Calculate stat projections based on user-specified recent games (default = last 5).
  3. Export results to a CSV file for further analysis.

🚀 What’s Next?

I’d love feedback, ideas for features to add, or help with improving the code structure.
My scripting knowledge is still limited, so contributions or suggestions would be incredibly helpful!

GitHub Repo:
https://github.com/parlayparlor/nba-prop-prediction-model

Feel free to test it out and let me know what you think. Let’s make this the start of something special!

submitted by /u/ZealousidealGuest276

I think I learned the basics, but now I feel lost. /u/cahit135 Python Education

Recently I completed the first part of Python Crash Course (3rd edition), and from here the book moves on to libraries like pygame, Django, etc. The thing I couldn’t figure out is whether I should learn those libraries. I want to specialize in data analysis as an MIS student. In that case it seems a bit pointless to learn a library like pygame, and it looks fairly complex from the part I’ve seen so far, but on the other hand I’ve been told it’s pretty handy for visualisation. How should I continue? Do you guys have any suggestions?

submitted by /u/cahit135

Laptop for Programming /u/PToe1705 Python Education

What do you think is the best laptop for programming? I am searching in a budget of 300-500€; you can also go above it, but I don’t want to spend too much money on it. I would like a Windows machine with at least 16 GB of RAM. Please comment with your laptop or a laptop you know.

submitted by /u/PToe1705

Library for characterizing time interval data? /u/spacester Python Education

I need to find or create functions that can look at multiple series of data: basically y values for regularly spaced x values (where x is actually the time coordinate) that lie on a continuous curve. The functions would report back in some form I can use in if statements to drive my overall program’s results. I need to understand the larger trends across multiple data series, for very many series.

I am thinking I just need simple capabilities compared to sophisticated data analysis: does the data always increase or always decrease? Is it non-linear? Are there gaps? Does it have multiple minima or maxima? What is the index value before and after an inflection point? As a bonus, I could use a function that tells me the before and after indices where two series cross.
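
(For checks like these, a small NumPy sketch may be all that’s needed. The function names here are made up, the gap test assumes missing points show up as NaN, and the inflection indices are approximate:)

```python
import numpy as np

def characterize(y):
    """Basic shape checks for one regularly spaced series."""
    y = np.asarray(y, dtype=float)
    d = np.diff(y)       # first differences: slope between neighbors
    d2 = np.diff(y, n=2) # second differences: curvature
    return {
        "always_increasing": bool(np.all(d > 0)),
        "always_decreasing": bool(np.all(d < 0)),
        "has_gaps": bool(np.any(np.isnan(y))),
        # indices where the slope changes sign: local minima/maxima
        "turning_points": (np.where(np.diff(np.sign(d)) != 0)[0] + 1).tolist(),
        # indices where the curvature changes sign: inflection points
        "inflections": (np.where(np.diff(np.sign(d2)) != 0)[0] + 1).tolist(),
    }

def crossings(y1, y2):
    """(before, after) index pairs where two series cross."""
    s = np.sign(np.asarray(y1, dtype=float) - np.asarray(y2, dtype=float))
    idx = np.where(np.diff(s) != 0)[0]
    return [(int(i), int(i + 1)) for i in idx]
```

Something like `characterize(series)["always_increasing"]` then drops straight into an if statement.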

I have been trying to do this with for loops, which got ugly, and then with list comprehensions, which are brand new to me, and I am struggling with that as well.

I found this list:

https://github.com/MaxBenChrist/awesome_time_series_in_python

But I do not know enough to choose one with any confidence.

submitted by /u/spacester

Ruff VS Code Formatter /u/youngblackkidz Python Education

Hello,
I have my Ruff config set to extend-safe-fixes = ["E711"], but when I save the file (I do have format-on-save enabled for Python in VS Code), it doesn't apply that fix for me. If I run it via the CLI, it works just fine.

The same goes for indent-width = 2: Ruff doesn't throw an error, nor does it fix the file for me.
Here is my ruff.toml file:

```toml
# Exclude a variety of commonly ignored directories.
exclude = [
    ".bzr",
    ".direnv",
    ".eggs",
    ".git",
    ".git-rewrite",
    ".hg",
    ".ipynb_checkpoints",
    ".mypy_cache",
    ".nox",
    ".pants.d",
    ".pyenv",
    ".pytest_cache",
    ".pytype",
    ".ruff_cache",
    ".svn",
    ".tox",
    ".venv",
    ".vscode",
    "__pypackages__",
    "_build",
    "buck-out",
    "build",
    "dist",
    "node_modules",
    "site-packages",
    "venv",
]

# Same as Black.
line-length = 180
indent-width = 4

fix = true
# unsafe-fixes = true

# Assume Python 3.9
target-version = "py310"

[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`) by default.
select = ["E4", "E7", "E9", "F"]
extend-safe-fixes = ["E711"]
ignore = []

# Allow fix for all enabled rules (when `--fix` is provided).
fixable = ["ALL"]
unfixable = []

# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"

[format]
# Like Black, use double quotes for strings.
quote-style = "double"

# Like Black, indent with spaces, rather than tabs.
indent-style = "space"

# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false

# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"

# Enable auto-formatting of code examples in docstrings. Markdown,
# reStructuredText code/literal blocks and doctests are all supported.
#
# This is currently disabled by default, but it is planned for this
# to be opt-out in the future.
docstring-code-format = false

# Set the line length limit used when formatting code snippets in
# docstrings.
#
# This only has an effect when the `docstring-code-format` setting is
# enabled.
docstring-code-line-length = "dynamic"
```
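
(One possibly relevant detail: as I understand the Ruff VS Code extension, format-on-save only runs the formatter, while lint fixes such as E711 are applied through a code action on save. A sketch of the VS Code settings.json side, with the extension ID as I believe it is published:)

```jsonc
// settings.json (VS Code) — a sketch, assuming the Ruff extension is installed
{
  "[python]": {
    "editor.defaultFormatter": "charliermarsh.ruff",
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.fixAll.ruff": "explicit"
    }
  }
}
```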

submitted by /u/youngblackkidz