This blog takes 8 stocks and simulates how well a portfolio invested in them would do. It also finds the weighting of each stock that would be most optimal, and reports important information such as risk, volatility, return, and Sharpe ratio.  
  The 8 stocks were chosen at random: Apple (AAPL), NVIDIA (NVDA), Microsoft (MSFT), Tesla (TSLA), Amazon (AMZN), Netflix (NFLX), Qualcomm (QCOM), and Starbucks (SBUX).  
# Reading in the prices of each stock and combining them into a central data frame.

import numpy as np
import pandas as pd
import pandas_datareader.data as web
# Get stock data from Stooq for each ticker
all_data = {ticker: web.DataReader(ticker, 'stooq')
            for ticker in ['AAPL', 'NVDA', 'MSFT', 'TSLA', 'AMZN', 'NFLX', 'QCOM', 'SBUX']}

# Extract the closing price of each ticker into a single DataFrame
price = pd.DataFrame({ticker: data['Close']
                      for ticker, data in all_data.items()})

price
AAPL NVDA MSFT TSLA AMZN NFLX QCOM SBUX
Date
2023-08-31 187.8700 493.5500 327.760 258.0800 138.0100 433.68 114.5300 97.4400
2023-08-30 187.6500 492.6400 328.790 256.9000 135.0700 434.67 113.2700 99.2400
2023-08-29 184.1200 487.8400 328.410 257.1800 134.9100 429.99 113.7800 99.1500
2023-08-28 180.1900 468.3500 323.700 238.8200 133.1400 418.06 111.6800 97.0400
2023-08-25 178.6100 460.1800 322.980 238.5900 133.2600 416.03 110.3200 95.4800
... ... ... ... ... ... ... ... ...
2018-09-10 52.4989 68.1180 104.340 19.0333 96.9505 348.41 64.6572 50.5281
2018-09-07 53.2147 67.4025 103.227 17.5493 97.6035 348.68 62.9223 50.4279
2018-09-06 53.6434 67.6153 103.714 18.7300 97.9155 346.46 62.9066 49.8271
2018-09-05 54.5516 69.0242 103.476 18.7160 99.7410 341.18 63.3783 49.4433
2018-09-04 54.9145 70.3372 106.564 19.2633 101.9760 363.60 62.5625 49.2066

1257 rows × 8 columns

  The standard deviation is how investors measure volatility and risk. The standard deviation measures the average distance of the data from its mean, so the higher it is, the more spread out the data. In finance, a higher standard deviation means a stock's price is more unpredictable, which equates to risk; a lower standard deviation means a steadier, more predictable price.  
# finding standard deviation
price.std()
AAPL     47.308642
NVDA    103.024832
MSFT     73.252391
TSLA    112.412565
AMZN     32.617489
NFLX    116.845479
QCOM     35.430275
SBUX     17.016572
dtype: float64
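  The standard deviations above are in dollars, so a high-priced stock can look riskier simply because its price level is larger. A common alternative, sketched below under the assumption that the price DataFrame from earlier is still in scope, is to measure volatility on the daily returns and annualize it over 252 trading days (the names daily_returns and annual_vol are just for illustration).  
import numpy as np

# Daily percentage returns for each ticker
daily_returns = price.pct_change()

# Annualized volatility: standard deviation of daily returns scaled by sqrt(252)
annual_vol = daily_returns.std() * np.sqrt(252)
annual_vol.sort_values()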
  The correlation between each of the stocks shows how well diversified the portfolio is. In mathematics, correlation measures the strength and direction of the relationship between two variables. In finance, it describes how two stocks move in relation to each other; in layman's terms, if one stock goes up, what does the other tend to do? Correlation ranges from -1 to 1, and for diversification a value closer to -1 is the most desirable.  
# finding correlation of stocks
price.corr()
AAPL NVDA MSFT TSLA AMZN NFLX QCOM SBUX
AAPL 1.000000 0.886188 0.974270 0.918142 0.606694 0.243717 0.882692 0.712729
NVDA 0.886188 1.000000 0.909157 0.768159 0.484619 0.305933 0.687077 0.636798
MSFT 0.974270 0.909157 1.000000 0.911839 0.659577 0.352849 0.872158 0.760491
TSLA 0.918142 0.768159 0.911839 1.000000 0.724146 0.353540 0.928892 0.667152
AMZN 0.606694 0.484619 0.659577 0.724146 1.000000 0.773137 0.759325 0.590871
NFLX 0.243717 0.305933 0.352849 0.353540 0.773137 1.000000 0.402736 0.513420
QCOM 0.882692 0.687077 0.872158 0.928892 0.759325 0.402736 1.000000 0.735419
SBUX 0.712729 0.636798 0.760491 0.667152 0.590871 0.513420 0.735419 1.000000
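  With 8 tickers there are 28 distinct pairs, so reading the matrix by eye gets tedious. One way to rank the pairs, sketched below, is to keep only the upper triangle of the matrix and stack it into a sorted list (the mask-and-stack approach and the name corr_pairs are just one way to do this). From the matrix above, AAPL and NFLX, at roughly 0.24, come out as the least correlated pair.  
import numpy as np

corr = price.corr()

# Keep one copy of each pair: the upper triangle, excluding the diagonal
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)

# Stack into a Series of (stock, stock) pairs, lowest correlation first
corr_pairs = corr.where(mask).stack().sort_values()
corr_pairs.head(3)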
  Averaging the correlation across the entire portfolio gives a single number that summarizes how well diversified it is; the closer that average is to -1, the better.  
# Finding the average correlation to show how diversified the portfolio is
averageCorr = price.corr()
averageCorrMean = averageCorr.mean()

column_sum = 0

# Loop over the column means to get the mean of the entire correlation matrix
for i in range(len(averageCorrMean)):
    column_sum += averageCorrMean.iloc[i]
column_sum = column_sum / len(averageCorrMean)
column_sum
0.7194289789100449
  This is where the math and the real fun begin. The next block finds the optimal weights for each stock. It does this by running 6,000 different scenarios, each with a different random weighting. For each weighting it computes the portfolio's expected annual return and volatility from the daily log returns of the 8 stocks, annualized over 252 trading days, along with the resulting Sharpe ratio. Once all 6,000 scenarios have been run, it compares them and outputs the most optimal one.  
# finding weights, return, volatility, and Sharpe ratio

stocks = price[['AAPL', 'NVDA', 'MSFT', 'TSLA', 'AMZN', 'NFLX', 'QCOM', 'SBUX']]

# Stooq returns dates newest-first (see the price table above); sort ascending so
# that shift(1) refers to the previous trading day when computing log returns
stocks = stocks.sort_index()
log_ret = np.log(stocks / stocks.shift(1))

# setting up variables
np.random.seed(42)
num_ports = 6000
num_stocks = 8
all_weights = np.zeros((num_ports, len(stocks.columns)))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)

# going through all possible weights
for x in range(num_ports):
    # Weights
    weights = np.array(np.random.random(num_stocks))
    weights = weights/np.sum(weights)
    
    # Save weights
    all_weights[x,:] = weights
    
    # Expected return
    ret_arr[x] = np.sum( (log_ret.mean() * weights * 252))
    
    # Expected volatility
    vol_arr[x] = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov()*252, weights)))
    
    # Sharpe Ratio
    sharpe_arr[x] = ret_arr[x]/vol_arr[x]
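  One way to see all 6,000 scenarios at once, not shown in this post but sketched below with matplotlib (an extra dependency the post does not otherwise use), is to scatter volatility against return and colour each point by its Sharpe ratio.  
import matplotlib.pyplot as plt

# Each point is one simulated weighting; colour encodes its Sharpe ratio
plt.figure(figsize=(10, 6))
plt.scatter(vol_arr, ret_arr, c=sharpe_arr, cmap='viridis')
plt.colorbar(label='Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')

# Mark the portfolio with the highest Sharpe ratio
plt.scatter(vol_arr[sharpe_arr.argmax()], ret_arr[sharpe_arr.argmax()], c='red', marker='*', s=200)
plt.show()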
  The Sharpe ratio is how investors summarize the profitability of a portfolio in a single number that can be compared against other portfolios. In finance, the Sharpe ratio measures the performance of an investment, such as a security or portfolio, relative to a risk-free asset, after adjusting for its risk.  
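  Formally, the Sharpe ratio is (Rp - Rf) / σp, where Rp is the portfolio return, Rf the risk-free rate, and σp the portfolio volatility. The loop above implicitly sets Rf to zero; the sketch below shows the same calculation with an explicit risk-free rate, where the 3% figure is purely an assumption for illustration.  
# Assumed annual risk-free rate (e.g. a Treasury yield); 3% is only an illustration
risk_free_rate = 0.03

# Sharpe ratio of every simulated portfolio, net of the risk-free rate
sharpe_rf = (ret_arr - risk_free_rate) / vol_arr
print("Max Sharpe Ratio (rf = 3%) =", sharpe_rf.max())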
# printing the max Sharpe ratio and the return and volatility at that point
max_idx = sharpe_arr.argmax()
print("Max Sharpe Ratio = ", sharpe_arr.max())
max_sr_ret = ret_arr[max_idx]
max_sr_vol = vol_arr[max_idx]
Max Sharpe Ratio =  -0.31075639590643395
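  The index of the maximum Sharpe ratio also identifies the weighting that produced it, which is the answer the simulation was run for in the first place. A minimal sketch for pulling those weights out of all_weights is shown below; the Series construction and the name optimal_weights are just one way to label them by ticker.  
# The weighting behind the best simulated portfolio, labelled by ticker
optimal_weights = pd.Series(all_weights[sharpe_arr.argmax()], index=stocks.columns)
print(optimal_weights.round(4))
print("Return at max Sharpe: ", max_sr_ret)
print("Volatility at max Sharpe: ", max_sr_vol)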