Given my specialization in risk management, the content I plan to share in this section dedicated to the third year (3A) will focus mainly on the courses specific to that specialization. I will strive to make these posts as relevant and enriching as possible, in the hope that they will serve as a valuable guide for those following a similar path.

The prediction risk

The prediction risk is the expected loss between the predicted value and the true value on new, unseen data. Under squared-error loss, it can be expressed as: \[ R(X) = \mathbb{E}[(Y - \hat{Y})^2] = \mathbb{E}[(Y - \mathbb{E}(Y \mid X))^2] \] where \(Y\) is the true value, \(\hat{Y} = \mathbb{E}(Y \mid X)\) is the predicted value, and \(X\) is the set of covariates used for prediction.
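As a quick illustration, here is a minimal sketch (simulated data and a simple linear model, both my own assumptions) that approximates this risk by the mean squared error on held-out data:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Simulate data: Y = 2 X + noise
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 1))
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=500)

# Fit on one part of the sample, evaluate on new, unseen data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Empirical prediction risk: average squared error on the test set
risk_hat = np.mean((y_test - model.predict(X_test)) ** 2)
print(risk_hat)  # close to the noise variance 0.25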

Observed Default Rate (ODR)

How can we estimate the observed default rate (ODR) of a portfolio of loans?

Here is an example of a database where we want to estimate the ODR, and the yearly ODR as the average of the quarterly ODRs.

import pandas as pd
import numpy as np


# Step 1: Create the "frequentist_PD" table
# Simulate 6 quarters of data for 2 clusters (A and B)
clusters = ['A'] * 6 + ['B'] * 6
dates = pd.date_range('2020-01-01', periods=6, freq='QE').tolist() * 2
odrQ = [0.020, 0.025, 0.030, 0.028, 0.027, 0.029,
        0.030, 0.032, 0.031, 0.033, 0.035, 0.034]

frequentist_PD = pd.DataFrame({
    'cluster': clusters,
    'hpm_arret': dates,
    'odrQ': odrQ
})

# Add an index column to simulate "obs"
frequentist_PD['obs'] = frequentist_PD.groupby('cluster').cumcount()

# Sort to guarantee temporal order within each cluster
frequentist_PD = frequentist_PD.sort_values(['cluster', 'hpm_arret']).reset_index(drop=True)

frequentist_PD
cluster hpm_arret odrQ obs
0 A 2020-03-31 0.020 0
1 A 2020-06-30 0.025 1
2 A 2020-09-30 0.030 2
3 A 2020-12-31 0.028 3
4 A 2021-03-31 0.027 4
5 A 2021-06-30 0.029 5
6 B 2020-03-31 0.030 0
7 B 2020-06-30 0.032 1
8 B 2020-09-30 0.031 2
9 B 2020-12-31 0.033 3
10 B 2021-03-31 0.035 4
11 B 2021-06-30 0.034 5
# Step 2: Build the lagged tables frequentist_PD_2_j (shifts for j = i+1, i = 1, 2, 3)

# Dictionary to store the shifted tables
frequentist_PD_2 = {}

# Base copy to which the shifted columns will be merged
base = frequentist_PD.copy()

# For each i (1 to 3), set j = i + 1 and shift the columns within each cluster
for i in range(1, 4):
    j = i + 1
    shifted = base.copy()
    shifted['obs'] = shifted['obs'] - i  # shifts the observation, like point=obs+i in SAS
    shifted = shifted.rename(columns={
        'hpm_arret': f'hpm_arret{j}',
        'odrQ': f'odrQ{j}'
    })
    # Keep 'cluster' unrenamed so the merge stays within each cluster
    frequentist_PD_2[j] = shifted[['cluster', 'obs', f'hpm_arret{j}', f'odrQ{j}']]

# Merge step by step, on cluster and obs, to emulate the frequentist_PD_3_j tables
merged = base.copy()
for j in range(2, 5):
    merged = pd.merge(merged, frequentist_PD_2[j], on=['cluster', 'obs'], how='left')

merged
cluster hpm_arret odrQ obs hpm_arret2 odrQ2 hpm_arret3 odrQ3 hpm_arret4 odrQ4
0 A 2020-03-31 0.020 0 2020-06-30 0.025 2020-09-30 0.030 2020-12-31 0.028
1 A 2020-06-30 0.025 1 2020-09-30 0.030 2020-12-31 0.028 2021-03-31 0.027
2 A 2020-09-30 0.030 2 2020-12-31 0.028 2021-03-31 0.027 2021-06-30 0.029
3 A 2020-12-31 0.028 3 2021-03-31 0.027 2021-06-30 0.029 NaT NaN
4 A 2021-03-31 0.027 4 2021-06-30 0.029 NaT NaN NaT NaN
5 A 2021-06-30 0.029 5 NaT NaN NaT NaN NaT NaN
6 B 2020-03-31 0.030 0 2020-06-30 0.032 2020-09-30 0.031 2020-12-31 0.033
7 B 2020-06-30 0.032 1 2020-09-30 0.031 2020-12-31 0.033 2021-03-31 0.035
8 B 2020-09-30 0.031 2 2020-12-31 0.033 2021-03-31 0.035 2021-06-30 0.034
9 B 2020-12-31 0.033 3 2021-03-31 0.035 2021-06-30 0.034 NaT NaN
10 B 2021-03-31 0.035 4 2021-06-30 0.034 NaT NaN NaT NaN
11 B 2021-06-30 0.034 5 NaT NaN NaT NaN NaT NaN

12 rows × 10 columns
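From there, the rolling yearly ODR can be obtained as the row-wise mean of the four quarterly columns, mirroring the mean(of odrQ odrQ2 odrQ3 odrQ4) statement in the SAS script below:

# Rolling yearly ODR: mean of the current and the next three quarterly ODRs
# (like the SAS mean() function, pandas skips missing values by default)
merged['odrY'] = merged[['odrQ', 'odrQ2', 'odrQ3', 'odrQ4']].mean(axis=1)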

The SAS script is given below:

%macro test1;

  %do i=1 %to 3 %by 1;
    %let j = %sysevalf(&i.+1);

    /* Create a table containing the shifted odrQ columns */
    data frequentist_PD_2_&j.;
      obs1 = 1;
      do while (obs1 < nobs);
        set frequentist_PD nobs=nobs;
        obs&j. = obs1 + &i.;

        set frequentist_PD (
          rename=(
            cluster=cluster&j.
            lb=lb&j.
            ub=ub&j.
            hpm_arret=hpm_arret&j.
            def=defQ&j.
            n=nQ&j.
            odrQ=odrQ&j.
          )
        ) point=obs&j.;

        output;
        keep cluster lb ub hpm_arret def defQ&j. n nQ&j. odrQ odrQ&j.;
        obs1 + 1;
      end;
    run;

    /* Initialization on the first iteration */
    %if &i. = 1 %then %do;
      data frequentist_PD_3_&j.;
        set frequentist_PD_2_&j.;
      run;
    %end;

    /* Left join with the previous table to stack the odrQ columns */
    %else %do;
      proc sql;
        create table frequentist_PD_3_&j. as
        select a.*,
               b.defQ&j.,
               b.nQ&j.,
               b.odrQ&j.
        from frequentist_PD_3_&i. as a
        left join frequentist_PD_2_&j. as b
          on a.cluster = b.cluster
          and a.hpm_arret = b.hpm_arret;
      quit;
    %end;

  %end;

  /* Final result: table containing odrQ, odrQ2, odrQ3, odrQ4 */
  data frequentist_PD_4;
    set frequentist_PD_3_&j.;
    
    /* Compute the rolling yearly ODR as the mean of the 4 quarters */
    odrY = mean(of odrQ odrQ2 odrQ3 odrQ4);
  run;

  /* Shift hpm_arret by 9 months (using intnx) */
  data frequentist_PD_4;
    set frequentist_PD_4;
    hpm_arret = intnx('month', hpm_arret, 9, 'same');
    if year(hpm_arret) = 2019 then delete; /* Drop years that are too old */
  run;

  /* Final save */
  data frequentist_PD_5;
    set frequentist_PD_4;
  run;

%mend;

/* Run the macro */
%test1;


Model Selection

We have a dataset with many covariates, but we want to include only the covariates that are relevant to the response variable. This gives us a parsimonious model, with fewer covariates, which is easier to interpret and to use for prediction.

Generally, when we add more covariates to a model, the bias of the model decreases, but the variance increases.
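A minimal simulation can make this trade-off visible. The sketch below (simulated data and a polynomial feature family, both my own choices) fits increasingly complex models and compares training and test errors:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Simulate a nonlinear relationship with noise
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.3, size=100)
x_train, y_train, x_test, y_test = x[:70], y[:70], x[70:], y[70:]

# As the number of covariates (polynomial terms) grows,
# the training error keeps falling but the test error eventually rises
for degree in [1, 3, 10, 15]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    print(degree,
          round(mean_squared_error(y_train, model.predict(x_train)), 3),
          round(mean_squared_error(y_test, model.predict(x_test)), 3))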

The adulterous woman

A woman was caught in adultery. The Pharisees brought her to Jesus and asked him what should be done with her. They said that according to the law of Moses, she should be stoned to death. Jesus replied, “Let anyone among you who is without sin be the first to throw a stone at her.”

I like this story because it shows that Jesus is a feminist.

Isaac Asimov’s “Three Laws of Robotics”

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Probability density function using Plotly

import pandas as pd
import numpy as np

# Simulate the data
df = pd.DataFrame({
    '2012': np.random.randn(200),
    '2013': np.random.randn(200) + 1
})

# Compute summary statistics
stats_df = pd.DataFrame({
    'min': df.min(),
    'mean': df.mean(),
    'median': df.median(),
    'max': df.max(),
    'std': df.std()
})

# Display
print(stats_df)
           min     mean    median       max       std
2012 -2.399943  0.07926  0.075151  2.206767  0.880948
2013 -1.076208  1.02229  0.963917  3.607347  0.920831
import plotly.express as px
import pandas as pd
import numpy as np

# Simulate the data
df = pd.DataFrame({
    '2012': np.random.randn(200),
    '2013': np.random.randn(200) + 1
})

# Convert to long format
df_long = df.melt(var_name='Year', value_name='Value')

# Create the boxplot
fig = px.box(df_long, x='Year', y='Value', points=False, title="Boxplot for 2012 and 2013", color='Year',
             labels={'Year': 'Year', 'Value': 'Values'})
fig.show()
import numpy as np
import pandas as pd
import plotly.graph_objects as go
from scipy.stats import gaussian_kde

# Example data (you can replace this with your real df["age"])
age = np.array([15, 16, 16, 17, 18, 19, 20, 21, 22, 22, 23, 24, 25, 25, 26, 27, 28])

# Remove NA if needed
age = age[~np.isnan(age)]

# KDE estimate
kde = gaussian_kde(age)

# Define range of x values
x_vals = np.linspace(age.min(), age.max(), 200)
y_vals = kde(x_vals)

# Plot using Plotly
fig = go.Figure()
fig.add_trace(go.Scatter(x=x_vals, y=y_vals, mode='lines', name='Density', line=dict(width=2)))
fig.update_layout(title="Density of age", xaxis_title="Age", yaxis_title="Density")
fig.show()

Exploring Variable Relationships in Python

Graphical analysis is a tool for understanding the relationships between the covariates and the response variable. It is very important when we want to perform a linear regression.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from statsmodels.api import OLS, add_constant

# Load the dataset
df = pd.read_csv('data/Multiple_Regression_Dataset.csv')
df.head()
R Age S Ed Ex0 Ex1 LF M N NW U1 U2 W X
0 79.1 151 1 91 58 56 510 950 33 301 108 41 394 261
1 163.5 143 0 113 103 95 583 1012 13 102 96 36 557 194
2 57.8 142 1 89 45 44 533 969 18 219 94 33 318 250
3 196.9 136 0 121 149 141 577 994 157 80 102 39 673 167
4 123.4 141 0 121 109 101 591 985 18 30 91 20 578 174
# Create a new figure

# Extract response variable and covariates
response = 'R'
covariates = [col for col in df.columns if col != response]

fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(5, 5), sharex=False, sharey=True)
axes = axes.flatten()

# Plot boxplot for binary variable 'S'
sns.boxplot(data=df, x='S', y='R', ax=axes[0])
axes[0].set_title('Boxplot of R by S')
axes[0].set_xlabel('S')
axes[0].set_ylabel('R')

# Plot regression lines for all other covariates
plot_index = 1
for cov in covariates:
    if cov != 'S':
        sns.regplot(data=df, x=cov, y='R', ax=axes[plot_index], scatter=True, line_kws={"color": "red"})
        axes[plot_index].set_title(f'{cov} vs R')
        axes[plot_index].set_xlabel(cov)
        axes[plot_index].set_ylabel('R')
        plot_index += 1

# Hide unused subplots
for i in range(plot_index, len(axes)):
    fig.delaxes(axes[i])

fig.tight_layout()
plt.show()

Spearman Correlation Matrix

# Calculate Spearman correlation matrix
spearman_corr = df.corr(method='spearman')
plt.figure(figsize=(12, 10))
sns.heatmap(spearman_corr, annot=True, cmap="coolwarm", fmt=".2f", linewidths=0.5)
plt.title("Correlation Matrix Heatmap")
plt.show()

When several variables are correlated

When several variables are correlated with each other, we keep only one of them: the one most correlated with the response variable.

# Step 2: Correlation of each variable with response R
spearman_corr_with_R = spearman_corr['R'].drop('R')  # exclude R-R

# Step 3: Identify pairs of covariates with strong inter-correlation (e.g., > 0.6)
strong_pairs = []
threshold = 0.6
covariates = spearman_corr_with_R.index

for i, var1 in enumerate(covariates):
    for var2 in covariates[i+1:]:
        if abs(spearman_corr.loc[var1, var2]) > threshold:
            strong_pairs.append((var1, var2))

# Step 4: From each correlated pair, keep only the variable most correlated with R
to_keep = set()
to_discard = set()

for var1, var2 in strong_pairs:
    if abs(spearman_corr_with_R[var1]) >= abs(spearman_corr_with_R[var2]):
        to_keep.add(var1)
        to_discard.add(var2)
    else:
        to_keep.add(var2)
        to_discard.add(var1)

# Final selection: all covariates excluding the ones to discard due to redundancy
final_selected_variables = [var for var in covariates if var not in to_discard]

final_selected_variables
['Ex1', 'LF', 'M', 'N', 'NW', 'U2']

Variance Inflation Factor (VIF) for the selected variables

from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

X = df[final_selected_variables]  

X_with_const = add_constant(X)  

vif_data = pd.DataFrame()
vif_data["variable"] = X_with_const.columns
vif_data["VIF"] = [variance_inflation_factor(X_with_const.values, i)
                   for i in range(X_with_const.shape[1])]

vif_data = vif_data[vif_data["variable"] != "const"]

print(vif_data)
  variable       VIF
1      Ex1  1.586841
2       LF  2.006338
3        M  2.042110
4        N  2.018055
5       NW  1.242898
6       U2  1.536967

Fit a linear regression model on the six selected variables after standardization, without splitting the data into train and test sets

# Variables
X = df[final_selected_variables]
y = df['R']

# Standardize the explanatory variables
scaler = StandardScaler()
X_scaled_vars = scaler.fit_transform(X)

# Restore the column names (after standardization)
X_scaled_df = pd.DataFrame(X_scaled_vars, columns=final_selected_variables)

# Add the intercept (constant)
X_scaled_df = add_constant(X_scaled_df)

# Regression with the column names preserved
model = OLS(y, X_scaled_df).fit()
print(model.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      R   R-squared:                       0.568
Model:                            OLS   Adj. R-squared:                  0.503
Method:                 Least Squares   F-statistic:                     8.773
Date:                Sat, 31 May 2025   Prob (F-statistic):           4.07e-06
Time:                        16:51:47   Log-Likelihood:                -218.24
No. Observations:                  47   AIC:                             450.5
Df Residuals:                      40   BIC:                             463.4
Df Model:                           6                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const         90.5085      3.975     22.767      0.000      82.474      98.543
Ex1           25.3077      5.008      5.054      0.000      15.187      35.429
LF             4.9155      5.631      0.873      0.388      -6.465      16.296
M              9.9027      5.681      1.743      0.089      -1.579      21.384
N              2.6733      5.647      0.473      0.639      -8.741      14.087
NW            11.1950      4.432      2.526      0.016       2.238      20.152
U2             3.1268      4.928      0.634      0.529      -6.834      13.088
==============================================================================
Omnibus:                        1.619   Durbin-Watson:                   2.089
Prob(Omnibus):                  0.445   Jarque-Bera (JB):                1.126
Skew:                           0.045   Prob(JB):                        0.569
Kurtosis:                       2.247   Cond. No.                         3.06
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

Don’t Trust ChatGPT

Each year, we see rapid advances in AI language tools, especially from companies like OpenAI. These models are evolving fast. Today, they can generate images with ease and solve complex problems in computer science, mathematics, physics, and more. But can we really trust these tools?

I don’t think so.

Today, I was writing an article about regression to the mean. The equation is given by:

\(\mathbb{E}(Y|X) = \alpha + \beta X\)

My goal was to explain the meaning of the regression coefficient and offer an intuitive technique to predict the value of \(Y\) given a value of \(X\). I assumed that the variances of \(X\) and \(Y\) were both equal to one, and that \(X\) was centered (i.e., had mean zero). I then stated that under these conditions, the slope \(\beta\) is equal to the correlation between \(X\) and \(Y\).

But ChatGPT told me this was incorrect. It insisted that this only holds true if \(Y\) is also centered.

Yet, we know that if both \(X\) and \(Y\) are scaled to have unit variance, then the slope of the regression is indeed equal to the correlation between them—even if \(Y\) is not centered. That’s a basic identity in linear regression under standardized variables.
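This identity is easy to check numerically. In the minimal sketch below (simulated data, my own parameter choices), X is centered with unit variance and Y has unit variance but a nonzero mean, yet the OLS slope still equals the correlation:

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# X centered with unit variance; Y with unit variance but mean 5
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n) + 5  # Var(Y) = 0.36 + 0.64 = 1

beta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # OLS slope
rho = np.corrcoef(x, y)[0, 1]                  # correlation

print(beta, rho)  # both close to 0.6, even though Y is not centered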

So while these AI tools can be helpful, they’re not always right. You still need to think critically, check the math, and trust your own understanding.

Sports Bring Us Together

Today, like most days, I went for a run and stopped at the park to do some exercises—push-ups, a few pull-ups, and some jump rope.

Every time, I find it amazing. I meet people from all walks of life—different backgrounds, all genders, all ages. Some look wealthy, others look like they’re struggling. Some seem strong and healthy, others clearly carry the weight of illness. There are professionals and amateurs. Some are trying sports for the very first time—and probably won’t be back. Others stick with it for a while before giving up. And then, of course, there are the regulars who keep showing up.

But what stays constant is the atmosphere. It’s always welcoming. When you greet someone, they greet you back—with kindness. That simple act of saying hello when you arrive creates a sense of safety. You never know what might happen out there, but that greeting—it’s a small sign of trust, of connection. It reminds me of Marcel Mauss’s idea of the gift and counter-gift: you offer a smile, and in return, you receive not just a smile, but a sense of protection.

Why do some couples marry while others do not?

Today, Saturday, May 10, 2025, I attended the wedding of a friend, a girl I met in high school. We were in the same grade from ninth grade all the way through senior year.

She was smart, consistently among the top students in our class. She was attentive, disciplined, serious, and hardworking. She rarely laughed out loud, but she always had a warm smile. Given her academic achievements and her work ethic, I expected her to pursue advanced studies and build a successful career. And that’s exactly what happened: she went on to study at ENSAE, one of the most prestigious schools in France, and she is now an actuary.

Now she is married too, and I hope she will find the same success in her marriage as she did in her studies and career. But this made me wonder: why do some people marry and others do not? What are the factors that influence the decision to marry?

At her wedding, I began to reflect on this question using the story of my friend as a starting point. First, she and her husband come from similar social backgrounds—which I believe played a stabilizing role in their relationship.

Then, there was distance. After her master’s, she earned a scholarship to continue her studies at ENSAE and moved to Paris, while her husband remained in Cameroon. They had to maintain a long-distance relationship for two years and waited until the third year to get married. Despite the distance, they managed to stay connected and committed.

Lastly, their social circle made a real difference. The groom repeatedly thanked his friends during the ceremony for supporting their relationship. He said they reassured his bride during moments of doubt and helped keep them grounded and hopeful.

By the end of the ceremony, I had identified three key factors that seemed to have supported their union: shared social status, the ability to overcome physical distance, and the strength of their support network. I believe there are many other factors yet to discover, and I’m curious to explore what else might influence the decision to marry.

Regression to the mean

Regression to the mean was discovered and named by Sir Francis Galton in the 19th century. It refers to the phenomenon where extreme observations tend to be followed by more moderate ones that lie closer to the average.

This concept is often misunderstood and interpreted in terms of causality. Take this proposition: “Highly intelligent women tend to marry men who are less intelligent than they are.” What is the explanation?

Some may think that highly intelligent women want to avoid competing with equally intelligent men, or that they are forced to compromise in their choice of partner because intelligent men do not want to compete with intelligent women. This explanation is wrong.

The correlation between the intelligence scores of spouses is less than perfect. If the correlation between the intelligence scores of spouses is not perfect (and if men and women on average do not differ in intelligence), then it is mathematically inevitable that highly intelligent women will be married to men who are, on average, less intelligent than they are.
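A small simulation makes this concrete (the bivariate normal setup and the correlation of 0.5 are my own assumptions): conditional on the wife’s score being well above average, the husband’s expected score is pulled back toward the mean.

import numpy as np

rng = np.random.default_rng(0)
rho = 0.5  # imperfect correlation between spouses' scores

# Standardized intelligence scores for wives and husbands
wife = rng.normal(size=1_000_000)
husband = rho * wife + np.sqrt(1 - rho**2) * rng.normal(size=1_000_000)

# Condition on highly intelligent wives (more than 2 SD above the mean)
mask = wife > 2
print(wife[mask].mean())     # about 2.37
print(husband[mask].mean())  # about rho * 2.37, i.e. 1.19: closer to the average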

Should You Invest, Save, or Keep Your Money Under the Mattress?

A quick read of the Bible can help you answer this question. I recommend reading Matthew 25:14, the parable of the servants and the master’s reward.

In this parable, a master gives three servants different amounts of money to manage during his absence. The first servant receives five talents, the second two talents, and the third one talent. The first two servants invest their money and double it, while the third servant hides his talent in the ground.

When the master returned from his journey, the first two servants presented their profits. The master was proud of them and rewarded them. But the third servant explained that he knew his master was hard and demanding; because he was afraid of losing the money, he had hidden the talent in the ground. The master was angry and disappointed, and he took the talent away. He told him that if he was so afraid, the least he could have done was put the money in a bank to earn interest.

This parable teaches us that the most important thing to do with money is to invest it wisely. If you are afraid of losing your money, you should at least put it in a bank to earn interest. Keeping money under the mattress is not a good idea and should only be a last resort.

When should we model a time series with an ARDL model?

What ultimately determines the choice of a model is the data. This assumes that we’ve already carried out a preliminary analysis to understand the nature of the risk, define its scope, and identify the relevant risk factors.

That’s why it’s misguided to say, for example, that we should—or should not—use an ARDL model to analyze a time series. What truly matters is ensuring that the residuals of the model are not autocorrelated.
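As an illustration, here is a minimal sketch (simulated data; the ARDL(1, 1) specification is my own choice) that fits an ARDL model with statsmodels and checks the residuals for autocorrelation with a Ljung-Box test:

import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulated example: y depends on its own lag and on current and lagged x
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t] + 0.3 * x[t - 1] + rng.normal(scale=0.5)

df = pd.DataFrame({'y': y, 'x': x})

# Fit an ARDL(1, 1): one lag of y, lags 0 and 1 of x
res = ARDL(df['y'], lags=1, exog=df[['x']], order=1).fit()

# Ljung-Box test on the residuals: large p-values mean
# no detected autocorrelation, which is what we want
print(acorr_ljungbox(res.resid, lags=[4, 8]))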

What drives stock prices?

It’s been about a week since I invested in Eutelsat stock at €3.59 through my PEA (Plan d’Épargne en Actions). When I woke up on Monday, May 5th, I was surprised to see that the stock had jumped to €4.20. Suddenly, it started dropping, and I was tempted to sell. Fortunately, I didn’t—because by the end of the day, the stock had climbed to over €4.65.

That’s a 29.5% increase relative to my investment:

\[ \frac{4.65 - 3.59}{3.59} \times 100 \approx 29.5\% \]

Naturally, I started wondering what could have caused such a sharp rise. I looked it up on Google and checked the news on TV—but found nothing.

Later that afternoon—maybe by chance, maybe because I had been actively searching—I stumbled upon an article in Les Échos. It mentioned that Eutelsat’s CEO, Eva Berneke, had been replaced by Jean-François Fallacher, the former CEO of Orange.

That was likely one reason for the stock’s spike. Another possible explanation is the French government’s involvement with the company. In the context of European and French national defense, Eutelsat is seen as a strategic alternative to Starlink, SpaceX’s satellite internet service.

It’s incredibly hard to predict stock prices.

On a similar note, I also noticed that oil prices had dropped recently. And again, there’s a potential explanation: the number of oil producers within OPEC has increased by three. This rise in oil production leads to a greater supply, which in turn explains the drop in oil prices.

We are the champions: the first trophy of Harry Kane’s career

Yesterday, after Bayer Leverkusen’s draw, Bayern Munich officially clinched the Bundesliga title. This marks the very first trophy of Harry Kane’s career.

One question comes to mind: Is this title more meaningful to Harry Kane than it is to other Bayern players like Thomas Müller or Manuel Neuer, who’ve already won countless trophies? I want to take it even further: Is Kane happier about this title than a die-hard Bayern Munich fan might be? To answer those questions properly, we’d have to consider several factors, but let me just share my opinion.

It’s true that after a long, demanding season, winning a title is always a great source of satisfaction for any player. But for a club like Bayern Munich, winning the Bundesliga is important, yes—but it’s also expected. The fans and the club’s leadership invest a lot of money in top-tier talent to win the Champions League. So when you’re a Bayern player, lifting the Bundesliga trophy isn’t enough—you need to win the Champions League to feel truly fulfilled.

Now, for someone like Harry Kane, who’s never won a single trophy in his career and who’s been through a lot of disappointment and heartbreak—with Tottenham and even with England—this title must feel incredibly rewarding. Let’s not forget: he lost the Champions League final in 2019 with Tottenham against Liverpool, and the Euro 2020 final with England against Italy. People even started saying he was cursed. Let’s hope he fully enjoys this title—and that it’s just the beginning of many more to come with Bayern Munich.

Warren Buffett retires at 94

The most famous investor in the world, Warren Buffett, has announced that he will retire at the end of the year, at the age of 94. He made the announcement during the annual meeting of Berkshire Hathaway, the company he has led since 1965.

What explains his success? He is known for his long-term investment strategy, which focuses on buying and holding quality companies. He has also been a strong advocate of value investing, which involves looking for undervalued stocks with strong fundamentals.

Do all these elements fully explain his success? I don’t think so. To explain it, we also need to take luck into account. He wrote a book, and many others have written about him, yet few people have achieved the kind of success he has. That’s why I propose a model of Warren Buffett’s success, formulated as follows:

Success = f(strategy, long-term investment, value investing, …) + luck

Energy performance certificate (Diagnostic de performance énergétique, DPE)

The DPE reports on the energy and environmental performance of a dwelling or building by assessing its greenhouse gas (GHG) emissions.

The DPE contains the following information:

  - information on the characteristics of the building, such as surface area, orientation, walls, windows, materials, etc.;

  - information on the dwelling’s equipment, such as heating, air conditioning, domestic hot water, ventilation, etc.

The content and procedures of the DPE are regulated. DPE data can therefore be used as an ESG (environmental, social, and governance) risk factor.

For more information, please visit the ADEME website here

Aimé Césaire inspired by the Bible

When we take time to reflect on the condition of the oppressed, the poor, and the suffering, we see that the Bible has inspired many well-known writers—among them, Aimé Césaire.

In Proverbs 31:8–9, it is written: “Speak up for those who cannot speak for themselves, for the rights of all who are abandoned. Speak up and judge fairly; defend the rights of the poor and needy.”

With this in mind, we can believe that Aimé Césaire—a powerful French poet—was deeply influenced by this text when he wrote: “My mouth will be the mouth of those who have no mouth, my voice, the freedom of those who sink into the dungeons of despair.”

Lamine Yamal, the key

Last night, Barça faced Inter Milan in the first leg of the Champions League semi-final. The match pitted the best attack in the competition, Barça, against the best defense, Inter Milan. We therefore expected a difficult, closed match with few goals. Instead, we witnessed an incredible, magnificent, and unforgettable spectacle.

Inter Milan opened the scoring with a superb goal from Marcus Thuram at the very start of the match. Inter even doubled their lead through a goal from Dumfries. At that point, it looked like it was over for Barça. Inter Milan began to defend, to park the bus. But it took Yamal to find the key to open the lock that Inter had put in place.

After gliding past Henrikh Mkhitaryan with disconcerting ease, the jewel of La Masia squared up Alessandro Bastoni and unleashed a remarkable left-footed shot that crashed against Yann Sommer’s right post before crossing the line. We saw in him shades of Ronaldinho, of Neymar, and even of Messi. Some are even asking whether he is Barça’s new Messi. What is certain is that Lamine Yamal is a genius, as his coach, the former Bayern Munich manager Hans-Dieter Flick, has said.

It is hard to believe, but this kid is barely out of adolescence, yet he shows great maturity. He is only 17 years old, and this match was his 100th for Barça. He is the youngest player in history to score in a Champions League semi-final, breaking the record of a certain Kylian Mbappé.

Thanks to him, Barça salvaged a 3-3 draw. We can’t wait to see Yamal shine in the second leg in Milan.

The book of answers

I love the Bible because you can find answers to all your questions in it. For example, if you are someone who works a lot and sleeps little, and the people around you keep telling you that you need to sleep, that sleep is restorative, and you are tired of hearing it, you can reply with Proverbs 20:13: “Do not love sleep or you will grow poor; stay awake and you will have food to spare.”

Reading the Bible, especially the book of Proverbs or Ecclesiastes, written by Solomon, will give you intelligence and wisdom.

Categories

How tall is Junior? Suppose Junior is 1.5 m tall. Your answer is a function of his age: he is very tall if I tell you that he is 6 years old, and very short if he is 20. Your brain automatically retrieves the relevant norm, which allows you to make a quick decision.

We are also able to match intensity across categories and answer the question: “How expensive is a restaurant that matches Junior’s height?”

Our world is broken into categories for which we have a norm. And those norms allow us to make quick decisions.

What do the first 100 days of Donald Trump evoke for you?

Even if Trump sticks to his program, the first 100 days have nonetheless been marked by a major disruption in the global economy, which has had a negative impact on various markets, including Wall Street, the European market, and the Asian market.

Furthermore, we were shocked by his stances on the war in Ukraine, his attack on the Chairman of the Federal Reserve, Jerome Powell, as well as the increase in tariffs — especially in the context of the trade war with China.

The power of speech.

The Bible highlights the power of speech. By speech, God created the world. To non-believers, this may seem ridiculous. But I think they will agree with me that those who master speech hold power. They can seduce, enchant, and persuade; sometimes they can even manipulate.

In the Bible, notably in the book of Proverbs, the author advocates the proper use of speech. He says that the tongue, which allows us to speak, has the power of life and death, and those who love to talk will taste its fruits. He then advises us to think before speaking: whoever answers before listening shows folly and covers himself with shame.

Several writers have underlined this importance of speech, and of words in particular. Jean-Paul Sartre, a French writer, said that “words are loaded pistols.”
