Hello, thank you for your response!
Python is indeed compiled into a lightweight bytecode format before execution (the *.pyc files you are referring to are only generated by the CPython implementation; other Python implementations compile to something else, e.g. IronPython targets the CLR and Jython targets the JVM).
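If you want to see that compilation step happen explicitly, here is a minimal sketch using the standard-library `py_compile` module (the `hello_demo.py` filename is just for illustration):

```python
import pathlib
import py_compile

# Write a tiny module, then ask CPython to byte-compile it explicitly.
source = pathlib.Path("hello_demo.py")
source.write_text("print('hello')\n")

# py_compile.compile() returns the path of the generated .pyc file,
# which normally lands in the __pycache__ directory.
pyc_path = py_compile.compile(str(source))
print(pyc_path)  # e.g. __pycache__/hello_demo.cpython-312.pyc
```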
However, like Java, the generated bytecode is then *interpreted* by a virtual machine (in "real time", so to speak, because the Python interpreter still has to do type checking and name binding at runtime).
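To make that concrete, here is a small sketch with the standard `dis` module showing the bytecode the CPython VM actually interprets; note how the addition instruction stays generic because the operand types are only resolved when it runs:

```python
import dis

def add(a, b):
    return a + b

# Disassemble the function: the VM executes these instructions one by one.
# The generic BINARY_OP instruction (BINARY_ADD on older CPython versions)
# has to look up the operand types every time it executes.
dis.dis(add)
```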
Most modern interpreted languages pre-compile code into an intermediate bytecode; otherwise they would have to rely on what is called a tree-walk interpreter and would be too slow/inefficient to use. The only mainstream language I know of that relied on a tree-walk interpreter is Ruby (before version 1.9).
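For anyone curious what a tree-walk interpreter looks like, here is a toy sketch built on the standard `ast` module; it re-walks the syntax tree on every evaluation, which is exactly the overhead bytecode compilation avoids:

```python
import ast

class TreeWalkEvaluator(ast.NodeVisitor):
    """A toy tree-walk interpreter for simple arithmetic expressions."""

    def evaluate(self, source):
        # Parse the source into an AST, then walk it node by node.
        tree = ast.parse(source, mode="eval")
        return self.visit(tree.body)

    def visit_Constant(self, node):
        return node.value

    def visit_BinOp(self, node):
        # Every evaluation re-dispatches on the node types: no caching,
        # no flat instruction stream, just recursive tree traversal.
        left = self.visit(node.left)
        right = self.visit(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
        raise NotImplementedError(type(node.op).__name__)


print(TreeWalkEvaluator().evaluate("1 + 2 * 3"))  # 7
```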
Bytecode is not machine code and does not run directly on bare metal! This is why languages like Python, and even Java, are slower than traditional compiled languages like C/C++ or Rust.
Yes, dynamic typing makes Python slower than languages like Java, but it's not the only reason.
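As a rough, machine-dependent illustration of that combined overhead, compare a pure-Python loop with the same operation on a typed NumPy array (assuming NumPy is installed):

```python
import timeit

import numpy as np

py_list = list(range(1_000_000))
np_array = np.arange(1_000_000)

# Pure-Python loop: every addition boxes integers and re-checks types.
python_time = timeit.timeit(lambda: sum(x + 1 for x in py_list), number=10)

# NumPy: a single typed C loop over a contiguous int64 buffer.
numpy_time = timeit.timeit(lambda: (np_array + 1).sum(), number=10)

print(f"pure Python: {python_time:.3f}s, NumPy: {numpy_time:.3f}s")
```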
Finally, no one is blaming Python for being slow here :) Python is a great language.
The fact that it is slower than compiled languages is undeniable, but it also provides an easier-to-use syntax and makes developing complex projects, like data science projects, much faster.
Though most of those packages are written in C or Cython, the glue code you are referring to still has to exist, and despite best efforts it can sometimes be inefficient. This is exactly why projects such as Numba exist; otherwise they would have no purpose.
Take user-defined functions applied to a DataFrame (the example we gave in this article): the only way to run them efficiently on many data points is either to compile them with Numba, Cython, and so on, or to parallelize the operations ☺️
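Here is a minimal sketch of that idea, assuming pandas and Numba are installed; the column name `x` and the polynomial UDF are just placeholders for whatever function you actually apply:

```python
import numpy as np
import pandas as pd
from numba import njit

# Hypothetical DataFrame; the column name "x" is just for illustration.
df = pd.DataFrame({"x": np.random.rand(1_000_000)})

def slow_udf(x):
    # With .apply(), this runs once per row inside the interpreter.
    return x * x + 2 * x + 1

@njit
def fast_udf(values):
    # Numba compiles this loop to machine code on its first call.
    out = np.empty_like(values)
    for i in range(values.size):
        x = values[i]
        out[i] = x * x + 2 * x + 1
    return out

result_slow = df["x"].apply(slow_udf)        # row-by-row, interpreted
result_fast = fast_udf(df["x"].to_numpy())   # single compiled loop
```

The @njit version pays a one-time compilation cost on the first call, then runs as one machine-code loop instead of a million interpreted function calls.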