The Kalman filtering update on Wikipedia is inefficient when the latent space dimension $n$ is much larger than the observation space dimension $m$. This is because the covariance update involves matrix multiplications in the full latent space, which have $\mathcal{O}(n^3)$ complexity. Here is a faster way (all notation as on Wikipedia):
\begin{align}
C_k &\gets H P_{k|k-1}\\
S_k &\gets R_k + C_k H^\top\\
K_k &\gets ( S_k^{-1} C_k )^\top\\
P_{k|k} &\gets P_{k|k-1} - K_k C_k
\end{align}
Here $C_k = H P_{k|k-1}$ is an $m\times n$ intermediate value.
The complexity is dominated by the $\mathcal{O}(m n^2)$ multiplication in the last step. This is in contrast to the standard update, whose covariance update involves four $\mathcal{O}(n^3)$ multiplications. A speed-up of $\mathcal{O}(n/m)$ may seem minor, but this form of the update also improves constant factors, and the speed-up was substantial in my application. This Technical Note on Manipulating Multivariate Gaussian Distributions may be helpful.
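The update above can be sketched in NumPy; the function name and docstring are my own, but the five lines of linear algebra follow the equations directly. Note that $S_k^{-1} C_k$ is computed with a linear solve rather than an explicit inverse:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Measurement update in the reduced-complexity form above.

    x: (n,) prior mean;  P: (n, n) prior covariance;
    z: (m,) observation; H: (m, n) observation matrix; R: (m, m) noise cov.
    """
    C = H @ P                      # C_k = H P_{k|k-1}, shape (m, n)
    S = R + C @ H.T                # innovation covariance S_k, (m, m)
    K = np.linalg.solve(S, C).T    # K_k = (S_k^{-1} C_k)^T, (n, m)
    x_new = x + K @ (z - H @ x)    # posterior mean
    P_new = P - K @ C              # posterior covariance, O(m n^2)
    return x_new, P_new
```

Since $S_k$ is symmetric positive definite, a Cholesky-based solve (e.g. `scipy.linalg.cho_solve`) would shave off a further constant factor.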
Rotating the latent space so that the projection $H$ can be computed simply by dropping rows/columns also leads to significant speed-ups, as it removes two further matrix multiplications. In this form, the update uses only one $\mathcal{O}(m^2 n)$ and one $\mathcal{O}(mn^2)$ operation, in contrast to the eleven matrix multiplications of a naive implementation of the standard covariance update.
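As a sketch of this special case, suppose the latent basis has been rotated so that $H = [\,I_m \;\; 0\,]$ selects the first $m$ coordinates (an assumption I introduce for illustration). Every product with $H$ then becomes a slice, leaving only the solve and the final rank-$m$ correction:

```python
import numpy as np

def kalman_update_selector(x, P, z, R, m):
    """Measurement update when H = [I_m  0], i.e. the observation reads
    off the first m latent coordinates. All H-products reduce to slicing."""
    C = P[:m, :]                   # H P: the first m rows of P, (m, n)
    S = R + C[:, :m]               # R + H P H^T: top-left m x m block of P
    K = np.linalg.solve(S, C).T    # O(m^2 n) after the O(m^3) factorization
    x_new = x + K @ (z - x[:m])    # innovation uses the sliced mean
    P_new = P - K @ C              # O(m n^2)
    return x_new, P_new
```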