think of functions as vectors, and transforms in terms of vector spaces, their duals, and an inner product. a transform (like the fourier transform) is simply the set of expansion coefficients of a function/vector with respect to some basis (for the FT it's the fourier basis). this works very similarly to how a 3d vector can be represented by x,y,z coordinates with respect to some orthonormal basis. the inner product is what allows you to compute the expansion coefficients; it lets you take a linear combination of basis vectors and kill off all terms except one, isolating the coordinate you want.
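here's a minimal numpy sketch of that picture (the random orthonormal basis, the signal length, and the toy vectors are just my own illustrative choices, not anything canonical):

import numpy as np

# 3d case: coordinates of v in an orthonormal basis are just inner products <e_i, v>
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # columns of Q: a random orthonormal basis
v = rng.standard_normal(3)
coords = Q.T @ v                                   # inner product with each basis vector
assert np.allclose(Q @ coords, v)                  # the linear combination reconstructs v

# discrete fourier case: same idea, the basis vectors are sampled complex exponentials
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)       # rows: conjugated fourier basis vectors
x = rng.standard_normal(N)
coeffs = F @ x                                     # inner products <e_k, x>  -- the DFT
assert np.allclose(coeffs, np.fft.fft(x))          # matches numpy's FFT
# orthogonality kills the cross terms, so reconstruction is the adjoint (up to 1/N)
assert np.allclose(np.conj(F).T @ coeffs / N, x)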
some PDEs can be converted into algebraic equations by expanding in the right basis. linear constant-coefficient ("harmonic") PDEs are a great example; complex exponentials are solutions, and the derivative of a complex exponential is the same exponential multiplied by the derivative of its argument, so in the fourier basis all the derivatives evaporate into multiplications by i*k.
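as a concrete sketch (the toy problem u'' - u = f on a periodic domain and the manufactured solution are entirely my own choices):

import numpy as np

N = 256
L = 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # wavenumbers: d/dx becomes multiplication by i*k

# manufacture a right-hand side from a known solution u = exp(sin x)
u_exact = np.exp(np.sin(x))
f = (np.cos(x) ** 2 - np.sin(x) - 1.0) * u_exact    # f = u'' - u, worked out by hand

# in fourier space: (i*k)^2 u_hat - u_hat = f_hat  =>  u_hat = -f_hat / (k^2 + 1)
f_hat = np.fft.fft(f)
u_hat = -f_hat / (k ** 2 + 1.0)                     # purely algebraic; no derivatives left
u = np.fft.ifft(u_hat).real

assert np.allclose(u, u_exact)                      # spectral accuracy on a smooth periodic problem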
sorry, i don't have many good resources. i've just learned this stuff over the years. it's challenging because there's a lot of subtlety between use cases (discrete vs continuous variables, countably vs uncountably infinite-dimensional vector spaces, etc.), but there is a lot of similarity between the ideas too.
perhaps take a look at signal processing books, or physics books with pertinent PDEs.
Oppenheim's Digital Signal Processing is well known, but it's fairly specific to that domain.