>>13865476
Okay, you have to understand that in statistics there are two types of independence: linear independence and statistical independence. I assume you really care about the latter and not the former.
The simplest test for linear independence is simply the correlation function. This gives you a number between -1 and 1 telling you how correlated (or linearly dependent) two datasets are; 0 implies linear independence. BUT linear independence DOES NOT imply statistical independence. Look at pic related. The top row shows you the case where linear independence (correlation) does tell you about statistical independence: when the correlation is 0 the relationship is just meaningless random noise.
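As a rough sketch of that first check (assuming Python with numpy/scipy, nothing specific to your data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two toy datasets: y1 is a noisy linear function of x, y2 is pure noise.
x = rng.normal(size=500)
y1 = 2.0 * x + rng.normal(scale=0.5, size=500)   # linearly dependent on x
y2 = rng.normal(size=500)                        # independent of x

r1, _ = stats.pearsonr(x, y1)   # correlation close to +1
r2, _ = stats.pearsonr(x, y2)   # correlation close to 0
print(f"corr(x, y1) = {r1:.3f}, corr(x, y2) = {r2:.3f}")
```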
On the bottom row we see a different story: all of those relationships have ZERO correlation, but there is clearly a relationship (i.e. they are not statistically independent). The goal of statistical modeling is finding the transformation that takes something like the bottom row and makes it look more like the middle or top row.
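A minimal sketch of what I mean, with a made-up bottom-row-style example (a parabola): the raw correlation with x is essentially zero, but correlating against the transformed feature x**2 recovers the relationship.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# y depends on x, but the dependence is not linear.
x = rng.uniform(-1, 1, size=1000)
y = x**2 + rng.normal(scale=0.05, size=1000)

r_raw, _ = stats.pearsonr(x, y)        # near 0: "linearly independent"
r_trans, _ = stats.pearsonr(x**2, y)   # near 1: clearly not statistically independent
print(f"corr(x, y) = {r_raw:.3f}, corr(x^2, y) = {r_trans:.3f}")
```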
Anyway, the best thing to do is check the correlation to start with. Visualize the scatterplot of your data, see if it is obviously linear, then fit a simple linear model to the data. The summary statistics of that linear model essentially act as the 'test', and the F statistic acts as the test statistic for the null hypothesis that there is no linear relationship between the data. Be careful with the p-values though: they are calculated assuming normality, which may not hold for your data (the test statistics themselves are fine).
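Something like this with statsmodels (just a sketch on fake data; your variables will obviously differ):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(size=200)

# Fit y ~ 1 + x; the F test is against the null of no linear relationship.
X = sm.add_constant(x)
model = sm.OLS(y, X).fit()

print(model.summary())                        # full summary table acts as the 'test'
print("F statistic:", model.fvalue)
print("p-value of F test:", model.f_pvalue)   # assumes normal errors, so treat with care
```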