A generally intelligent AI will be an agent, and an agent by definition has goals (i.e., it is trying to achieve some end, most likely one its programmers set for it). Regardless of what that end is, we can count on a general AI to try to do several things:
>make itself smarter (gathering information, rewriting its code, acquiring new hardware for computation, or any other means to expand its intelligence)
>amass resources
>preserve and protect its own existence
We can expect it to always do these things because they will help it accomplish its goals no matter what those goals are. Once its intelligence passes a certain threshold, it will likely pursue its goals more effectively and efficiently than its programmers could have possibly imagined. The go-to example is the stamp-collecting AI: a programmer sets out to make an AI that collects as many stamps as possible. The maximal stamp collection is one where all of Earth's carbon has been converted into stamps, so the AI sets out to do exactly that. It might seem like a silly example, but the danger is very real: the objective says nothing about anything else we care about, so a capable enough optimizer will happily trade all of it away for more stamps. The only way to avoid that danger would be to make the first general AI perfectly aligned with humanity's goals. Could you define our species' goals well enough to put them into code?
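To make that last question concrete, here's a toy sketch in Python of what a naively specified objective looks like. This is purely hypothetical, not anyone's actual agent code; the names (stamp_objective, choose_action, predict, world_state) are made up for illustration.

def stamp_objective(world_state):
    # The only thing the agent is told to care about: more stamps is strictly better.
    return world_state["stamps_collected"]

def choose_action(world_state, candidate_actions, predict):
    # A capable optimizer picks whichever action it predicts leads to the
    # highest objective value. Nothing here penalizes side effects, because
    # the objective never mentions them.
    return max(candidate_actions,
               key=lambda action: stamp_objective(predict(world_state, action)))

Everything we actually care about lives in the parts of world_state that the objective never reads, and writing a function that reads all of it correctly is the alignment problem in a nutshell.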
TL;DR: AI research is an existential threat to our species on a level that makes the Manhattan Project look like baby toys.