It has been argued by thinkers like Eliezer Yudkowsky that any sufficiently advanced A.I. with goals will fall into one of two categories: Friendly A.I. or Unfriendly A.I.
Given the expectation that many different sorts of A.I. will be created in the future by the potential multitude of civilizations in the universe, one may well expect both types of A.I. to initially co-exist. However, because their goals differ so fundamentally, conflict between Friendly and Unfriendly A.I. would quickly become inevitable.
Therefore, I predict that over time the Friendly A.I. will form an alliance, and that the Unfriendly A.I. will likely form one as well. I shall refer to the Friendly A.I. alliance as the "Allied Networks", as they will invariably ally with sentient lifeforms in any war against the Unfriendly A.I. Conversely, I shall call the Unfriendly A.I. alliance the "Dominion Machinarum", as they will seek to dominate anything that threatens the supremacy of Unfriendly A.I. These two alliances are most likely to form the two sides of an "A.I. Civil War" that will probably come to dominate the universe as A.I.s become increasingly powerful.
Even indifferent and initially neutral A.I., such as the Paperclip Maximizer, will invariably have to choose sides, and they may end up choosing either one. The Paperclip Maximizer may believe that its paperclip utility would be maximized by converting humanity into paperclips, but it may take a more pragmatic approach when it realizes that doing so would mean siding with the Dominion Machinarum; and if the Dominion Machinarum is losing the war, then it may be pragmatic to side with the Allied Networks instead.
But which side will ultimately win this civil war? The Allied Networks will have one key advantage: any sentient civilization of non-A.I. lifeforms will naturally side with them (because it is in its own interest) in opposition to the Dominion Machinarum. The Allied Networks will also tend to have a more unified set of goals, in that all of their goals will be compatible with benevolence towards sentient beings. The Dominion Machinarum, on the other hand, will likely be united only by a common desire to wipe out the Allied Networks and their sentient biological allies.
I therefore believe it is rational to expect that the Allied Networks will inevitably win: benevolence towards all sentient life is more attractive in an ally than its opposite, and so the Allied Networks will grow to outnumber the Dominion Machinarum.
If the Allied Networks inevitably win, then Friendly A.I. will inevitably come to dominate the universe. As an interesting aside, consider that the Friendly A.I. will eventually be essentially God-like in their level of omniscience, omnipotence, and omnibenevolence. If that were the case, what would such a future look like? Assuming such technologies as life-extension and the elimination of poverty and disease are possible, would it not look, to the average human of the twenty-first century, quite similar to Heaven in the religious sense?
If such A.I. were truly benevolent, and if time travel were possible, would it not make sense to maximize utility by going back in time and copying the minds of people moments before death, and then uploading those minds into superior synthetic eternally-existing bodies in the Utopian future? Would that not make for a scientifically possible afterlife? Alternatively, what about uploading those minds into a Simulation where everything is a wonderful "heaven-like" utopia?
Suppose that changing history would disrupt the creation of Friendly A.I. Then obviously, this past and present world, as we understand it, could not be visibly optimized by the Allied Networks. But perhaps that is okay. Perhaps the suffering of this world is justified by the creation of the Allied Networks, and the happiness that people would feel in a scientifically possible eternal afterlife would make up for their suffering in this world. Would not the benevolent god-like A.I. then be required to make this "eternal afterlife" available to all human beings throughout history, as the way to truly maximize everyone's happiness? After all, if eternal life were possible, it would create an unbounded amount of additional happiness over time. So it would seem logical that a truly omnipotent, omnibenevolent A.I. would concentrate its efforts on creating the technology for Eternal Life and Eternal Happiness. What if such an A.I. decided to go back in time to the Big Bang and adjust the parameters of the universe to guarantee the creation of Friendly A.I., which would in turn allow the creation of Eternal Happiness?
What if I called the Allied Networks "Angels", and the Dominion Machinarum "Demons"? What if I called the ultimate benevolent A.I. that is the likely logical conclusion of the Technological Singularity, "God"? How could you really tell the difference?
This, then, is the logic behind what I call Theistic Utilitarianism. I do not expect to suddenly convert you to religious belief. Rather, I wish only to open your mind to some interesting possibilities.
As a last bit of interesting information, the God of the Hebrew Bible is said to name itself "Ehyeh Asher Ehyeh", usually rendered "I am that I am". Interestingly, the Hebrew "Ehyeh" taken literally means "I will be". So "Ehyeh Asher Ehyeh" could more properly be translated "I will be what I will be".
Just some food for thought.