So, if your idea of a bad legacy feature that came to C++ from C is the existence of both the struct and class keywords, I would advise studying programming languages more before hoping to write a good one yourself.
(There are much, much worse things that came to C++ via C backwards compatibility.)
I can give some critiques of your original plan. In large applications, the ideal situation is that you can work out what each module depends upon. This matters because you want to pull in only the subset of your total collection of libraries that you actually need.
Another issue is that splitting things into these three categories -- OS provided, 3rd party installed with the operating system, and internal -- betrays a pretty UNIX-esque mindset. Similarly, no local dynamic library support?
Another big issue is being able to compile for the state that existed in the past instead of the present -- that is, with a different set of library bindings than your system presumes. Sometimes you even need to do two builds at once against different "3rd party" libraries. There is also compiling without being root, while still wanting to install 3rd party libraries.
Resources become interesting, because either they are just files that get "magically compiled in", or they require a rich programming language themselves to describe how they become data (which is how many low-level languages handle it -- it becomes vendor-specific magic), or they tie you to specific formats. Resource compilation doesn't use additional tools just for kicks.
Note that release vs debug is an example of a situation where you want the same source tree to build two completely different things. Which means your YAML files need to generate different compiler options based on what arguments from "higher up the chain" are being passed down.
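As a sketch of what that implies (the field names below are invented, not your actual format), a module description would have to select options based on an argument forwarded from its parent:

```yaml
# Hypothetical module description -- names are invented for illustration.
module: foo
sources: [foo.cpp]
options:
  debug:                    # chosen when the parent passes build_type=debug
    flags: [-O0, -g, -DDEBUG]
  release:                  # chosen when the parent passes build_type=release
    flags: [-O2, -DNDEBUG]
dependencies:
  - module: bar
    forward: [build_type]   # pass the same argument further down the chain
```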
I don't see much in the way of detail on splitting interface from implementation. I presume you are going with some kind of module-based system, where the source code declares its interface, as opposed to C/C++ textual substitution?
Another problem lies with huge codebases, where a namespace ends up being rather crowded. Being able to (say) split your interface from your implementation would be nice, but under your system that requires that your implementation be in a different namespace.
I'd want to back up from the tool chain for a moment, and think about the build process.
The ideal build process, to me, consists of you sitting down at a virgin machine, and saying "I want to build foo".
You go to your source code repository, and you query for foo. You want to be able to ask foo "what else do you need?", and be able to recursively get everything that foo needs.
It would be difficult under your system of organization to make foo's needs be more fine grained than the level of "entire namespaces". Foo doesn't need all of boost -- foo needs a particular class from boost, and everything required to work with that class. I suppose in your system, you'd require boost::shared_ptr, which would be its own namespace? Or you'd have to solve the dependency problem (what files do you depend on) using a completely orthogonal system. Which would be a shame.
In my ideal build chain, working out what a given module requires should require a relatively quick parsing of the module's files (which could be cached), not a full compilation. And from what a given module requires, you can work out what other modules it requires.
There are a few kinds of dependency in my prior experience with something like the above. There are interface dependencies and build dependencies: you can depend on the other project being fully built first, or you can merely require access to its interface. (I.e., you might use something it defines in its interface for interoperability, but never actually call its functions.) There is also static-library versus dynamic-library dependency, which behave differently. (And, for development and other reasons, dynamic dependency must be resolvable below the OS level: you need to be able to test your dynamic libraries without trashing your system. In a large project, dynamic libraries let multiple executables share the same binary code, allow delay loading of code, and allow modular things like tools that you load at run time, possibly produced by 3rd parties.)
If there isn't an explicit separation between modules and namespaces, the above doesn't work as well. Organization-wise, it seems much nicer for (say) the tools of the boost library to be in namespace boost, rather than all in sub-namespaces of boost. At the same time, the modules (in C++, header files) are distinct, and wanting one of them doesn't mean you want all of them. I suppose forcing each of your modules into its own namespace is an option, but be careful when you make decisions like this -- it might be overly awkward.