Abstract
In this paper we present a concept for the abstract modeling of output render components. We illustrate how this categorization enables previously unknown output modalities to be integrated coherently into the multimodal presentations of the EMBASSI dialog system. We present a case study and conclude with an overview of related work.