Inductive machine learning aims to develop tools and techniques to induce models from observations and to synthesize new knowledge from experience. Different knowledge representation mechanisms, such as Bayesian networks (BN) and probabilistic first-order theories (PFOT), give rise to different learning algorithms. Most of them use only the vocabulary provided explicitly in the dataset to construct the models. However, hidden information about domain objects can enrich the learned models, sometimes making them more efficient and/or more compact. It is therefore interesting to use techniques that automatically extend the initial vocabulary with new structures representing that hidden information. In BN, these structures are known as hidden variables; in statistical relational learning, as invented predicates. Additionally, an approximately correct initial model may be provided to the learning algorithm, a particular learning setting known as theory revision. Theory revision reduces the search space of possible models by proposing modifications only at points of the model indicated by the examples, called revision points, through mechanisms called revision operators. In this thesis, we investigate the benefits of vocabulary extension when revising probabilistic models. To do so, we adopt a discriminative approach: the query variables (in BN) / target predicates (in PFOT) and their Markov blankets are the revision points. Revision operators that we propose, based on the introduction of new structures, are then applied to these revision points. Our proposals were successfully applied to artificial and real-world datasets.
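
To make the notion of revision points concrete, the following minimal Python sketch (not from the thesis; the graph representation and all names are hypothetical) computes the Markov blanket of a query variable in a toy Bayesian network: its parents, its children, and its children's other parents, which is the region of the model the discriminative approach above would consider for revision.

```python
def markov_blanket(parents: dict[str, set[str]], query: str) -> set[str]:
    """Return the Markov blanket of `query` in a BN given as a
    child -> set-of-parents map: parents, children, and co-parents."""
    # Children: every node that lists `query` among its parents.
    children = {v for v, ps in parents.items() if query in ps}
    # Co-parents: the other parents of those children.
    co_parents = {p for c in children for p in parents[c]} - {query}
    return parents.get(query, set()) | children | co_parents

# Toy network: Cloudy -> Sprinkler, Cloudy -> Rain,
#              Sprinkler -> WetGrass, Rain -> WetGrass
parents = {
    "Sprinkler": {"Cloudy"},
    "Rain": {"Cloudy"},
    "WetGrass": {"Sprinkler", "Rain"},
}
print(markov_blanket(parents, "Rain"))  # {'Cloudy', 'WetGrass', 'Sprinkler'}
```

With "Rain" as the query variable, the blanket contains its parent ("Cloudy"), its child ("WetGrass"), and the child's other parent ("Sprinkler"); a revision operator would then restrict its modifications to this neighbourhood rather than the whole network.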