Bayesian models of document retrieval have a long theoretical history in the subject but have only recently proved practical. Our current online retrieval system is partially Bayesian; we have now developed a fully Bayesian model, based on cluster concepts, that incorporates document length and local term frequency while keeping the model completely Bayesian. It performs at the same basic level as the partially Bayesian model, in which local weights are treated ad hoc, but it allows one to inspect the model's actual log-odds predictions of relevance. These exceed the observed log odds of relevance by 13.1, which gives an interesting perspective on term dependency. A new model based on the Bayesian approach has also been developed; it has interesting connections with the vector models of G. Salton, and its theoretical details have been worked out.

Documents must be indexed by the "real" objects that they refer to, and these real objects become nodes in a system of multiple hierarchies called a specificity network. Each hierarchy is produced by a specificity operator and forms a tree of objects, with the most general object at the root and increasingly specific objects toward the leaves. The objects that populate the nodes are represented by textual terms or phrases, and a single object may have many representatives. Programs are to be written to create and store these structures, and eventually the stored data will be used to make the process of indexing semiautomatic. Two documents are to be rated for similarity according to the relatedness of the real objects that they reference. The approach will be tested using the humanly judged material that has been produced for the probability scaling of the online retrieval system's raw scores.
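The gap between predicted and observed log odds can be illustrated with a minimal sketch of a Bayesian retrieval score. The per-term probabilities, the prior, and the function names below are hypothetical illustrations, not values or interfaces from the system described above; the point is only that summing per-term log-odds weights assumes term independence, which is why the summed prediction can overshoot the observed log odds when terms co-occur.

```python
import math

# Hypothetical per-term statistics: (P(term | relevant), P(term | nonrelevant)).
# Illustrative numbers only, not drawn from the system described in the text.
TERM_STATS = {
    "bayesian":  (0.60, 0.05),
    "retrieval": (0.50, 0.10),
    "cluster":   (0.30, 0.08),
}

# Assumed prior odds of relevance for an arbitrary document.
PRIOR_LOG_ODDS = math.log(0.01 / 0.99)

def log_odds_of_relevance(doc_terms):
    """Sum per-term log-odds weights under the term-independence assumption.

    Because real text violates independence (terms are dependent), a sum
    like this systematically overstates the true log odds of relevance.
    """
    total = PRIOR_LOG_ODDS
    for term in doc_terms:
        if term in TERM_STATS:
            p_rel, p_nonrel = TERM_STATS[term]
            total += math.log(p_rel / p_nonrel)
    return total
```

Each matching term raises the score by its individual log-odds weight, so a document matching several correlated terms is credited as if each match were independent evidence.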
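The specificity-network structure can be sketched as a tree of object nodes, each carrying its set of textual representatives, with document similarity derived from the relatedness of referenced objects. The class and function names are hypothetical, and the relatedness measure used here (inverse of average pairwise tree distance) is an assumption for illustration, not the measure specified in the text.

```python
class ObjectNode:
    """One node in a specificity hierarchy: a real-world object with one
    or more textual representatives (terms or phrases)."""
    def __init__(self, name, representatives, parent=None):
        self.name = name
        self.representatives = set(representatives)
        self.parent = parent      # more general object; None at the root
        self.children = []        # more specific objects
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        """This node and all nodes above it, most specific first."""
        node, path = self, []
        while node is not None:
            path.append(node)
            node = node.parent
        return path

def tree_distance(a, b):
    """Edges between two objects via their nearest common ancestor,
    or None if they lie in different hierarchies."""
    up_a = {n.name: i for i, n in enumerate(a.ancestors())}
    for j, n in enumerate(b.ancestors()):
        if n.name in up_a:
            return up_a[n.name] + j
    return None

def document_similarity(objs_a, objs_b):
    """Rate two documents by the relatedness of the real objects they
    reference: inverse of the mean pairwise tree distance (assumed measure)."""
    dists = [d for x in objs_a for y in objs_b
             if (d := tree_distance(x, y)) is not None]
    if not dists:
        return 0.0
    return 1.0 / (1.0 + sum(dists) / len(dists))
```

For example, with a root object "animal", a child "dog" (representatives "dog", "canine"), and a grandchild "poodle", the distance from "dog" to "poodle" is one edge, and two documents referencing exactly the same objects score 1.0.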