Shaojun Wang is a faculty member at the Kno.e.sis Center. His research interests are statistical machine learning, natural language processing and cloud computing. He directs the Machine Learning and Natural Language Processing Lab, whose research projects are funded by NSF, AFOSR, DoD and Google, all emphasizing scalable, parallel/distributed approaches to processing extremely large datasets.

Interests

  • Statistical machine learning, natural language processing and cloud computing

Projects

  • Large scale distributed syntactic, semantic and lexical language models
  • Scalable semi-supervised discriminative structured prediction
  • Direct loss minimization for classification and ranking problems (see the sketch after this list)
  • Deep learning for natural language processing
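
To give the flavor of the direct loss minimization project, here is a minimal, hypothetical Python sketch (not the lab's published algorithm): it tunes the weights of a linear scorer by random search on precision@k itself, rather than on a smooth surrogate such as hinge or logistic loss. All data and names are invented for illustration.

    # Hypothetical sketch: directly optimizing a non-differentiable
    # ranking metric (precision@k) by random search over linear weights.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy ranking data: feature vectors and binary relevance labels.
    X = rng.normal(size=(100, 5))
    true_w = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
    y = (X @ true_w + 0.5 * rng.normal(size=100) > 0).astype(int)

    def precision_at_k(w, X, y, k=10):
        """Precision@k of the ranking induced by scores X @ w."""
        order = np.argsort(-(X @ w))          # indices sorted by descending score
        return y[order[:k]].mean()            # fraction of relevant items in top k

    # Search on the metric itself instead of a differentiable surrogate.
    best_w, best_p = None, -1.0
    for _ in range(2000):
        w = rng.normal(size=5)
        p = precision_at_k(w, X, y)
        if p > best_p:
            best_w, best_p = w, p

    print(f"best precision@10 = {best_p:.2f}")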

Research lab

  • Machine Learning and Natural Language Processing Lab

Teaching

  • CS 714 Machine Learning, Fall 2013 (Wright State)
  • CS 790 Foundations of Machine Learning, Spring 2013 (Wright State)
  • CS 409/609 Principles of Artificial Intelligence, Fall 2012 (Wright State)
  • CS 771 Natural Language Processing Techniques, Winter 2010 (Wright State)

Education

  • Ph.D. in Electrical Engineering, University of Illinois at Urbana-Champaign
  • M.S. in Mathematics, University of Illinois at Urbana-Champaign
  • M.S. in Electrical Engineering, Tsinghua University, Beijing
  • B.S. in Electrical Engineering, Tsinghua University, Beijing

Publications

    Under review

    • Semi-supervised CONTRAfold for RNA secondary structure prediction: A maximum entropy approach
      M. Tan, J. Feng, S. Wang and M. Raymer. [pdf]

    2013

    • Direct 0-1 loss minimization and margin maximization with boosting
      S. Zhai, T. Xia, M. Tan and S. Wang
      Advances in Neural Information Processing Systems, NIPS-2013, [pdf]
    • Consistency and generalization bounds for maximum entropy density estimation
      S. Wang, R. Greiner and S. Wang
      Entropy: Special Issue on Maximum Entropy and Bayes Theorem, Vol. 15, No. 12, pp. 5439-5463, 2013. [pdf]
    • Improving alignment of system combination by using multi-objective optimization
      T. Xia, Z. Ji, S. Zhai, Y. Chen, Q. Liu and S. Wang
      Conference on Empirical Methods in Natural Language Processing, EMNLP-2013, [pdf]
    • A corpus level MIRA tuning strategy for machine translation
      M. Tan, T. Xia, S. Wang and B. Zhou
      Conference on Empirical Methods in Natural Language Processing, EMNLP-2013, [pdf]
    • A robust semi-supervised boosting method using linear programming
      S. Zhai, T. Xia, M. Tan, S. Wang and P. Zhang
      IEEE GlobalSIP Symposium on Optimization in Machine Learning and Signal Processing, GlobalSIP-2013.
    • Direct optimization of ranking measures for learning to rank models
      M. Tan, T. Xia, L. Guo and S. Wang
      The 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD-2013, [pdf]

    2012

    • Extracting diverse sentiment expressions with target-dependent polarity from Twitter
      L. Chen, W. Wang, M. Nagarajan, S. Wang and A. Sheth
      The Sixth International Conference on Weblogs and Social Media, ICWSM-2012, [pdf]
    • A scalable distributed syntactic, semantic and lexical language model
      M. Tan, W. Zhou, L. Zheng and S. Wang
      Computational Linguistics, Vol. 38, No. 3, pp. 631-671, 2012, [pdf]
    • The latent maximum entropy principle
      S. Wang, D. Schuurmans and Y. Zhao
      ACM Transactions on Knowledge Discovery from Data (TKDD), Vol. 6, No. 2, 8:1-42, 2012. [pdf]
    • Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields
      S. Wang, S. Wang, L. Cheng, R. Greiner and D. Schuurmans
      Computational Intelligence, 2012.

    2011

    • A large scale distributed syntactic, semantic and lexical language model for machine translation
      M. Tan, W. Zhou, L. Zheng and S. Wang
      The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-2011. [pdf]

    2009

    • A rate distortion approach for semi-supervised conditional random fields
      Y. Wang, G. Haffari, S. Wang and G. Mori
      Advances in Neural Information Processing Systems, NIPS-2009. [pdf]
    • Monetizing user activity on social networks - challenges and experiences
      M. Nagarajan, K. Baid, A. P. Sheth, and S. Wang
      The IEEE/WIC/ACM International Conference on Web Intelligence, WI-2009.
    • Information theoretic regularization for semi-supervised boosting
      L. Zheng, S. Wang, Y. Liu and C. Lee
      The 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD-2009. [pdf]

    2008

    • Boosting with incomplete information
      G. Haffari, Y. Wang, S. Wang, G. Mori and F. Jiao
      The 25th International Conference on Machine Learning, ICML-2008. [pdf]
    • Segmenting brain tumors using pseudo-conditional random fields
      C. Lee, S. Wang, A. Murtha, M. Brown and R. Greiner
      The 11th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI-2008. [pdf]
    • Constrained classification on structured data
      C. Lee, M. Brown, R. Greiner, S. Wang and A. Murtha
      The 23rd AAAI Conference on Artificial Intelligence, AAAI-2008.
    • Unsupervised discovery of compound entities for relationship extraction
      C. Ramakrishnan, P. Mendes, S. Wang and A. Sheth
      The 16th International Conference on Knowledge Engineering and Knowledge Management: Knowledge Patterns, EKAW-2008.

    2007

    • Learning to model spatial dependency: Semi-supervised discriminative random fields
      C. Lee, S. Wang, F. Jiao, D. Schuurmans and R. Greiner
      Advances in Neural Information Processing Systems, NIPS-2007. [pdf]
    • Implicit online learning with kernels
      L. Cheng, S. Vishwanathan, D. Schuurmans, S. Wang and T. Caelli
      Advances in Neural Information Processing Systems, NIPS-2007. [pdf]

    2006

    • Almost sure convergence of Titterington's recursive estimator for finite mixture models
      S. Wang and Y. Zhao
      Statistics & Probability Letters, Vol. 76, No. 18, pp. 2001-2006, December 2006. [pdf]
    • Stochastic analysis of lexical and semantic enhanced structural language model
      S. Wang, S. Wang, L. Cheng, R. Greiner and D. Schuurmans
      The 8th International Colloquium on Grammatical Inference, ICGI-2006. [pdf]
    • Semi-supervised conditional random fields for improved sequence segmentation and labeling
      F. Jiao, S. Wang, C. Lee, R. Greiner and D. Schuurmans
      The Joint 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, COLING/ACL-2006. [pdf]
    • Using query-specific variance estimates to combine Bayesian classifiers
      C. Lee, R. Greiner and S. Wang
      The 23rd International Conference on Machine Learning, ICML-2006. [pdf]
    • An online discriminative approach to background subtraction
      L. Cheng, S. Wang, D. Schuurmans, T. Caelli and S. Vishwanathan
      IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS-2006. [pdf]

    2005

    • Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields
      S. Wang, S. Wang, R. Greiner, D. Schuurmans and L. Cheng
      The 22nd International Conference on Machine Learning, ICML-2005. [pdf]
    • Variational Bayesian image modelling
      L. Cheng, F. Jiao, D. Schuurmans and S. Wang
      The 22nd International Conference on Machine Learning, ICML-2005. [pdf]
    • Combining statistical language models via the latent maximum entropy principle
      S. Wang, D. Schuurmans, F. Peng and Y. Zhao
      Machine Learning Journal: Special Issue on Learning in Speech and Language Technologies, Vol. 60, pp. 229-250, 2005. [pdf]

    2004

    • Learning mixture models with the regularized latent maximum entropy principle
      S. Wang, D. Schuurmans, F. Peng and Y. Zhao
      IEEE Trans. on Neural Networks: Special Issue on Information Theoretic Learning, Vol. 15, No. 4, pp. 903-916, 2004. [pdf]
    • Augmenting naive Bayes text classifier using statistical n-gram language modeling
      F. Peng, D. Schuurmans and S. Wang
      Information Retrieval, Vol. 7, No. 3-4, pp. 317-345, 2004. [pdf]

    2003

    • Learning continuous latent variable models with Bregman divergence
      S. Wang and D. Schuurmans
      The 14th International Conference on Algorithmic Learning Theory, ALT-2003. [pdf]
    • Boltzmann machine learning with the latent maximum entropy principle
      S. Wang, D. Schuurmans, F. Peng and Y. Zhao
      The Nineteenth Conference on Uncertainty in Artificial Intelligence, UAI-2003. [ps]
    • Learning mixture models with the latent maximum entropy principle
      S. Wang, D. Schuurmans, F. Peng and Y. Zhao
      The 20th International Conference on Machine Learning, ICML-2003. [ps]
    • Semantic n-gram language modeling with the latent maximum entropy principle
      S. Wang, D. Schuurmans, F. Peng and Y. Zhao
      IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP-2003. [ps]
    • Language and task independent text categorization with simple language models
      F. Peng, D. Schuurmans, S. Wang
      North American Chapter of the Association for Computational Linguistics, NAACL-2003.
    • Language independent authorship attribution with character-level n-gram language modeling
      F. Peng, D. Schuurmans, S. Wang
      The 10th Conference of the European Chapter of the Association for Computational Linguistics, EACL-2003.

    2002

    • The latent maximum entropy principle
      S. Wang, R. Rosenfeld, Y. Zhao and D. Schuurmans
      IEEE International Symposium on Information Theory, ISIT-2002. Abstract [ps], [ps]
    • Almost sure convergence of Titterington's recursive estimator for finite mixture models
      S. Wang and Y. Zhao
      IEEE International Symposium on Information Theory, ISIT-2002.
    • Predicting oral reading miscues
      J. Mostow, J. Beck, V. Winter, S. Wang and B. Tobin
      International Conference on Spoken Language Processing, ICSLP-2002.

    2001

    • On-line Bayesian tree-structured transformation of HMMs with optimal model selection for speaker adaptation
      S. Wang and Y. Zhao
      IEEE Trans. on Speech and Audio Processing, Vol. 9, No. 6, pp. 663-677, September 2001. [pdf]
    • Latent maximum entropy principle for statistical language modeling
      S. Wang, R. Rosenfeld and Y. Zhao
      IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU-2001.
    • Recursive estimation of time-varying environments for robust speech recognition
      Y. Zhao, S. Wang and K. Yen
      IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP-2001.

    2000

    • Optimal on-line Bayesian model selection for speaker adaptation
      S. Wang and Y. Zhao
      International Conference on Spoken Language Processing, ICSLP-2000.
    • On-line Bayesian speaker adaptation by using tree-structured transformation and robust priors
      S. Wang and Y. Zhao
      IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP-2000.

    1999

    • On-line Bayesian tree-structured transformation of hidden Markov models for speaker adaptation
      S. Wang and Y. Zhao
      IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU-1999.
    • A unified framework for recursive maximum likelihood estimation of hidden Markov models
      S. Wang and Y. Zhao
      The 33rd Annual Conference on Information Sciences and Systems, CISS-1999.

    1998

    • On convergence of maximum likelihood estimation of binary HMMs by EM algorithm
      M. Li, S. Wang and Y. Zhao
      The 32nd Annual Conference on Information Sciences and Systems, CISS-1998, pp. 1018-1024.

    1995

    • Short-term generation scheduling with transmission and environmental constraints using an augmented Lagrangian relaxation
      S. Wang, S. Shahidehpour, D. Kirschen, S. Mokhtari and G. Irisarri
      IEEE Trans. on Power Systems, Vol. 10, No. 3, pp. 1294-1301, August 1995. [pdf]
    • Probabilistic marginal cost curve and its applications
      S. Wang, S. Shahidehpour and N. Xiang
      IEEE Trans. on Power Systems, Vol. 10, No. 3, pp. 1321-1328, August 1995. [pdf]

    Manuscript

    • On determination of domains of convergence for the EM algorithm
      S. Wang and Y. Zhao
      [ps]

Talks

  • Direct loss minimization for classification and ranking problems. [Slides: [ps] [pdf]] Presented at Microsoft, 2013.
  • A scalable distributed syntactic, semantic and lexical language model. [Slides: [ps] [pdf]] Presented at Google, Microsoft and JHU, 2010-2011.
  • Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields. [Slides: [ps] [pdf]] Presented at the 22nd International Conference on Machine Learning, Bonn, Germany, 7-11 August 2005.
  • Learning continuous latent variable models with Bregman divergences. [Slides: [ps]] Presented at the 14th International Conference on Algorithmic Learning Theory, Hokkaido University, Sapporo, Japan, October 17-19, 2003.
  • The latent maximum entropy principle. [Slides: [ps]] Presented at the AMS/IMS/SIAM Joint Summer Research Conference on Machine Learning, Statistics, and Discovery, Snowbird, Utah, June 22-26, 2003.

Contact Information

  • Email: shaojun.wang(at)wright.edu
  • Office Phone: (937) 775-5140
  • Fax: (937) 775-5133
  • Home Page: www.cs.wright.edu/~swang
  • Mailing Address: Department of Computer Science and Engineering, Wright State University, 3640 Colonel Glenn Hwy., Dayton, Ohio 45435-0001
  • Office: 387, Joshi Center
