Mondeca software
Comprehensive suite of taxonomy, content tagging and knowledge navigation tools
Intelligent Taxonomy Manager
Easy management of a controlled vocabulary, taxonomy or ontology
Mondeca offers a comprehensive solution for taxonomy and ontology management as well as artificial-intelligence-based content tagging. Create your metadata repository with Intelligent Taxonomy Manager.
Key features
Multiple taxonomies
- Split data into workspaces
- Concept and term alignment
- Taxonomy-specific access rights
Multiple languages
- Any language and character set
- Alignment of taxonomies
- Translation management
Access rights
- Fine grained user profiles
- Integrates with enterprise IAM
Import / export data
- CSV, Excel
- Semantic standards: OWL, SKOS, XTM, and RDF serialized as RDF/XML, N-Triples, Turtle, N3, JSON-LD, or RDF/JSON (see the sketch below)
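As an illustration of what a SKOS import/export involves on the data side, here is a minimal, hedged sketch using the open-source rdflib library rather than ITM's own import code; the file names and vocabulary content are placeholders.

```python
# Minimal sketch: reading and re-serializing a SKOS taxonomy with rdflib.
# File names are placeholders; this is not ITM's import/export code.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("taxonomy.ttl", format="turtle")  # rdflib also reads RDF/XML, N-Triples, JSON-LD, ...

# List every concept with its preferred label.
for concept, label in g.subject_objects(SKOS.prefLabel):
    print(concept, "->", label)

# Re-serialize to another supported RDF syntax.
g.serialize("taxonomy.jsonld", format="json-ld")
```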
Browse
- Easy navigation
- End user views
- Graphical data representation
Query and search
- Advanced search UI
- Multicriteria search
- Saved search
- Search based imports & exports
Improve data
- Candidate & suggested terms
- Candidate term API
- Maintenance task assignment
Alert & track
- Event based alerts
- Event based data push
- Audit trail
- Activity dashboards
Open access to your data
Stop losing time building and searching for reference data. Establish a single source of truth for your teams, partners, and clients. Start simple and scale up to richer data representations as new challenges arise.
Content Auto-tagging manager
Speed up indexing work and free up quality time for assessment
Content Auto-Tagging Manager automates mundane, time-consuming indexing processes while allowing manual, qualitative review of machine-processed content.
Key features
Persistence
- Store your work in the application persistence layer to support iterations and fine-tuning of manual indexing work
Configuration
- Use existing templates for the creation of new resources (profiles, connectors, scripts, workflows, and engines)
- Edit and reload CAM central configuration without server restart
- View classification rules, RDF resources, and gazetteers
Integration
- Use the CAM REST web service to execute auto-tagging workflows from content-centric applications (see the sketch after this list)
- Connect to CMSs: SharePoint, Drupal, WordPress
- Search engines: Elasticsearch, Solr
- Graph databases
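As a rough illustration of calling an auto-tagging REST service from a content application, here is a hedged sketch using Python's requests library; the endpoint URL, workflow name, payload shape and response fields are placeholders, not CAM's actual API.

```python
# Hypothetical sketch only: the endpoint, payload and response fields below are
# placeholders, not the actual CAM REST API.
import requests

CAM_URL = "https://cam.example.com/api/annotate"  # placeholder endpoint

payload = {
    "workflow": "default-tagging",  # placeholder workflow name
    "text": "Acme Corp. announced a new taxonomy management platform.",
}

response = requests.post(CAM_URL, json=payload, timeout=30)
response.raise_for_status()

for annotation in response.json().get("annotations", []):  # placeholder field
    print(annotation)
```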
Workbench
- Execute, control and review results with informative, visual displays
- Modular display: select side by side panels or widgets when reviewing results
- Monitor and configure CAM directly from the administration UIs
Execute multiple analyses
- Analyze content using NLP powered by business taxonomies
- Classify content using SPARQL classification rules (see the sketch after this list)
- Train CAM with machine learning on a corpus of documents and apply ML at sentence, paragraph or document level
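To give a flavor of what a SPARQL-based classification rule can look like, here is a minimal sketch run with rdflib over a toy graph; the vocabulary, rule and category names are placeholders, not CAM's actual rule format.

```python
# Minimal sketch of a SPARQL "classification rule": documents tagged with any
# concept under a given branch are assigned a category. Names are placeholders.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

ex:doc1            ex:taggedWith ex:MachineLearning .
ex:MachineLearning skos:broader  ex:ArtificialIntelligence .
""", format="turtle")

rule = """
PREFIX ex:   <http://example.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?doc WHERE {
  ?doc ex:taggedWith/skos:broader* ex:ArtificialIntelligence .
}
"""

for row in g.query(rule):
    print(row.doc, "-> category: AI")
```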
Learn from content
- Analyze a corpus of several documents in one go
- Detect candidate terms
- Bulk extract terminology from a corpus with TF-IDF scores (see the sketch after this list)
- Compute precision/recall metrics at document or corpus level
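For illustration, here is a minimal sketch of TF-IDF-based terminology extraction using scikit-learn; this is an assumption about tooling rather than CAM's internal implementation, and the toy corpus is a placeholder.

```python
# Minimal sketch: rank candidate terms in a corpus by summed TF-IDF score.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Taxonomy management improves metadata quality.",
    "Auto-tagging applies taxonomy concepts to documents.",
    "Machine learning models classify documents by topic.",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# Sum scores over all documents and list the top-scoring terms.
scores = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda x: x[1], reverse=True)[:10]:
    print(f"{term}: {score:.3f}")
```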
Application Monitoring
- Track workload and activities from the user and administrator dashboards
- Server node activity and storage capacity details
Permissions & Security
- Editable and granular rights for profiles
- Optionally map permissions to Mondeca Identity and Access Management component
Machine learning
- Supports Google BERT
- Transformer models
- Neural deep learning
- Integrates spaCy library
How to use machine learning for content annotation
Unleash the power of the latest deep learning models
Solve your NLP tasks in CAM with state-of-the-art Transformer models like BERT
When to use
Use machine learning when the rules required to execute the task properly are not known or are too complex to formulate. You will also need an annotated data set with at least a hundred examples for each category.
What can you do with machine learning models
Classify sentences or documents, identify named entities or concepts, detect sentiment, and more.
How
Create and train your own model. Alternatively, use the transfer learning approach: take an existing pre-trained model, such as a Transformer, and retrain it for your task.
Use a Jupyter notebook to define and train Python ML models, for example a BERT-based TensorFlow model, as sketched below.
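Here is a minimal sketch of such a notebook cell, assuming the Hugging Face Transformers library with TensorFlow; the tiny in-line data set, label scheme and output directory are placeholders.

```python
# Sketch: fine-tuning BERT for text classification with TensorFlow.
# The in-line data set and label scheme are placeholders.
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

texts = ["Quarterly revenue grew by 12%.", "The patient reported mild symptoms."]
labels = np.array([0, 1])  # 0 = finance, 1 = healthcare (example categories)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tokenize to TensorFlow tensors with padding and truncation.
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dict(encodings), labels, epochs=3, batch_size=8)

# Save the fine-tuned model so it can be packaged for a serving environment.
model.save_pretrained("bert-finetuned")
tokenizer.save_pretrained("bert-finetuned")
```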
Train deep learning models on environments like Google Colab. Remember, training works best on a GPU-equipped server.
Once your model is ready, use the CAM AdminUI to import it and set up the serving environment.
Supported models
Gate plugins
- Naive Bayes, Maximum Entropy, and Decision Trees (MALLET)
- Linear-chain Conditional Random Fields (MALLET)
- Support Vector Machines (LibSVM)
Python models, including neural deep learning models
- scikit-learn
- spaCy
- TensorFlow
- PyTorch
- Transformer models: HuggingFace
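As a simple illustration of one of the supported libraries, here is a hedged sketch of named-entity recognition with spaCy; it is a generic spaCy example rather than CAM's integration code, and the model name and sample text are placeholders.

```python
# Minimal spaCy sketch: detect named entities in a sample text.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp. opened a new office in Paris in 2020.")

for ent in doc.ents:
    print(ent.text, ent.label_)
```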
Enjoy machine learning in an industrial-grade production setup
1. Select the algorithm
2. Get an annotated data set
3. Split into training and validation sets
4. Train using the training data set
5. Evaluate results using the validation set
6. Deploy to production
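A compact sketch of these steps with scikit-learn, assuming a simple text-classification task; the toy data set is a placeholder, and a real project would use at least a hundred examples per category, as noted above.

```python
# Sketch of the workflow above: select an algorithm, split annotated data,
# train, evaluate, then deploy separately. The toy data set is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["invoice due next month", "patient admitted to hospital",
         "quarterly earnings report", "new treatment approved"] * 25
labels = ["finance", "health", "finance", "health"] * 25

# Split into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Select the algorithm and train on the training set.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate results on the validation set before deploying to production.
print(classification_report(y_val, model.predict(X_val)))
```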
Knowledge Browser
Knowledge Browser is a web-based portal featuring powerful search capabilities combined with graphical visualizations. It provides intuitive, read-only access to broad audiences who need to visualize, search, navigate and browse collections of enterprise data.
Your Challenges
- Publish reference data using open standards and formats
- Enable internal/external access to enterprise data via any web browser
- Organize and control the type and amount of data shared with clients
- Capture client requests for terminology improvement
- Aggregate data sources to streamline integration with applications
Browse, search & find, explore & discover, share & publish your data
knowledge browser
Key features
Browse
- User-friendly, intuitive access to data
- Navigate flat, hierarchical or graph-based terminologies
Explore and discover
- Configurable home dashboard for quick access to predefined datasets
- Graph-based visualization
Search
- Search enhanced through smart options
- Auto suggestion and query interpretation
- Multicriteria search
Find
- Refine search results based on facets (properties, types and data sources)
- Sort results by relevance
Share & Publish
- Configurable web portal
- Connected to ITM and CAM
- Standards-based REST services
- Users can provide feedback, ask questions and download data
Move away from spreadsheet exchanges and think Graph
Let systems and targeted audiences take advantage of your enterprise data using semantic standards for data publishing and dissemination
Let’s talk about your project needs & goals
We will share with you how we can rapidly increase the performance and value of your taxonomy.
- Discuss your use cases and challenges
- Show relevant features and capabilities
- Agree on next steps