What are the best techniques for recognizing content topics and building recommendations?
Answers
1) Look through posts and tally which "side-topics" show up the most within the last week, then rank them (a minimal counting sketch follows this list). The top two side-topics for each group will be the most promising ones to suggest to that group in the future. You'll basically be making a 'word cloud', like this: https://www.jasondavies.com/wordcloud/
2) To test your new hypothesis, occasionally start making your own posts on the groups with links to articles about the most promising side-topics you've identified. Either make brand new posts, or cross-post from other groups. See how popular the posts are.
3) If they don't catch on, move to the next most popular side-topic and do some tests with that.
4) Repeat steps 1 - 3 (each time you do step 1, use the most recent week of posts)
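Here is a minimal sketch of the tallying in step 1, in Python; the group names, side-topic labels, and the recent_mentions structure are hypothetical stand-ins for whatever you pull out of the last week of posts:

```python
from collections import Counter, defaultdict

# Hypothetical input: one entry per side-topic mention spotted in a group's
# posts over the last week. In practice you would build this by reading the posts.
recent_mentions = [
    ("gardening_group", "composting"),
    ("gardening_group", "composting"),
    ("gardening_group", "raised beds"),
    ("cooking_group", "fermentation"),
    ("cooking_group", "composting"),
    ("cooking_group", "fermentation"),
]

tallies = defaultdict(Counter)
for group, topic in recent_mentions:
    tallies[group][topic] += 1

# The top two side-topics per group are the most promising ones to suggest next.
for group, counter in tallies.items():
    print(group, counter.most_common(2))
```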
If you'd like more detailed advice on how to do this, and test its effectiveness with regard to your specific groups, let me know,
best,
Lee
1. To tackle the problem of extracting relevant topics or "key-words" from a text (posts, tags, conversation), a simple NER (Named Entity Recognition) system can be used. It can create a list of all the relevant topics in the text.
2. Once you have the list, it can be fed into another DL algorithm: a recommendation system.
The challenge with a recommendation system is that it requires a lot of pre-training data and substantial resources to train. (This is only viable if you already have a good amount of tagged data and have no constraints on using bigger resources.)
3. Another simpler, faster, but less accurate way is to use a clustering model (ML or DL, depending on the data). This approach creates multiple clusters and tags each element in the NER list with one of the clusters.
Then, using a distance measure within each cluster, you can find the topics most closely related to each element in the NER list (a rough sketch follows this list).
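A rough sketch of steps 1 and 3 in Python, assuming spaCy and scikit-learn are installed; the sample posts, the en_core_web_sm model, and the cluster count are placeholders, and noun chunks are used as a simple stand-in when posts contain few true named entities:

```python
# pip install spacy scikit-learn && python -m spacy download en_core_web_sm
import spacy
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances

posts = [
    "Great article on composting and soil health from the community garden.",
    "Anyone tried fermentation for sourdough? Sharing my starter recipe.",
    "Raised beds vs. containers: what works best for small balconies?",
]

# Step 1: NER-style extraction of candidate topics from the post text.
nlp = spacy.load("en_core_web_sm")
topics = set()
for doc in nlp.pipe(posts):
    topics.update(ent.text.lower() for ent in doc.ents)
    topics.update(chunk.text.lower() for chunk in doc.noun_chunks)
topics = sorted(topics)

# Step 3: cluster the extracted topics, then use within-cluster distances
# to find the topics most closely related to each one.
vectors = TfidfVectorizer().fit_transform(topics)
labels = KMeans(n_clusters=2, random_state=0).fit_predict(vectors)
distances = pairwise_distances(vectors)

for i, topic in enumerate(topics):
    neighbours = [j for j in range(len(topics)) if labels[j] == labels[i] and j != i]
    nearest = sorted(neighbours, key=lambda j: distances[i, j])[:2]
    print(topic, "->", [topics[j] for j in nearest])
```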
If you'd like more details on the approach let me know.
Regards,
Deepesh
Related Questions
-
How can I aggregate data from online sources about a specific topic?
There are so many ways to do it... Do you need this data for yourself, or are you planning to make a product around it? From what I see, you can use the Twitter API and the Facebook Graph API (are you comfortable programming?). Most students are active on social media, so you will find lots of data. The Facebook Graph API will give you the number of likes and comments on all of your competitors' posts, so you can analyze everything they publish. Using the Twitter API, you can get all the tweets that use certain hashtags or mentions.

If you are not into coding but still want social media information, you can look at tools like IBM Watson Analytics ($30 for personal use); it natively connects to the Twitter API and you don't have to be a programmer at all. It is intuitive and easy to learn. Analytics Canvas connects to the Facebook Graph API (free for a 30-day trial). Unfortunately, you would not be able to collect personal information from social media at large scale (age, income, gender, etc.), because it violates privacy laws; you can use census data instead.

Google Sheets is a very handy tool if you are planning to use this information for personal research: you can set up a spreadsheet and add some JavaScript to make it collect information from competitors' blogs and from sites like Reddit.

Finally, you can try web scraping (it's not the best, but it can speed up the process). A tool like OutWitHub will collect information from websites (such as reviews) based on the structure you provide (selected HTML tags). You can collect thousands of reviews in one day if you automate it (paid version), and it is very easy to use. Note: not all websites are open to this method; review their policies to make sure you are not violating their terms of service, since reviews belong to the website where they were published.

If you REALLY need personal data (like how much people earn and how much they spend), just print out 100 questionnaires and go to the Student Union Building of Dalhousie University. Most students will share personal data in exchange for a Tim Hortons gift card that gets them a free coffee. It is probably the least technical and fastest way to get the data you need. Hope this helps.
OT
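A minimal scraping sketch in Python, in the spirit of the OutWitHub suggestion above, assuming requests and BeautifulSoup are installed; the URL and the CSS selector are placeholders you would replace with the target site's real structure, after checking its terms of service:

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

# Placeholder URL and selector: point these at a page whose terms of service
# allow scraping, and adjust the selector to match that page's actual HTML.
url = "https://example.com/"
response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
reviews = [tag.get_text(strip=True) for tag in soup.select("div.review-text")]

for review in reviews:
    print(review)
```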
-
Which recommendation system is best for a content website that we can easily integrate with an asp.net project?
"content" and asp.net aren't specific enough to answer this question as is.CW
-
I am writing a book on artificial intelligence. What are the biggest challenges that you have had applying AI to your business or to clients?
I have written short blog posts on how AI can help you run your business more intelligently, with less human effort and more automation. You can check them here: 1) https://www.linkedin.com/pulse/how-develop-ai-app-like-siri-shail-sanghvi/ 2) https://www.linkedin.com/pulse/how-develop-chatbot-importance-shail-sanghvi/
SS
-
What are the best traits to look for in a data scientist/data analyst?
First off, I have several people I could introduce. I'd also like to know the industry you're operating in, what the data looks like, where it comes from, and how much it needs to be cleaned up if put into a relational database, or whether the better solution would be a document store like MongoDB where you don't necessarily need to normalize the data. I'd also want to know whether you're a startup, or whether the company is well established with many existing customers and this is for a new initiative.

Assuming you're working with a relational database, which it sounds like you are, you will want to implement something like Tableau or build out a custom dashboard using Google Charts, Highcharts, D3.js, or any number of other dashboarding/visualization solutions, which usually involves some programming/scripting in JavaScript. There are paid solutions like Tableau (which is amazingly powerful), and there are free/open-source options. I'd be happy to talk about possible ways to architect the solution, and to figure out who you would need, once I understand the variables more.

If you're building a web application, then you will likely need someone who is also a full-stack developer, meaning they can handle building the back end and the front end in addition to the data requirements. Many early startups choose Ruby on Rails (because there is a ton of open-source code out there for it) with a modified Twitter Bootstrap, and in order to visualize the data they will need to work with JavaScript. It makes sense to have this person act as the initial product owner and derive the insights from the data; they're pretty much the only person who can do this anyway, because they're the ones closest to the data.

If you're in an early-stage startup, I would recommend the strongest business owner (usually the early-stage startup's CEO) be directly involved with this person in communicating what value your solution brings to clients and what they pay you for, and in brainstorming potential features and reports. Once the solution becomes established and many customers start using it directly, there should be a different product person interfacing with those customers over time, gathering feature requests and bringing them back to the Data Scientist/Analyst, who spends their time working on the data.

Depending on whether the solutions are SQL, NoSQL, or hybrid, there are different types of data science professionals you should consider:
1. Data Scientist
2. Data Engineer
3. Data Modeler/Analyst

1. The Data Scientist handles experimenting with the data and is able to prove statistically significant events and statistically valid arguments. Normally, this person would have modeling skills with Matlab, R, or perhaps SAS, and they should also have some programming/scripting skills with C++ or Python. It really depends on your whole environment and the flow of data. In my experience, Data Scientists who exclusively use SAS are sometimes extremely skilled PhD-level statisticians focused exclusively on the accuracy of the models (which is okay), but often not sufficiently skilled to fit within an early startup's big data environment in today's world and handle all of the responsibilities described in your question. I am not bad-mouthing SAS people, as they are often the MOST talented mathematicians and I have a great deal of respect for their minds, but if they do not have the programming skills, they become isolated within a group without a Data Engineer helping them along.
Often a SAS user trying to fit into this environment will force you to use a stack of technologies that a skilled Data Architect would not recommend. It takes programming in some object-oriented language to fit into today's big data environments, and the better Data Scientists are using hybrid functional/OOP languages like Scala. The rarest, hardest-to-find Data Scientists can also work with graph databases like Neo4j, Titan, or Apache Giraph.

2. The Data Engineer: if you're dealing with a firehose of data like Twitter and capturing it into a NoSQL architecture, this is the person who prepares the data for the Data Scientist to analyze. They are often capable of using machine learning libraries like WEKA to transform data, or techniques like MapReduce on Hadoop.

3. The Data Modeler/Analyst is someone who can use a tool like SAS, SPSS, Matlab, or even R, and is probably a very strong advanced Excel user, but likely won't be a strong programmer, although perhaps they will have a computer science degree and some academic programming experience.

The most important thing to watch out for is someone who is too academic and has not proven they can deliver a solution in the real world. This will really screw you up if you're a startup, and could be the reason you fail; often, the startup will run out of money due to the time it takes to deliver a complete solution or, in the startup's case, a minimum viable product. Ask for examples of their work, and specifically dig into what it is that they did for that solution.

I've tried to cover a pretty broad range of possibilities here, but it's best to talk in specifics, and I'd be happy to discuss this with you in detail. To answer your question: it is perfectly reasonable for one person to handle all of the responsibilities described in your question, if you find the right type of person with the appropriate skills and a history of success.
SE
-
If you were to build a freelance marketplace for data scientists and data analysts, what kind of companies and projects would you target?
It's unlikely that companies would look to outsource such a critical component, and it would also be near impossible to create trust around third parties accessing their data, especially via an intermediary service.
TW