
What do data scientists do? The data science diaries

Definitions of what constitutes ‘data science’ are apt to differ according to the source you consult. IFoA President John Taylor has described the discipline as “a dynamic field that as soon as you try and pin down what it is today, it’s changed tomorrow”. This being the case, how does this apparently elusive definition reflect the actual job of data scientists? What goes into their daily roles and responsibilities, and how do data scientists differ from organisation to organisation?

First-hand accounts are the best way to gain insight into what any professional remit entails, and so the IFoA invited three data scientists who work in contrasting sectors to outline the basics of a typical working week. Their accounts reveal that, as well as being core members of multidisciplinary teams, data scientists have to be multidisciplinary individuals, adept at a range of data analysis, software and business skills.


Gousto: Irene Iriarte Carretero


Irene Iriarte Carretero is a data scientist at the UK’s largest recipe box company, Gousto*. Her work has focused on the development and implementation of data science products such as a menu recommendation engine and a forecasting algorithm that allows Gousto to predict the recipes customers will order, and to ensure that the company minimises food waste in its supply chain.

Monday

AM: I’m working from home today. It’s a great way to start the week, as I’m really able to focus on tasks that require more concentration. After a quick call with the team to finalise what we are working on this sprint, I do some research on how top companies are applying personalisation. There are some really interesting blog posts and papers – I summarise my thoughts so that I can share them later in the week. 

PM: I carry out some analysis to understand how our customers are interacting with their recommendations – we are introducing a new collaborative filtering method, and these findings will feed into how we end up implementing the algorithm.

Tuesday

AM: My calendar looks pretty clear today – I have time to focus on starting the implementation of the new algorithm. Luckily, the data we need is straightforward to obtain and is already pretty clean, so I can get straight to using it. Given that, as data scientists, we take ownership of the entire process, from ideation of the product to deployment, I need to ensure that my code is production-ready. I send my code to one of the Machine Learning Engineers in the team, who gives me some suggestions to make the code more efficient. Once I make these changes, I check that everything is working as expected in our testing environments before pushing it to production.
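The collaborative filtering approach mentioned here can be sketched in a few lines. This is a hypothetical item-based example over a toy binary customer–recipe order matrix, not Gousto’s actual algorithm:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two binary interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(matrix, customer, k=2):
    """Item-based collaborative filtering on a customers x recipes
    0/1 order matrix: score each unseen recipe by its similarity to
    the recipes the customer has already ordered."""
    n_items = len(matrix[0])
    columns = [[row[j] for row in matrix] for j in range(n_items)]
    seen = [j for j in range(n_items) if matrix[customer][j]]
    scores = {}
    for j in range(n_items):
        if matrix[customer][j]:
            continue  # already ordered; no need to recommend it
        scores[j] = sum(cosine(columns[j], columns[s]) for s in seen)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: 4 customers x 4 recipes (1 = has ordered)
orders = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
print(recommend(orders, customer=0))  # → [2, 3]
```

At production scale the matrix would be sparse and the similarity computation vectorised, but the idea – recommend recipes similar to those a customer already orders – is the same.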

Wednesday

AM-PM: A bit of a change of pace today. We are spending the day at an offsite location, where we will have several workshops to brainstorm the long-term vision for our menu page. Our team is cross-functional and includes members from the Design, Food, Software Engineering and Proposition teams, as well as Data Science, which ensures that we think about our ideas from different perspectives. As a data scientist, I think it’s easy to get too caught up in improving a model’s accuracy, so I always find it useful to have opportunities like this, where we can really think about how our products slot into the wider picture.

Thursday

AM-PM: It’s really important that we understand the impact of our products, so today I am focusing on analysing the results of an experiment we ran on the website, in which 50 per cent of our customers saw a slightly different presentation of their recommendations. We make sure that, as well as internal algorithm metrics, we deep-dive into more commercial metrics that we can easily communicate to stakeholders across the business, and that are more useful for understanding how our products affect our customers.
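Analysing a 50/50 experiment like this typically boils down to comparing conversion rates between the two arms. A minimal two-proportion z-test, with invented figures rather than Gousto’s real numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between control (a) and variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 5,000 customers per arm
z, p = two_proportion_z_test(conv_a=600, n_a=5000, conv_b=665, n_b=5000)
print(round(z, 2), round(p, 3))
```

With these made-up numbers the variant’s lift sits right around the conventional 5 per cent significance threshold – a good illustration of why the commercial read on an experiment matters as much as the statistical one.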

Friday

AM: After a much-needed coffee (it’s been a long week!), I have one-to-one meetings with the team and catch up with some outstanding emails.

PM: We have a team retrospective to discuss what has worked well and what improvements we can make to ensure we are working more effectively. Finally, I work on preparing a short presentation for our monthly Tech Showcase – it’s a great opportunity to share our work with the whole company over some drinks and nibbles.


*By volumes and revenues, based on third-party data from Reward Insight.


Centre for Environmental Data Analysis: Graham Parton


As a Senior Environmental Data Scientist at the Centre for Environmental Data Analysis (CEDA), Graham Parton curates observational data from its atmospheric community and ensures they are accessible and future-proofed for CEDA’s users. He also scopes the development of the content and structure of CEDA’s data cataloguing service, a data discovery tool and link to supporting materials. CEDA services are provided on behalf of the Natural Environment Research Council via the National Centre for Atmospheric Science and the National Centre for Earth Observation. CEDA is based within the RAL Space department of the Science and Technology Facilities Council.

Monday

AM: Start the week with a quick catch-up on the helpdesk where I can see our regular batch of users struggling to use the surface weather observations data from the National Met Agency. It’s brilliant data – it’s just that they need to cross reference the station metadata with the data itself before they can get on with their research. The open version of the data collection tool resolves most of those usability issues, which is helping new users to access these data… I’ll remind our Met Agency partner about this when we catch up about the upcoming new release of those data, as he’ll be pleased to see the rewards of his efforts there! The rest of the day I’ll get on with coding to improve our catalogue service. There are a few niggly bugs I reckon I can fix this afternoon.

Tuesday

AM: Focused on weekly ‘Data Management Plan’ (DMP) related tasks. Checking our internal DMP tool, I see a couple of new projects have come in from the latest Research Council funding round for me to get in touch with. I’ll look at their project details and their outline DMPs to figure out what data they may want to archive, and then make contact later; but, for now, I’ve got some new sample files from another project to look at. Hopefully, these will be an improvement on the last ones they sent, where the internal metadata was, well, a bit sparse to say the least!

Thankfully, though, I can refer them to our help documentation to steer them in the right direction to resolve those issues. I just hope they can find their notes about the instrument’s deployment last year – without the instrument calibration details, the data’s reusability could be questionable.

PM: Still not managed to look over those new projects that came in yesterday, as one of my ingest scripts encountered issues with the storage system; so I’ve had to spend most of the morning checking over the issues and getting the ingest restarted. But my diary is clear later this afternoon, so I should be able to fire off those introductory emails at last.

Wednesday

AM: Day of meetings. For starters, our developer group catch-up – usually tough for me, as I’m not a seasoned code developer, so some of the stuff is a bit difficult to follow. After that, a Google Hangout to catch up with my line manager to review the outstanding developer task lists for the data catalogue. Hopefully, with our new archive access database in place later this month, I’ll get the go-ahead to develop the catalogue service: this will ensure that up-to-date access and licence information can be fed through from our new tool. Then we can finally stop having to record this information in more than one place.

After that I’ve a 16:30 meeting with a colleague to see if she can help review our 70+ data licences to classify them under our new scheme. That would really help users to filter our 6,500+ datasets to find ones they can use for work purposes (e.g. commercial or personal use), rather than just assuming they’re for academic use only. This classification work is quite exciting, as it’s a new approach that is getting lots of interest across the international research data community – and not just for environmental data, either! If it gets adopted more widely, users will be able to do something akin to Google image searches, which allow searches to be limited by permitted uses.

Thursday

AM: Monthly group meeting. I hear about the wide range of work that our group is engaged with, from work on our cloud portals and the development of our high-performance data analysis system through to involvement with international metadata standards.

PM: Crack on with data management tasks. Before that, though, I’ll spend a bit of time checking our Elasticsearch index of the archive’s entire 200 million files for occurrences of some site names used in filenames, to aid a project’s file naming scheme. It would be like looking for the tiniest of needles in the mother of all haystacks if it weren’t for our Big Data tools to scan and index everything – but that’s what’s needed these days to manage such vast and diverse archives.
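A search like this might be expressed as an Elasticsearch bool query of wildcard clauses over a filename field. The index name, field name and site names below are hypothetical illustrations, not CEDA’s actual schema:

```python
def filename_query(site_names):
    """Build an Elasticsearch query matching files whose path contains
    any of the given site names (one wildcard clause per name)."""
    return {
        "bool": {
            "should": [
                {"wildcard": {"path": {"value": f"*{name.lower()}*"}}}
                for name in site_names
            ],
            "minimum_should_match": 1,
        }
    }

query = filename_query(["cardington", "chilbolton"])
# Against a live cluster this would run with the official client:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("https://example-cluster:9200")
#   hits = es.search(index="ceda-files", query=query, size=100)
```

Wildcard queries with leading `*` are expensive on huge indices; in practice a dedicated keyword sub-field or an n-gram analyser on the path would scale better, but the query shape is the same.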

Friday

AM-PM: Today was pretty full-on. One minute I was setting up new data extractions to pull forecast model data into the archive (a quick task, but one that needs regular checks, as not all extractions run smoothly), the next covering the helpdesk, helping users to find relevant data and sorting out their account issues. Then a brief Google Hangout with a developer to check on the adjustments to the new access control system, before doing a handful of data catalogue record reviews and DOI (Digital Object Identifier) minting, and finally getting a new dataset published after the last few weeks spent persuading the provider to actually follow the metadata guidelines. Thankfully I’ve had a quiet last hour of the week to catch up on my colleague’s blog post about data citation. It’s important stuff, essential for researchers to follow – otherwise, how can we rely on the science if we can’t find the underlying data that supports the results?



Bloom & Wild: Dave Marshall

Dave Marshall has been Lead Data Scientist at online florist Bloom & Wild for more than three years. The focus of his work is to use data to drive decision making in the company, and automate this wherever possible. This feeds into the main company objective of maximising the lifetime value of customers.

Monday

Make coffee first thing to get the day going, and then look into an experimental project we are running with a new product recommendation system, which personalises search to recommend the perfect bouquet for each customer. I discuss potential improvements to the experiment with my data analysis colleague, and discuss whether to roll out to all customers with another colleague, Bloom & Wild’s Retention Product Manager. We decide to keep A/B testing, where 50 per cent of our customers see the recommended products and the other 50 per cent do not, so that we can quantify the impact.

Tuesday

I catch up with my data analysis colleague on our priorities for the week. He’s working on an exciting project to improve our product metadata – the properties of the bouquets and plants that we sell. This will allow us to automatically calculate a similarity score for different items, which could feed into future improvements to our product recommendations. He has worked closely with backend developers to get the data in place. We agree the project is nearly complete, and that he’ll present the work at next week’s company-wide meeting. We think we’ll get lots of ideas for other areas of the business where this new dataset could also be applied.
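One simple way to turn product metadata into a similarity score is the Jaccard index over each item’s attribute set. The bouquet names and attributes below are invented for illustration, not Bloom & Wild’s real schema:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of product attributes:
    |intersection| / |union|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical bouquet metadata
bouquets = {
    "meadow": {"roses", "pastel", "scented", "medium"},
    "blush":  {"roses", "pastel", "large"},
    "zing":   {"tulips", "bright", "medium"},
}
score = jaccard(bouquets["meadow"], bouquets["blush"])
print(round(score, 2))  # 2 shared of 5 distinct attributes → 0.4
```

Richer metadata (stem counts, colour palettes, price bands) could instead be encoded as weighted vectors and compared with cosine similarity, but a set-based score is an easy first cut.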

Wednesday

We have a weekly company-wide meeting at midday. In 20 minutes we hear how we are performing against our key metrics, and also get an update on any career opportunities currently open across the company. It’s traditional that the presenter sprinkles in fun facts on a theme. After this we head off for lunch.

Thursday

Busy day. Kicked off a project to improve our website speed with two frontend developers and our VP of Product. A faster-loading, more responsive web shop should make customers (even) happier and purchases more likely. My role is to crunch site performance data from our UK & Ireland, German and French websites to identify where we’ve negatively impacted the experience, or whether certain pages suffer on specific web browsers. I have also created a dashboard for developers to check they are making progress!

Friday

AM: Headphones on, and time to work on my own for a bit. I write some code to help our operations team better understand our stock position at any given time, and to predict which bouquets will be left unsold at the end of the day. Dealing in perishable goods means that we need to forecast very accurately to avoid wastage. Meanwhile, this week it’s Bloom & Wild’s 6th anniversary, so at 18:00 the whole company heads off to eat some birthday cake and do a flower-arranging workshop!
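A first cut of that unsold-stock prediction could be as simple as subtracting a moving-average demand forecast from current stock. A naive sketch with invented figures, not Bloom & Wild’s actual model:

```python
def predicted_leftover(stock, daily_sales, window=7):
    """Naive end-of-day wastage estimate: today's stock minus a
    moving-average forecast of demand over the last `window` days."""
    recent = daily_sales[-window:]
    forecast = sum(recent) / len(recent)
    return round(max(0.0, stock - forecast), 1)

# Hypothetical figures: 120 bouquets in stock, last week's daily sales
sales = [95, 102, 110, 98, 105, 99, 101]
print(predicted_leftover(stock=120, daily_sales=sales))  # → 18.6
```

A production forecaster would account for weekday seasonality and peaks such as Mother’s Day, but even a moving average makes the stock-versus-demand gap visible to an operations team.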
 
