US 12,170,817 B2
Systems and methods of interactive goal setting tools
Marjorie S. Anzalone, San Francisco, CA (US); Darius A. Miranda, San Francisco, CA (US); Wairnola Marria Rhodriquez, San Francisco, CA (US); Samundra Timilsina, South San Francisco, CA (US); and Paul Vittimberga, Oakland, CA (US)
Assigned to Wells Fargo Bank, N.A., San Francisco, CA (US)
Filed by Wells Fargo Bank, N.A., San Francisco, CA (US)
Filed on Sep. 25, 2023, as Appl. No. 18/372,486.
Application 18/372,486 is a continuation of application No. 17/882,209, filed on Aug. 5, 2022, granted, now 11,770,586.
Application 17/882,209 is a continuation of application No. 16/150,046, filed on Oct. 2, 2018, granted, now 11,412,298.
Prior Publication US 2024/0015362 A1, Jan. 11, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 21/442 (2011.01); A61B 5/0205 (2006.01); A61B 5/16 (2006.01); G06F 3/01 (2006.01); G06F 3/14 (2006.01); G06N 5/04 (2023.01); G06Q 40/06 (2012.01); G06V 40/16 (2022.01); G10L 25/63 (2013.01); A61B 5/00 (2006.01); A61B 5/021 (2006.01); A61B 5/024 (2006.01); A61B 5/0533 (2021.01); A61B 5/08 (2006.01); A61B 5/369 (2021.01); A61B 7/04 (2006.01); G06N 3/126 (2023.01)
CPC H04N 21/44218 (2013.01) [A61B 5/0205 (2013.01); A61B 5/165 (2013.01); G06F 3/011 (2013.01); G06F 3/012 (2013.01); G06F 3/14 (2013.01); G06N 5/04 (2013.01); G06Q 40/06 (2013.01); G06V 40/174 (2022.01); G10L 25/63 (2013.01); A61B 5/0077 (2013.01); A61B 5/021 (2013.01); A61B 5/024 (2013.01); A61B 5/0533 (2013.01); A61B 5/0816 (2013.01); A61B 5/369 (2021.01); A61B 7/04 (2013.01); A61B 2503/12 (2013.01); G06F 3/015 (2013.01); G06F 3/016 (2013.01); G06F 2203/011 (2013.01); G06N 3/126 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
generating a first portion of a video, wherein the first portion depicts an initial object corresponding to an initial goal, wherein the first portion of the video is displayed on an output circuit of a user device;
generating a second portion of the video based on emotional response data captured by an emotion-tracking device of the user device, wherein the second portion depicts an updated object corresponding to an updated goal, the updated goal being updated from the initial goal, and the updated object being different from the initial object, wherein the emotional response data is captured by the emotion-tracking device while a user is viewing the first portion using the output circuit, and wherein the emotional response data indicates that the user reacted negatively to the initial object corresponding to the initial goal;
determining a cutoff point of the first portion; and
stitching the second portion of the video to the first portion of the video at the cutoff point, wherein the output circuit of the user device continuously plays the video from the first portion to the second portion.
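
The claim recites an algorithmic flow: render a first video portion tied to an initial goal, capture the viewer's emotional response while it plays, and, on a negative reaction, render a second portion tied to an updated goal and splice it onto the first at a cutoff point so playback continues without interruption. The Python sketch below is a minimal illustration of that flow under stated assumptions; every identifier (Clip, render_goal_clip, reacted_negatively, stitch) is hypothetical and is not drawn from the patent specification or any real library.

"""Minimal sketch of the flow in claim 1; all names are hypothetical illustrations."""

from dataclasses import dataclass
from typing import List


@dataclass
class Clip:
    """A rendered video portion depicting an object tied to a goal."""
    frames: List[str]   # placeholder frame identifiers
    goal: str           # goal that the depicted object corresponds to


def render_goal_clip(goal: str, length: int = 10) -> Clip:
    """Stand-in for video generation: one labeled frame per time step."""
    return Clip(frames=[f"{goal}-frame-{i}" for i in range(length)], goal=goal)


def reacted_negatively(valence_samples: List[float], threshold: float = 0.0) -> bool:
    """Assume the emotion-tracking device reports signed valence; a negative mean is treated as a negative reaction."""
    return bool(valence_samples) and sum(valence_samples) / len(valence_samples) < threshold


def stitch(first: Clip, second: Clip, cutoff: int) -> Clip:
    """Splice the second portion onto the first at the cutoff frame so playback is continuous."""
    return Clip(frames=first.frames[:cutoff] + second.frames, goal=second.goal)


if __name__ == "__main__":
    first_portion = render_goal_clip("initial-goal")
    # Valence samples captured while the user views the first portion (hypothetical values).
    samples = [-0.4, -0.7, -0.2]
    if reacted_negatively(samples):
        second_portion = render_goal_clip("updated-goal")
        cutoff = len(samples)  # e.g., cut at the playback position where the negative reaction was detected
        video = stitch(first_portion, second_portion, cutoff)
        print(video.frames)

In this sketch the cutoff point is chosen as the playback position at which the negative reaction is detected; the claim itself only requires that some cutoff point of the first portion be determined and that the stitched video play continuously from the first portion into the second.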