vispubnetwork

Citation Network of the IEEE Vis papers

https://github.com/john-guerra/vispubnetwork

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (0.5%) to scientific vocabulary
Last synced: 8 months ago

Repository

Citation Network of the IEEE Vis papers

Basic Info
Statistics
  • Stars: 9
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 11 years ago · Last pushed over 1 year ago
Metadata Files
  • Readme
  • Citation

README.md

visPubNetwork

Citation Network of the IEEE Vis papers

Owner

  • Name: John Alexis Guerra Gómez
  • Login: john-guerra
  • Kind: user
  • Location: San Francisco, California
  • Company: Northeastern University Silicon Valley

I love to build dataviz for insight discovery. I also love to put technology at the service of humanity.

Citation (citationsNetwork.json)
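Each entry in the links array below describes one directed citation edge: source and target appear to be integer node indices (a matching nodes array is not reproduced on this page), type is always "cites", and value is an integer edge weight. As a minimal sketch, assuming the complete citationsNetwork.json (with both nodes and links, and nodes stored as objects of paper metadata) is available locally and that the Python networkx package is installed, the network could be loaded and the most-cited entries ranked like this:

import json
import networkx as nx  # assumed dependency; any graph library would work

# Load the full citationsNetwork.json (only its "links" array is shown below).
with open("citationsNetwork.json") as f:
    data = json.load(f)

g = nx.DiGraph()

# The "nodes" array (assumed to exist in the full file) is assumed to hold
# per-paper metadata objects; links refer to nodes by their position in it.
for i, node in enumerate(data.get("nodes", [])):
    g.add_node(i, **node)

# Each link is a directed "cites" edge weighted by "value".
for link in data["links"]:
    g.add_edge(link["source"], link["target"], type=link["type"], weight=link["value"])

# Example analysis: rank papers by total incoming citation weight.
most_cited = sorted(g.in_degree(weight="weight"), key=lambda item: item[1], reverse=True)
print(most_cited[:10])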

{
    "links": [
        {
            "source": 0,
            "target": 1,
            "type": "cites",
            "value": 8
        },
        {
            "source": 0,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 3,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 4,
            "type": "cites",
            "value": 9
        },
        {
            "source": 0,
            "target": 5,
            "type": "cites",
            "value": 3
        },
        {
            "source": 6,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 6,
            "target": 8,
            "type": "cites",
            "value": 6
        },
        {
            "source": 6,
            "target": 9,
            "type": "cites",
            "value": 6
        },
        {
            "source": 6,
            "target": 10,
            "type": "cites",
            "value": 6
        },
        {
            "source": 6,
            "target": 11,
            "type": "cites",
            "value": 4
        },
        {
            "source": 6,
            "target": 12,
            "type": "cites",
            "value": 13
        },
        {
            "source": 13,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 15,
            "target": 16,
            "type": "cites",
            "value": 4
        },
        {
            "source": 15,
            "target": 14,
            "type": "cites",
            "value": 16
        },
        {
            "source": 17,
            "target": 15,
            "type": "cites",
            "value": 5
        },
        {
            "source": 17,
            "target": 14,
            "type": "cites",
            "value": 10
        },
        {
            "source": 18,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 19,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 20,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 21,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 17,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 14,
            "type": "cites",
            "value": 44
        },
        {
            "source": 15,
            "target": 23,
            "type": "cites",
            "value": 8
        },
        {
            "source": 17,
            "target": 23,
            "type": "cites",
            "value": 4
        },
        {
            "source": 20,
            "target": 23,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 23,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 24,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 25,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 26,
            "type": "cites",
            "value": 7
        },
        {
            "source": 15,
            "target": 27,
            "type": "cites",
            "value": 3
        },
        {
            "source": 15,
            "target": 28,
            "type": "cites",
            "value": 3
        },
        {
            "source": 15,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 17,
            "target": 22,
            "type": "cites",
            "value": 6
        },
        {
            "source": 20,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 20,
            "target": 30,
            "type": "cites",
            "value": 4
        },
        {
            "source": 20,
            "target": 22,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 29,
            "type": "cites",
            "value": 18
        },
        {
            "source": 22,
            "target": 31,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 30,
            "type": "cites",
            "value": 16
        },
        {
            "source": 22,
            "target": 32,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 33,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 15,
            "target": 35,
            "type": "cites",
            "value": 3
        },
        {
            "source": 15,
            "target": 36,
            "type": "cites",
            "value": 3
        },
        {
            "source": 15,
            "target": 37,
            "type": "cites",
            "value": 5
        },
        {
            "source": 15,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 15,
            "target": 39,
            "type": "cites",
            "value": 3
        },
        {
            "source": 15,
            "target": 40,
            "type": "cites",
            "value": 3
        },
        {
            "source": 17,
            "target": 39,
            "type": "cites",
            "value": 3
        },
        {
            "source": 17,
            "target": 40,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 41,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 43,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 43,
            "type": "cites",
            "value": 5
        },
        {
            "source": 45,
            "target": 43,
            "type": "cites",
            "value": 4
        },
        {
            "source": 42,
            "target": 46,
            "type": "cites",
            "value": 13
        },
        {
            "source": 42,
            "target": 47,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 47,
            "type": "cites",
            "value": 4
        },
        {
            "source": 48,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 46,
            "type": "cites",
            "value": 14
        },
        {
            "source": 49,
            "target": 47,
            "type": "cites",
            "value": 4
        },
        {
            "source": 50,
            "target": 46,
            "type": "cites",
            "value": 26
        },
        {
            "source": 50,
            "target": 47,
            "type": "cites",
            "value": 3
        },
        {
            "source": 51,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 47,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 53,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 46,
            "type": "cites",
            "value": 26
        },
        {
            "source": 52,
            "target": 47,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 42,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 54,
            "type": "cites",
            "value": 6
        },
        {
            "source": 42,
            "target": 55,
            "type": "cites",
            "value": 5
        },
        {
            "source": 42,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 7,
            "type": "cites",
            "value": 19
        },
        {
            "source": 44,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 49,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 50,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 51,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 57,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 58,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 45,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 52,
            "target": 55,
            "type": "cites",
            "value": 5
        },
        {
            "source": 52,
            "target": 56,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 7,
            "type": "cites",
            "value": 19
        },
        {
            "source": 42,
            "target": 59,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 59,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 59,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 59,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 44,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 49,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 50,
            "target": 52,
            "type": "cites",
            "value": 15
        },
        {
            "source": 58,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 47,
            "target": 52,
            "type": "cites",
            "value": 7
        },
        {
            "source": 52,
            "target": 50,
            "type": "cites",
            "value": 10
        },
        {
            "source": 44,
            "target": 61,
            "type": "cites",
            "value": 27
        },
        {
            "source": 50,
            "target": 49,
            "type": "cites",
            "value": 10
        },
        {
            "source": 50,
            "target": 62,
            "type": "cites",
            "value": 7
        },
        {
            "source": 47,
            "target": 49,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 49,
            "type": "cites",
            "value": 6
        },
        {
            "source": 52,
            "target": 62,
            "type": "cites",
            "value": 8
        },
        {
            "source": 42,
            "target": 26,
            "type": "cites",
            "value": 15
        },
        {
            "source": 50,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 52,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 42,
            "target": 63,
            "type": "cites",
            "value": 6
        },
        {
            "source": 44,
            "target": 64,
            "type": "cites",
            "value": 6
        },
        {
            "source": 44,
            "target": 65,
            "type": "cites",
            "value": 5
        },
        {
            "source": 44,
            "target": 63,
            "type": "cites",
            "value": 13
        },
        {
            "source": 49,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 63,
            "type": "cites",
            "value": 5
        },
        {
            "source": 50,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 57,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 58,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 47,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 47,
            "target": 63,
            "type": "cites",
            "value": 6
        },
        {
            "source": 45,
            "target": 63,
            "type": "cites",
            "value": 7
        },
        {
            "source": 52,
            "target": 44,
            "type": "cites",
            "value": 5
        },
        {
            "source": 52,
            "target": 63,
            "type": "cites",
            "value": 7
        },
        {
            "source": 52,
            "target": 66,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 67,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 42,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 44,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 72,
            "type": "cites",
            "value": 9
        },
        {
            "source": 44,
            "target": 72,
            "type": "cites",
            "value": 13
        },
        {
            "source": 48,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 49,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 57,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 58,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 47,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 45,
            "target": 72,
            "type": "cites",
            "value": 7
        },
        {
            "source": 52,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 45,
            "target": 73,
            "type": "cites",
            "value": 3
        },
        {
            "source": 49,
            "target": 74,
            "type": "cites",
            "value": 4
        },
        {
            "source": 49,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 50,
            "target": 74,
            "type": "cites",
            "value": 5
        },
        {
            "source": 50,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 51,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 49,
            "target": 75,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 76,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 75,
            "type": "cites",
            "value": 4
        },
        {
            "source": 44,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 47,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 45,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 44,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 50,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 52,
            "target": 2,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 4,
            "type": "cites",
            "value": 9
        },
        {
            "source": 42,
            "target": 77,
            "type": "cites",
            "value": 5
        },
        {
            "source": 78,
            "target": 79,
            "type": "cites",
            "value": 4
        },
        {
            "source": 78,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 81,
            "target": 78,
            "type": "cites",
            "value": 3
        },
        {
            "source": 81,
            "target": 79,
            "type": "cites",
            "value": 6
        },
        {
            "source": 81,
            "target": 80,
            "type": "cites",
            "value": 7
        },
        {
            "source": 80,
            "target": 78,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 79,
            "type": "cites",
            "value": 14
        },
        {
            "source": 80,
            "target": 82,
            "type": "cites",
            "value": 4
        },
        {
            "source": 78,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 81,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 83,
            "type": "cites",
            "value": 10
        },
        {
            "source": 80,
            "target": 84,
            "type": "cites",
            "value": 8
        },
        {
            "source": 78,
            "target": 43,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 43,
            "type": "cites",
            "value": 6
        },
        {
            "source": 78,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 81,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 85,
            "type": "cites",
            "value": 7
        },
        {
            "source": 80,
            "target": 7,
            "type": "cites",
            "value": 24
        },
        {
            "source": 80,
            "target": 86,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 87,
            "type": "cites",
            "value": 7
        },
        {
            "source": 81,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 88,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 14,
            "type": "cites",
            "value": 25
        },
        {
            "source": 80,
            "target": 36,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 23,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 89,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 93,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 95,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 97,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 98,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 93,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 99,
            "type": "cites",
            "value": 4
        },
        {
            "source": 94,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 94,
            "target": 14,
            "type": "cites",
            "value": 16
        },
        {
            "source": 95,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 95,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 97,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 97,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 98,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 98,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 93,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 93,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 103,
            "type": "cites",
            "value": 8
        },
        {
            "source": 104,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 105,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 106,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 107,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 108,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 109,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 110,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 104,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 105,
            "target": 1,
            "type": "cites",
            "value": 5
        },
        {
            "source": 105,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 106,
            "target": 1,
            "type": "cites",
            "value": 4
        },
        {
            "source": 107,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 108,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 109,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 110,
            "target": 1,
            "type": "cites",
            "value": 5
        },
        {
            "source": 110,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 83,
            "type": "cites",
            "value": 11
        },
        {
            "source": 96,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 96,
            "target": 83,
            "type": "cites",
            "value": 11
        },
        {
            "source": 100,
            "target": 113,
            "type": "cites",
            "value": 7
        },
        {
            "source": 96,
            "target": 113,
            "type": "cites",
            "value": 7
        },
        {
            "source": 114,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 115,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 7,
            "type": "cites",
            "value": 18
        },
        {
            "source": 96,
            "target": 7,
            "type": "cites",
            "value": 22
        },
        {
            "source": 100,
            "target": 80,
            "type": "cites",
            "value": 6
        },
        {
            "source": 96,
            "target": 80,
            "type": "cites",
            "value": 8
        },
        {
            "source": 100,
            "target": 116,
            "type": "cites",
            "value": 7
        },
        {
            "source": 96,
            "target": 116,
            "type": "cites",
            "value": 7
        },
        {
            "source": 100,
            "target": 71,
            "type": "cites",
            "value": 8
        },
        {
            "source": 100,
            "target": 117,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 118,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 119,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 120,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 121,
            "type": "cites",
            "value": 6
        },
        {
            "source": 96,
            "target": 71,
            "type": "cites",
            "value": 11
        },
        {
            "source": 96,
            "target": 117,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 118,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 119,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 120,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 121,
            "type": "cites",
            "value": 8
        },
        {
            "source": 114,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 122,
            "type": "cites",
            "value": 6
        },
        {
            "source": 100,
            "target": 84,
            "type": "cites",
            "value": 5
        },
        {
            "source": 100,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 96,
            "target": 122,
            "type": "cites",
            "value": 6
        },
        {
            "source": 96,
            "target": 84,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 100,
            "target": 96,
            "type": "cites",
            "value": 8
        },
        {
            "source": 100,
            "target": 123,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 101,
            "type": "cites",
            "value": 7
        },
        {
            "source": 96,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 101,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 96,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 96,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 22,
            "type": "cites",
            "value": 12
        },
        {
            "source": 124,
            "target": 14,
            "type": "cites",
            "value": 17
        },
        {
            "source": 124,
            "target": 99,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 126,
            "target": 127,
            "type": "cites",
            "value": 4
        },
        {
            "source": 126,
            "target": 128,
            "type": "cites",
            "value": 4
        },
        {
            "source": 126,
            "target": 129,
            "type": "cites",
            "value": 4
        },
        {
            "source": 126,
            "target": 130,
            "type": "cites",
            "value": 6
        },
        {
            "source": 126,
            "target": 124,
            "type": "cites",
            "value": 6
        },
        {
            "source": 126,
            "target": 131,
            "type": "cites",
            "value": 4
        },
        {
            "source": 126,
            "target": 132,
            "type": "cites",
            "value": 13
        },
        {
            "source": 124,
            "target": 132,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 126,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 133,
            "target": 134,
            "type": "cites",
            "value": 3
        },
        {
            "source": 133,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 135,
            "type": "cites",
            "value": 5
        },
        {
            "source": 136,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 125,
            "type": "cites",
            "value": 12
        },
        {
            "source": 138,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 138,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 138,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 139,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 139,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 139,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 140,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 140,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 140,
            "target": 101,
            "type": "cites",
            "value": 4
        },
        {
            "source": 36,
            "target": 102,
            "type": "cites",
            "value": 7
        },
        {
            "source": 36,
            "target": 96,
            "type": "cites",
            "value": 7
        },
        {
            "source": 36,
            "target": 100,
            "type": "cites",
            "value": 6
        },
        {
            "source": 36,
            "target": 101,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 102,
            "type": "cites",
            "value": 27
        },
        {
            "source": 14,
            "target": 96,
            "type": "cites",
            "value": 11
        },
        {
            "source": 14,
            "target": 100,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 101,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 141,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 116,
            "type": "cites",
            "value": 4
        },
        {
            "source": 36,
            "target": 83,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 83,
            "type": "cites",
            "value": 24
        },
        {
            "source": 36,
            "target": 142,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 122,
            "type": "cites",
            "value": 6
        },
        {
            "source": 36,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 142,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 122,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 4,
            "type": "cites",
            "value": 18
        },
        {
            "source": 140,
            "target": 80,
            "type": "cites",
            "value": 6
        },
        {
            "source": 36,
            "target": 143,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 144,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 145,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 143,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 144,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 79,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 80,
            "type": "cites",
            "value": 23
        },
        {
            "source": 36,
            "target": 84,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 84,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 38,
            "type": "cites",
            "value": 10
        },
        {
            "source": 14,
            "target": 112,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 52,
            "type": "cites",
            "value": 17
        },
        {
            "source": 36,
            "target": 146,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 147,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 148,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 146,
            "type": "cites",
            "value": 4
        },
        {
            "source": 138,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 139,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 36,
            "target": 14,
            "type": "cites",
            "value": 18
        },
        {
            "source": 14,
            "target": 36,
            "type": "cites",
            "value": 14
        },
        {
            "source": 14,
            "target": 88,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 91,
            "type": "cites",
            "value": 11
        },
        {
            "source": 149,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 150,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 12,
            "target": 8,
            "type": "cites",
            "value": 7
        },
        {
            "source": 12,
            "target": 9,
            "type": "cites",
            "value": 6
        },
        {
            "source": 151,
            "target": 8,
            "type": "cites",
            "value": 5
        },
        {
            "source": 151,
            "target": 9,
            "type": "cites",
            "value": 7
        },
        {
            "source": 152,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 151,
            "target": 12,
            "type": "cites",
            "value": 16
        },
        {
            "source": 152,
            "target": 6,
            "type": "cites",
            "value": 7
        },
        {
            "source": 12,
            "target": 10,
            "type": "cites",
            "value": 16
        },
        {
            "source": 12,
            "target": 6,
            "type": "cites",
            "value": 18
        },
        {
            "source": 151,
            "target": 10,
            "type": "cites",
            "value": 5
        },
        {
            "source": 151,
            "target": 6,
            "type": "cites",
            "value": 13
        },
        {
            "source": 12,
            "target": 43,
            "type": "cites",
            "value": 6
        },
        {
            "source": 6,
            "target": 43,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 60,
            "type": "cites",
            "value": 5
        },
        {
            "source": 151,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 77,
            "type": "cites",
            "value": 5
        },
        {
            "source": 151,
            "target": 153,
            "type": "cites",
            "value": 6
        },
        {
            "source": 154,
            "target": 83,
            "type": "cites",
            "value": 6
        },
        {
            "source": 155,
            "target": 55,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 7,
            "type": "cites",
            "value": 13
        },
        {
            "source": 154,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 156,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 157,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 159,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 88,
            "target": 80,
            "type": "cites",
            "value": 7
        },
        {
            "source": 155,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 157,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 156,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 159,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 155,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 157,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 159,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 155,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 154,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 159,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 159,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 159,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 159,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 155,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 101,
            "type": "cites",
            "value": 4
        },
        {
            "source": 157,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 156,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 156,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 156,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 156,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 88,
            "target": 43,
            "type": "cites",
            "value": 3
        },
        {
            "source": 155,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 154,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 156,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 159,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 88,
            "target": 148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 135,
            "type": "cites",
            "value": 4
        },
        {
            "source": 160,
            "target": 161,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 162,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 163,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 164,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 165,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 166,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 168,
            "target": 37,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 146,
            "type": "cites",
            "value": 5
        },
        {
            "source": 37,
            "target": 24,
            "type": "cites",
            "value": 4
        },
        {
            "source": 169,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 169,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 172,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 173,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 174,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 176,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 178,
            "type": "cites",
            "value": 5
        },
        {
            "source": 175,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 180,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 176,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 169,
            "type": "cites",
            "value": 8
        },
        {
            "source": 175,
            "target": 181,
            "type": "cites",
            "value": 5
        },
        {
            "source": 175,
            "target": 182,
            "type": "cites",
            "value": 6
        },
        {
            "source": 175,
            "target": 183,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 91,
            "type": "cites",
            "value": 8
        },
        {
            "source": 169,
            "target": 175,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 181,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 182,
            "type": "cites",
            "value": 5
        },
        {
            "source": 169,
            "target": 91,
            "type": "cites",
            "value": 6
        },
        {
            "source": 176,
            "target": 169,
            "type": "cites",
            "value": 7
        },
        {
            "source": 176,
            "target": 175,
            "type": "cites",
            "value": 4
        },
        {
            "source": 176,
            "target": 181,
            "type": "cites",
            "value": 5
        },
        {
            "source": 176,
            "target": 182,
            "type": "cites",
            "value": 5
        },
        {
            "source": 176,
            "target": 183,
            "type": "cites",
            "value": 3
        },
        {
            "source": 176,
            "target": 91,
            "type": "cites",
            "value": 7
        },
        {
            "source": 175,
            "target": 184,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 185,
            "type": "cites",
            "value": 4
        },
        {
            "source": 169,
            "target": 184,
            "type": "cites",
            "value": 3
        },
        {
            "source": 169,
            "target": 185,
            "type": "cites",
            "value": 3
        },
        {
            "source": 176,
            "target": 185,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 169,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 176,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 187,
            "target": 9,
            "type": "cites",
            "value": 4
        },
        {
            "source": 71,
            "target": 55,
            "type": "cites",
            "value": 5
        },
        {
            "source": 71,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 71,
            "target": 7,
            "type": "cites",
            "value": 26
        },
        {
            "source": 71,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 71,
            "target": 77,
            "type": "cites",
            "value": 8
        },
        {
            "source": 71,
            "target": 188,
            "type": "cites",
            "value": 9
        },
        {
            "source": 189,
            "target": 190,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 191,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 112,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 192,
            "type": "cites",
            "value": 7
        },
        {
            "source": 187,
            "target": 46,
            "type": "cites",
            "value": 8
        },
        {
            "source": 187,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 2,
            "type": "cites",
            "value": 7
        },
        {
            "source": 187,
            "target": 122,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 193,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 194,
            "type": "cites",
            "value": 7
        },
        {
            "source": 187,
            "target": 70,
            "type": "cites",
            "value": 5
        },
        {
            "source": 187,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 196,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 195,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 71,
            "target": 197,
            "type": "cites",
            "value": 6
        },
        {
            "source": 71,
            "target": 68,
            "type": "cites",
            "value": 7
        },
        {
            "source": 71,
            "target": 121,
            "type": "cites",
            "value": 7
        },
        {
            "source": 153,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 158,
            "type": "cites",
            "value": 5
        },
        {
            "source": 8,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 153,
            "target": 198,
            "type": "cites",
            "value": 7
        },
        {
            "source": 8,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 77,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 153,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 8,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 8,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 198,
            "target": 199,
            "type": "cites",
            "value": 3
        },
        {
            "source": 198,
            "target": 72,
            "type": "cites",
            "value": 9
        },
        {
            "source": 198,
            "target": 200,
            "type": "cites",
            "value": 4
        },
        {
            "source": 153,
            "target": 122,
            "type": "cites",
            "value": 5
        },
        {
            "source": 153,
            "target": 84,
            "type": "cites",
            "value": 3
        },
        {
            "source": 153,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 153,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 122,
            "type": "cites",
            "value": 4
        },
        {
            "source": 77,
            "target": 84,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 4,
            "type": "cites",
            "value": 14
        },
        {
            "source": 198,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 198,
            "target": 4,
            "type": "cites",
            "value": 12
        },
        {
            "source": 201,
            "target": 42,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 158,
            "target": 202,
            "type": "cites",
            "value": 5
        },
        {
            "source": 158,
            "target": 63,
            "type": "cites",
            "value": 6
        },
        {
            "source": 201,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 158,
            "target": 55,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 56,
            "type": "cites",
            "value": 4
        },
        {
            "source": 158,
            "target": 7,
            "type": "cites",
            "value": 24
        },
        {
            "source": 201,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 203,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 116,
            "type": "cites",
            "value": 4
        },
        {
            "source": 158,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 201,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 83,
            "type": "cites",
            "value": 11
        },
        {
            "source": 201,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 203,
            "target": 83,
            "type": "cites",
            "value": 5
        },
        {
            "source": 201,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 201,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 205,
            "type": "cites",
            "value": 7
        },
        {
            "source": 201,
            "target": 158,
            "type": "cites",
            "value": 7
        },
        {
            "source": 203,
            "target": 205,
            "type": "cites",
            "value": 4
        },
        {
            "source": 203,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 77,
            "type": "cites",
            "value": 7
        },
        {
            "source": 158,
            "target": 4,
            "type": "cites",
            "value": 12
        },
        {
            "source": 158,
            "target": 113,
            "type": "cites",
            "value": 7
        },
        {
            "source": 201,
            "target": 4,
            "type": "cites",
            "value": 9
        },
        {
            "source": 201,
            "target": 113,
            "type": "cites",
            "value": 7
        },
        {
            "source": 203,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 206,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 7,
            "type": "cites",
            "value": 17
        },
        {
            "source": 207,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 69,
            "type": "cites",
            "value": 7
        },
        {
            "source": 177,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 208,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 209,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 26,
            "type": "cites",
            "value": 10
        },
        {
            "source": 100,
            "target": 208,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 209,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 26,
            "type": "cites",
            "value": 11
        },
        {
            "source": 146,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 210,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 96,
            "type": "cites",
            "value": 7
        },
        {
            "source": 102,
            "target": 123,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 211,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 212,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 100,
            "type": "cites",
            "value": 7
        },
        {
            "source": 102,
            "target": 213,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 101,
            "type": "cites",
            "value": 7
        },
        {
            "source": 100,
            "target": 102,
            "type": "cites",
            "value": 6
        },
        {
            "source": 100,
            "target": 213,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 100,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 214,
            "target": 8,
            "type": "cites",
            "value": 3
        },
        {
            "source": 214,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 214,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 215,
            "target": 200,
            "type": "cites",
            "value": 13
        },
        {
            "source": 214,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 215,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 215,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 215,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 214,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 214,
            "target": 63,
            "type": "cites",
            "value": 14
        },
        {
            "source": 214,
            "target": 72,
            "type": "cites",
            "value": 10
        },
        {
            "source": 215,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 214,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 215,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 148,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 204,
            "type": "cites",
            "value": 6
        },
        {
            "source": 79,
            "target": 36,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 148,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 79,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 79,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 7,
            "type": "cites",
            "value": 16
        },
        {
            "source": 148,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 148,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 82,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 71,
            "type": "cites",
            "value": 9
        },
        {
            "source": 80,
            "target": 188,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 145,
            "type": "cites",
            "value": 4
        },
        {
            "source": 79,
            "target": 143,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 144,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 80,
            "type": "cites",
            "value": 11
        },
        {
            "source": 216,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 216,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 217,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 217,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 218,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 218,
            "target": 80,
            "type": "cites",
            "value": 6
        },
        {
            "source": 148,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 148,
            "target": 80,
            "type": "cites",
            "value": 8
        },
        {
            "source": 82,
            "target": 79,
            "type": "cites",
            "value": 4
        },
        {
            "source": 82,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 219,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 219,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 145,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 143,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 144,
            "type": "cites",
            "value": 6
        },
        {
            "source": 79,
            "target": 84,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 4,
            "type": "cites",
            "value": 10
        },
        {
            "source": 79,
            "target": 96,
            "type": "cites",
            "value": 9
        },
        {
            "source": 79,
            "target": 147,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 100,
            "type": "cites",
            "value": 9
        },
        {
            "source": 79,
            "target": 102,
            "type": "cites",
            "value": 9
        },
        {
            "source": 79,
            "target": 148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 146,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 101,
            "type": "cites",
            "value": 9
        },
        {
            "source": 80,
            "target": 96,
            "type": "cites",
            "value": 11
        },
        {
            "source": 80,
            "target": 147,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 100,
            "type": "cites",
            "value": 10
        },
        {
            "source": 80,
            "target": 102,
            "type": "cites",
            "value": 13
        },
        {
            "source": 80,
            "target": 148,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 146,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 101,
            "type": "cites",
            "value": 10
        },
        {
            "source": 79,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 121,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 78,
            "type": "cites",
            "value": 3
        },
        {
            "source": 218,
            "target": 88,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 206,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 201,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 220,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 221,
            "type": "cites",
            "value": 4
        },
        {
            "source": 222,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 223,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 224,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 226,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 222,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 222,
            "target": 9,
            "type": "cites",
            "value": 5
        },
        {
            "source": 227,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 226,
            "target": 220,
            "type": "cites",
            "value": 4
        },
        {
            "source": 226,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 226,
            "target": 228,
            "type": "cites",
            "value": 4
        },
        {
            "source": 226,
            "target": 229,
            "type": "cites",
            "value": 3
        },
        {
            "source": 223,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 227,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 223,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 230,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 232,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 231,
            "type": "cites",
            "value": 5
        },
        {
            "source": 230,
            "target": 233,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 234,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 235,
            "type": "cites",
            "value": 11
        },
        {
            "source": 14,
            "target": 233,
            "type": "cites",
            "value": 12
        },
        {
            "source": 16,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 15,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 236,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 230,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 237,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 238,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 239,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 99,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 240,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 241,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 242,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 103,
            "type": "cites",
            "value": 52
        },
        {
            "source": 16,
            "target": 15,
            "type": "cites",
            "value": 3
        },
        {
            "source": 230,
            "target": 22,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 15,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 22,
            "type": "cites",
            "value": 16
        },
        {
            "source": 230,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 92,
            "type": "cites",
            "value": 9
        },
        {
            "source": 230,
            "target": 244,
            "type": "cites",
            "value": 11
        },
        {
            "source": 14,
            "target": 244,
            "type": "cites",
            "value": 33
        },
        {
            "source": 14,
            "target": 245,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 206,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 246,
            "type": "cites",
            "value": 11
        },
        {
            "source": 14,
            "target": 247,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 124,
            "type": "cites",
            "value": 12
        },
        {
            "source": 14,
            "target": 248,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 27,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 23,
            "type": "cites",
            "value": 16
        },
        {
            "source": 14,
            "target": 249,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 35,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 37,
            "type": "cites",
            "value": 5
        },
        {
            "source": 250,
            "target": 241,
            "type": "cites",
            "value": 6
        },
        {
            "source": 250,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 250,
            "target": 240,
            "type": "cites",
            "value": 6
        },
        {
            "source": 250,
            "target": 99,
            "type": "cites",
            "value": 8
        },
        {
            "source": 250,
            "target": 242,
            "type": "cites",
            "value": 4
        },
        {
            "source": 250,
            "target": 103,
            "type": "cites",
            "value": 8
        },
        {
            "source": 99,
            "target": 241,
            "type": "cites",
            "value": 17
        },
        {
            "source": 99,
            "target": 251,
            "type": "cites",
            "value": 11
        },
        {
            "source": 99,
            "target": 240,
            "type": "cites",
            "value": 17
        },
        {
            "source": 99,
            "target": 252,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 253,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 254,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 242,
            "type": "cites",
            "value": 17
        },
        {
            "source": 99,
            "target": 103,
            "type": "cites",
            "value": 31
        },
        {
            "source": 136,
            "target": 241,
            "type": "cites",
            "value": 9
        },
        {
            "source": 136,
            "target": 251,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 240,
            "type": "cites",
            "value": 9
        },
        {
            "source": 136,
            "target": 252,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 253,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 254,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 99,
            "type": "cites",
            "value": 12
        },
        {
            "source": 136,
            "target": 242,
            "type": "cites",
            "value": 6
        },
        {
            "source": 136,
            "target": 103,
            "type": "cites",
            "value": 12
        },
        {
            "source": 26,
            "target": 241,
            "type": "cites",
            "value": 9
        },
        {
            "source": 26,
            "target": 251,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 240,
            "type": "cites",
            "value": 9
        },
        {
            "source": 26,
            "target": 252,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 253,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 254,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 99,
            "type": "cites",
            "value": 12
        },
        {
            "source": 26,
            "target": 242,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 103,
            "type": "cites",
            "value": 13
        },
        {
            "source": 26,
            "target": 182,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 91,
            "type": "cites",
            "value": 7
        },
        {
            "source": 99,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 26,
            "target": 255,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 84,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 1,
            "type": "cites",
            "value": 8
        },
        {
            "source": 136,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 84,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 99,
            "target": 258,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 136,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 26,
            "type": "cites",
            "value": 8
        },
        {
            "source": 136,
            "target": 26,
            "type": "cites",
            "value": 31
        },
        {
            "source": 26,
            "target": 136,
            "type": "cites",
            "value": 26
        },
        {
            "source": 99,
            "target": 259,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 260,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 261,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 262,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 263,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 264,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 265,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 170,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 266,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 265,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 265,
            "type": "cites",
            "value": 9
        },
        {
            "source": 26,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 195,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 267,
            "type": "cites",
            "value": 7
        },
        {
            "source": 99,
            "target": 268,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 269,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 270,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 90,
            "type": "cites",
            "value": 10
        },
        {
            "source": 99,
            "target": 111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 111,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 271,
            "type": "cites",
            "value": 7
        },
        {
            "source": 99,
            "target": 272,
            "type": "cites",
            "value": 8
        },
        {
            "source": 99,
            "target": 273,
            "type": "cites",
            "value": 7
        },
        {
            "source": 99,
            "target": 274,
            "type": "cites",
            "value": 7
        },
        {
            "source": 136,
            "target": 271,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 272,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 273,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 274,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 271,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 272,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 273,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 274,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 276,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 92,
            "type": "cites",
            "value": 9
        },
        {
            "source": 275,
            "target": 8,
            "type": "cites",
            "value": 3
        },
        {
            "source": 277,
            "target": 275,
            "type": "cites",
            "value": 3
        },
        {
            "source": 278,
            "target": 275,
            "type": "cites",
            "value": 3
        },
        {
            "source": 279,
            "target": 275,
            "type": "cites",
            "value": 3
        },
        {
            "source": 280,
            "target": 275,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 281,
            "type": "cites",
            "value": 8
        },
        {
            "source": 275,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 192,
            "type": "cites",
            "value": 15
        },
        {
            "source": 275,
            "target": 50,
            "type": "cites",
            "value": 6
        },
        {
            "source": 280,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 275,
            "target": 4,
            "type": "cites",
            "value": 12
        },
        {
            "source": 275,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 275,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 200,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 24,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 215,
            "type": "cites",
            "value": 7
        },
        {
            "source": 275,
            "target": 10,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 6,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 46,
            "type": "cites",
            "value": 9
        },
        {
            "source": 275,
            "target": 283,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 275,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 284,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 285,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 286,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 288,
            "type": "cites",
            "value": 6
        },
        {
            "source": 275,
            "target": 289,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 290,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 186,
            "type": "cites",
            "value": 8
        },
        {
            "source": 275,
            "target": 291,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 292,
            "type": "cites",
            "value": 5
        },
        {
            "source": 275,
            "target": 293,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 294,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 2,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 3,
            "type": "cites",
            "value": 3
        },
        {
            "source": 275,
            "target": 5,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 80,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 141,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 112,
            "type": "cites",
            "value": 5
        },
        {
            "source": 80,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 295,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 296,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 297,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 298,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 295,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 295,
            "target": 103,
            "type": "cites",
            "value": 9
        },
        {
            "source": 80,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 299,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 301,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 302,
            "type": "cites",
            "value": 3
        },
        {
            "source": 295,
            "target": 303,
            "type": "cites",
            "value": 4
        },
        {
            "source": 295,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 295,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 135,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 0,
            "type": "cites",
            "value": 10
        },
        {
            "source": 204,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 223,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 305,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 306,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 307,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 308,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 309,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 310,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 20,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 311,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 204,
            "type": "cites",
            "value": 29
        },
        {
            "source": 22,
            "target": 312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 178,
            "type": "cites",
            "value": 14
        },
        {
            "source": 22,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 315,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 287,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 276,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 92,
            "type": "cites",
            "value": 10
        },
        {
            "source": 244,
            "target": 317,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 318,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 60,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 33,
            "type": "cites",
            "value": 10
        },
        {
            "source": 124,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 228,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 220,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 228,
            "type": "cites",
            "value": 7
        },
        {
            "source": 319,
            "target": 320,
            "type": "cites",
            "value": 7
        },
        {
            "source": 319,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 319,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 319,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 319,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 230,
            "target": 320,
            "type": "cites",
            "value": 7
        },
        {
            "source": 230,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 230,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 230,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 320,
            "type": "cites",
            "value": 9
        },
        {
            "source": 124,
            "target": 321,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 324,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 244,
            "type": "cites",
            "value": 16
        },
        {
            "source": 244,
            "target": 320,
            "type": "cites",
            "value": 38
        },
        {
            "source": 244,
            "target": 321,
            "type": "cites",
            "value": 18
        },
        {
            "source": 244,
            "target": 322,
            "type": "cites",
            "value": 19
        },
        {
            "source": 244,
            "target": 324,
            "type": "cites",
            "value": 13
        },
        {
            "source": 244,
            "target": 323,
            "type": "cites",
            "value": 24
        },
        {
            "source": 244,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 204,
            "type": "cites",
            "value": 20
        },
        {
            "source": 124,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 72,
            "type": "cites",
            "value": 9
        },
        {
            "source": 124,
            "target": 29,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 30,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 29,
            "type": "cites",
            "value": 10
        },
        {
            "source": 244,
            "target": 325,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 31,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 30,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 22,
            "type": "cites",
            "value": 21
        },
        {
            "source": 244,
            "target": 1,
            "type": "cites",
            "value": 4
        },
        {
            "source": 37,
            "target": 247,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 37,
            "target": 23,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 37,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 54,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 327,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 60,
            "type": "cites",
            "value": 9
        },
        {
            "source": 276,
            "target": 327,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 60,
            "type": "cites",
            "value": 10
        },
        {
            "source": 326,
            "target": 195,
            "type": "cites",
            "value": 7
        },
        {
            "source": 276,
            "target": 195,
            "type": "cites",
            "value": 8
        },
        {
            "source": 326,
            "target": 328,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 14,
            "type": "cites",
            "value": 9
        },
        {
            "source": 276,
            "target": 328,
            "type": "cites",
            "value": 7
        },
        {
            "source": 276,
            "target": 14,
            "type": "cites",
            "value": 14
        },
        {
            "source": 326,
            "target": 329,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 329,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 37,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 215,
            "type": "cites",
            "value": 5
        },
        {
            "source": 330,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 331,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 331,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 332,
            "type": "cites",
            "value": 6
        },
        {
            "source": 330,
            "target": 331,
            "type": "cites",
            "value": 12
        },
        {
            "source": 331,
            "target": 332,
            "type": "cites",
            "value": 6
        },
        {
            "source": 331,
            "target": 330,
            "type": "cites",
            "value": 16
        },
        {
            "source": 330,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 331,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 333,
            "target": 334,
            "type": "cites",
            "value": 7
        },
        {
            "source": 133,
            "target": 153,
            "type": "cites",
            "value": 5
        },
        {
            "source": 133,
            "target": 334,
            "type": "cites",
            "value": 8
        },
        {
            "source": 333,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 333,
            "target": 151,
            "type": "cites",
            "value": 7
        },
        {
            "source": 133,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 72,
            "type": "cites",
            "value": 14
        },
        {
            "source": 335,
            "target": 241,
            "type": "cites",
            "value": 4
        },
        {
            "source": 335,
            "target": 240,
            "type": "cites",
            "value": 4
        },
        {
            "source": 335,
            "target": 99,
            "type": "cites",
            "value": 5
        },
        {
            "source": 335,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 258,
            "target": 241,
            "type": "cites",
            "value": 4
        },
        {
            "source": 258,
            "target": 240,
            "type": "cites",
            "value": 4
        },
        {
            "source": 258,
            "target": 99,
            "type": "cites",
            "value": 5
        },
        {
            "source": 258,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 136,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 336,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 232,
            "type": "cites",
            "value": 7
        },
        {
            "source": 99,
            "target": 42,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 136,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 42,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 14,
            "type": "cites",
            "value": 10
        },
        {
            "source": 99,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 337,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 337,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 337,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 337,
            "target": 225,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 225,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 132,
            "type": "cites",
            "value": 7
        },
        {
            "source": 12,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 12,
            "target": 87,
            "type": "cites",
            "value": 8
        },
        {
            "source": 337,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 337,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 337,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 338,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 188,
            "type": "cites",
            "value": 5
        },
        {
            "source": 339,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 340,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 341,
            "type": "cites",
            "value": 6
        },
        {
            "source": 83,
            "target": 113,
            "type": "cites",
            "value": 6
        },
        {
            "source": 342,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 55,
            "type": "cites",
            "value": 10
        },
        {
            "source": 83,
            "target": 56,
            "type": "cites",
            "value": 6
        },
        {
            "source": 83,
            "target": 7,
            "type": "cites",
            "value": 39
        },
        {
            "source": 220,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 343,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 55,
            "type": "cites",
            "value": 5
        },
        {
            "source": 113,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 7,
            "type": "cites",
            "value": 14
        },
        {
            "source": 220,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 83,
            "target": 205,
            "type": "cites",
            "value": 7
        },
        {
            "source": 83,
            "target": 158,
            "type": "cites",
            "value": 8
        },
        {
            "source": 220,
            "target": 83,
            "type": "cites",
            "value": 7
        },
        {
            "source": 113,
            "target": 205,
            "type": "cites",
            "value": 4
        },
        {
            "source": 113,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 113,
            "target": 158,
            "type": "cites",
            "value": 4
        },
        {
            "source": 83,
            "target": 77,
            "type": "cites",
            "value": 11
        },
        {
            "source": 83,
            "target": 4,
            "type": "cites",
            "value": 9
        },
        {
            "source": 113,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 4,
            "type": "cites",
            "value": 16
        },
        {
            "source": 83,
            "target": 71,
            "type": "cites",
            "value": 9
        },
        {
            "source": 83,
            "target": 188,
            "type": "cites",
            "value": 7
        },
        {
            "source": 113,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 87,
            "type": "cites",
            "value": 9
        },
        {
            "source": 220,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 113,
            "target": 193,
            "type": "cites",
            "value": 8
        },
        {
            "source": 113,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 83,
            "target": 344,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 52,
            "type": "cites",
            "value": 7
        },
        {
            "source": 83,
            "target": 345,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 52,
            "type": "cites",
            "value": 7
        },
        {
            "source": 346,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 347,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 348,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 126,
            "type": "cites",
            "value": 4
        },
        {
            "source": 349,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 350,
            "target": 349,
            "type": "cites",
            "value": 3
        },
        {
            "source": 351,
            "target": 349,
            "type": "cites",
            "value": 3
        },
        {
            "source": 352,
            "target": 349,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 299,
            "type": "cites",
            "value": 10
        },
        {
            "source": 349,
            "target": 353,
            "type": "cites",
            "value": 5
        },
        {
            "source": 71,
            "target": 281,
            "type": "cites",
            "value": 4
        },
        {
            "source": 354,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 355,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 71,
            "target": 151,
            "type": "cites",
            "value": 7
        },
        {
            "source": 7,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 151,
            "type": "cites",
            "value": 7
        },
        {
            "source": 356,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 354,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 71,
            "target": 69,
            "type": "cites",
            "value": 5
        },
        {
            "source": 71,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 7,
            "target": 68,
            "type": "cites",
            "value": 12
        },
        {
            "source": 7,
            "target": 69,
            "type": "cites",
            "value": 17
        },
        {
            "source": 7,
            "target": 70,
            "type": "cites",
            "value": 17
        },
        {
            "source": 356,
            "target": 68,
            "type": "cites",
            "value": 7
        },
        {
            "source": 356,
            "target": 69,
            "type": "cites",
            "value": 5
        },
        {
            "source": 356,
            "target": 70,
            "type": "cites",
            "value": 6
        },
        {
            "source": 354,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 354,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 354,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 7,
            "target": 71,
            "type": "cites",
            "value": 14
        },
        {
            "source": 7,
            "target": 121,
            "type": "cites",
            "value": 8
        },
        {
            "source": 356,
            "target": 71,
            "type": "cites",
            "value": 8
        },
        {
            "source": 356,
            "target": 121,
            "type": "cites",
            "value": 5
        },
        {
            "source": 356,
            "target": 7,
            "type": "cites",
            "value": 18
        },
        {
            "source": 71,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 7,
            "target": 77,
            "type": "cites",
            "value": 11
        },
        {
            "source": 7,
            "target": 188,
            "type": "cites",
            "value": 11
        },
        {
            "source": 356,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 356,
            "target": 188,
            "type": "cites",
            "value": 5
        },
        {
            "source": 7,
            "target": 197,
            "type": "cites",
            "value": 6
        },
        {
            "source": 356,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 155,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 357,
            "type": "cites",
            "value": 6
        },
        {
            "source": 358,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 359,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 360,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 319,
            "target": 233,
            "type": "cites",
            "value": 5
        },
        {
            "source": 361,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 325,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 362,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 233,
            "type": "cites",
            "value": 11
        },
        {
            "source": 319,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 243,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 282,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 200,
            "type": "cites",
            "value": 14
        },
        {
            "source": 26,
            "target": 24,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 215,
            "type": "cites",
            "value": 15
        },
        {
            "source": 244,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 359,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 359,
            "target": 244,
            "type": "cites",
            "value": 8
        },
        {
            "source": 359,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 359,
            "target": 323,
            "type": "cites",
            "value": 4
        },
        {
            "source": 319,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 325,
            "target": 320,
            "type": "cites",
            "value": 6
        },
        {
            "source": 325,
            "target": 244,
            "type": "cites",
            "value": 8
        },
        {
            "source": 325,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 325,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 363,
            "type": "cites",
            "value": 9
        },
        {
            "source": 359,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 325,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 325,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 205,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 364,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 83,
            "type": "cites",
            "value": 7
        },
        {
            "source": 244,
            "target": 365,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 366,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 345,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 7,
            "type": "cites",
            "value": 17
        },
        {
            "source": 244,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 367,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 276,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 368,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 368,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 299,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 368,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 368,
            "target": 370,
            "type": "cites",
            "value": 3
        },
        {
            "source": 368,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 368,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 370,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 371,
            "target": 370,
            "type": "cites",
            "value": 3
        },
        {
            "source": 371,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 371,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 302,
            "target": 370,
            "type": "cites",
            "value": 3
        },
        {
            "source": 302,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 302,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 373,
            "target": 374,
            "type": "cites",
            "value": 3
        },
        {
            "source": 373,
            "target": 375,
            "type": "cites",
            "value": 3
        },
        {
            "source": 373,
            "target": 372,
            "type": "cites",
            "value": 3
        },
        {
            "source": 373,
            "target": 376,
            "type": "cites",
            "value": 3
        },
        {
            "source": 373,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 373,
            "type": "cites",
            "value": 5
        },
        {
            "source": 372,
            "target": 374,
            "type": "cites",
            "value": 5
        },
        {
            "source": 372,
            "target": 375,
            "type": "cites",
            "value": 5
        },
        {
            "source": 372,
            "target": 376,
            "type": "cites",
            "value": 4
        },
        {
            "source": 372,
            "target": 215,
            "type": "cites",
            "value": 6
        },
        {
            "source": 372,
            "target": 377,
            "type": "cites",
            "value": 3
        },
        {
            "source": 373,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 373,
            "target": 378,
            "type": "cites",
            "value": 5
        },
        {
            "source": 373,
            "target": 379,
            "type": "cites",
            "value": 4
        },
        {
            "source": 373,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 378,
            "type": "cites",
            "value": 4
        },
        {
            "source": 372,
            "target": 379,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 372,
            "target": 51,
            "type": "cites",
            "value": 5
        },
        {
            "source": 372,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 372,
            "target": 380,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 381,
            "type": "cites",
            "value": 4
        },
        {
            "source": 372,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 372,
            "target": 287,
            "type": "cites",
            "value": 6
        },
        {
            "source": 372,
            "target": 293,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 382,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 383,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 384,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 385,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 41,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 80,
            "type": "cites",
            "value": 11
        },
        {
            "source": 41,
            "target": 320,
            "type": "cites",
            "value": 6
        },
        {
            "source": 41,
            "target": 244,
            "type": "cites",
            "value": 24
        },
        {
            "source": 41,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 14,
            "type": "cites",
            "value": 18
        },
        {
            "source": 41,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 317,
            "type": "cites",
            "value": 7
        },
        {
            "source": 326,
            "target": 225,
            "type": "cites",
            "value": 6
        },
        {
            "source": 326,
            "target": 86,
            "type": "cites",
            "value": 6
        },
        {
            "source": 326,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 386,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 387,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 388,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 320,
            "type": "cites",
            "value": 6
        },
        {
            "source": 326,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 363,
            "type": "cites",
            "value": 5
        },
        {
            "source": 326,
            "target": 323,
            "type": "cites",
            "value": 8
        },
        {
            "source": 326,
            "target": 244,
            "type": "cites",
            "value": 12
        },
        {
            "source": 389,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 276,
            "type": "cites",
            "value": 5
        },
        {
            "source": 326,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 326,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 390,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 391,
            "type": "cites",
            "value": 3
        },
        {
            "source": 392,
            "target": 233,
            "type": "cites",
            "value": 5
        },
        {
            "source": 393,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 394,
            "target": 233,
            "type": "cites",
            "value": 8
        },
        {
            "source": 243,
            "target": 233,
            "type": "cites",
            "value": 10
        },
        {
            "source": 243,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 392,
            "target": 243,
            "type": "cites",
            "value": 4
        },
        {
            "source": 392,
            "target": 395,
            "type": "cites",
            "value": 3
        },
        {
            "source": 393,
            "target": 243,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 243,
            "type": "cites",
            "value": 7
        },
        {
            "source": 394,
            "target": 396,
            "type": "cites",
            "value": 4
        },
        {
            "source": 394,
            "target": 395,
            "type": "cites",
            "value": 5
        },
        {
            "source": 243,
            "target": 396,
            "type": "cites",
            "value": 5
        },
        {
            "source": 243,
            "target": 395,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 243,
            "type": "cites",
            "value": 9
        },
        {
            "source": 233,
            "target": 396,
            "type": "cites",
            "value": 5
        },
        {
            "source": 233,
            "target": 395,
            "type": "cites",
            "value": 6
        },
        {
            "source": 243,
            "target": 394,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 394,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 243,
            "target": 225,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 317,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 225,
            "type": "cites",
            "value": 5
        },
        {
            "source": 392,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 392,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 392,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 392,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 392,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 392,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 393,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 394,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 394,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 323,
            "type": "cites",
            "value": 4
        },
        {
            "source": 394,
            "target": 244,
            "type": "cites",
            "value": 9
        },
        {
            "source": 243,
            "target": 320,
            "type": "cites",
            "value": 7
        },
        {
            "source": 243,
            "target": 321,
            "type": "cites",
            "value": 5
        },
        {
            "source": 243,
            "target": 363,
            "type": "cites",
            "value": 4
        },
        {
            "source": 243,
            "target": 323,
            "type": "cites",
            "value": 6
        },
        {
            "source": 243,
            "target": 244,
            "type": "cites",
            "value": 14
        },
        {
            "source": 233,
            "target": 320,
            "type": "cites",
            "value": 8
        },
        {
            "source": 233,
            "target": 321,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 323,
            "type": "cites",
            "value": 7
        },
        {
            "source": 233,
            "target": 244,
            "type": "cites",
            "value": 12
        },
        {
            "source": 392,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 243,
            "target": 322,
            "type": "cites",
            "value": 5
        },
        {
            "source": 233,
            "target": 322,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 324,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 367,
            "type": "cites",
            "value": 3
        },
        {
            "source": 392,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 394,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 243,
            "target": 328,
            "type": "cites",
            "value": 4
        },
        {
            "source": 243,
            "target": 14,
            "type": "cites",
            "value": 9
        },
        {
            "source": 233,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 394,
            "target": 398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 243,
            "target": 25,
            "type": "cites",
            "value": 5
        },
        {
            "source": 243,
            "target": 397,
            "type": "cites",
            "value": 4
        },
        {
            "source": 243,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 243,
            "target": 398,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 25,
            "type": "cites",
            "value": 7
        },
        {
            "source": 233,
            "target": 397,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 195,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 26,
            "type": "cites",
            "value": 8
        },
        {
            "source": 233,
            "target": 398,
            "type": "cites",
            "value": 6
        },
        {
            "source": 394,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 399,
            "type": "cites",
            "value": 3
        },
        {
            "source": 400,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 400,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 83,
            "type": "cites",
            "value": 8
        },
        {
            "source": 187,
            "target": 132,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 52,
            "type": "cites",
            "value": 9
        },
        {
            "source": 187,
            "target": 4,
            "type": "cites",
            "value": 22
        },
        {
            "source": 400,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 400,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 401,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 73,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 7,
            "type": "cites",
            "value": 21
        },
        {
            "source": 187,
            "target": 345,
            "type": "cites",
            "value": 3
        },
        {
            "source": 402,
            "target": 403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 402,
            "target": 136,
            "type": "cites",
            "value": 4
        },
        {
            "source": 402,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 402,
            "target": 404,
            "type": "cites",
            "value": 4
        },
        {
            "source": 405,
            "target": 403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 405,
            "target": 136,
            "type": "cites",
            "value": 4
        },
        {
            "source": 405,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 405,
            "target": 404,
            "type": "cites",
            "value": 6
        },
        {
            "source": 27,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 328,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 322,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 406,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 25,
            "type": "cites",
            "value": 10
        },
        {
            "source": 14,
            "target": 397,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 26,
            "type": "cites",
            "value": 23
        },
        {
            "source": 14,
            "target": 398,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 407,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 408,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 409,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 87,
            "type": "cites",
            "value": 22
        },
        {
            "source": 14,
            "target": 86,
            "type": "cites",
            "value": 14
        },
        {
            "source": 14,
            "target": 410,
            "type": "cites",
            "value": 5
        },
        {
            "source": 411,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 412,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 7,
            "type": "cites",
            "value": 23
        },
        {
            "source": 204,
            "target": 194,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 38,
            "type": "cites",
            "value": 7
        },
        {
            "source": 204,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 29,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 220,
            "type": "cites",
            "value": 9
        },
        {
            "source": 413,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 20,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 20,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 20,
            "target": 220,
            "type": "cites",
            "value": 5
        },
        {
            "source": 29,
            "target": 414,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 29,
            "target": 265,
            "type": "cites",
            "value": 6
        },
        {
            "source": 29,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 29,
            "target": 415,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 416,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 318,
            "type": "cites",
            "value": 3
        },
        {
            "source": 20,
            "target": 417,
            "type": "cites",
            "value": 4
        },
        {
            "source": 29,
            "target": 325,
            "type": "cites",
            "value": 4
        },
        {
            "source": 29,
            "target": 31,
            "type": "cites",
            "value": 6
        },
        {
            "source": 29,
            "target": 30,
            "type": "cites",
            "value": 8
        },
        {
            "source": 29,
            "target": 22,
            "type": "cites",
            "value": 21
        },
        {
            "source": 413,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 413,
            "target": 31,
            "type": "cites",
            "value": 3
        },
        {
            "source": 418,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 244,
            "type": "cites",
            "value": 22
        },
        {
            "source": 29,
            "target": 419,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 420,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 42,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 268,
            "target": 42,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 421,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 42,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 421,
            "type": "cites",
            "value": 13
        },
        {
            "source": 294,
            "target": 283,
            "type": "cites",
            "value": 13
        },
        {
            "source": 294,
            "target": 422,
            "type": "cites",
            "value": 9
        },
        {
            "source": 294,
            "target": 423,
            "type": "cites",
            "value": 9
        },
        {
            "source": 294,
            "target": 103,
            "type": "cites",
            "value": 29
        },
        {
            "source": 424,
            "target": 103,
            "type": "cites",
            "value": 8
        },
        {
            "source": 265,
            "target": 103,
            "type": "cites",
            "value": 9
        },
        {
            "source": 268,
            "target": 294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 268,
            "target": 283,
            "type": "cites",
            "value": 8
        },
        {
            "source": 268,
            "target": 422,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 423,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 103,
            "type": "cites",
            "value": 46
        },
        {
            "source": 246,
            "target": 294,
            "type": "cites",
            "value": 7
        },
        {
            "source": 246,
            "target": 283,
            "type": "cites",
            "value": 10
        },
        {
            "source": 246,
            "target": 422,
            "type": "cites",
            "value": 9
        },
        {
            "source": 246,
            "target": 423,
            "type": "cites",
            "value": 9
        },
        {
            "source": 246,
            "target": 103,
            "type": "cites",
            "value": 34
        },
        {
            "source": 425,
            "target": 294,
            "type": "cites",
            "value": 6
        },
        {
            "source": 425,
            "target": 283,
            "type": "cites",
            "value": 10
        },
        {
            "source": 425,
            "target": 422,
            "type": "cites",
            "value": 7
        },
        {
            "source": 425,
            "target": 423,
            "type": "cites",
            "value": 7
        },
        {
            "source": 425,
            "target": 103,
            "type": "cites",
            "value": 31
        },
        {
            "source": 283,
            "target": 294,
            "type": "cites",
            "value": 9
        },
        {
            "source": 283,
            "target": 422,
            "type": "cites",
            "value": 11
        },
        {
            "source": 283,
            "target": 423,
            "type": "cites",
            "value": 11
        },
        {
            "source": 283,
            "target": 103,
            "type": "cites",
            "value": 37
        },
        {
            "source": 103,
            "target": 294,
            "type": "cites",
            "value": 19
        },
        {
            "source": 103,
            "target": 283,
            "type": "cites",
            "value": 32
        },
        {
            "source": 103,
            "target": 426,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 427,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 428,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 422,
            "type": "cites",
            "value": 25
        },
        {
            "source": 103,
            "target": 423,
            "type": "cites",
            "value": 25
        },
        {
            "source": 294,
            "target": 251,
            "type": "cites",
            "value": 10
        },
        {
            "source": 294,
            "target": 425,
            "type": "cites",
            "value": 3
        },
        {
            "source": 294,
            "target": 22,
            "type": "cites",
            "value": 13
        },
        {
            "source": 265,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 251,
            "type": "cites",
            "value": 14
        },
        {
            "source": 268,
            "target": 22,
            "type": "cites",
            "value": 9
        },
        {
            "source": 246,
            "target": 251,
            "type": "cites",
            "value": 7
        },
        {
            "source": 246,
            "target": 425,
            "type": "cites",
            "value": 5
        },
        {
            "source": 246,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 425,
            "target": 251,
            "type": "cites",
            "value": 12
        },
        {
            "source": 425,
            "target": 22,
            "type": "cites",
            "value": 15
        },
        {
            "source": 283,
            "target": 251,
            "type": "cites",
            "value": 11
        },
        {
            "source": 283,
            "target": 425,
            "type": "cites",
            "value": 5
        },
        {
            "source": 283,
            "target": 22,
            "type": "cites",
            "value": 15
        },
        {
            "source": 103,
            "target": 251,
            "type": "cites",
            "value": 58
        },
        {
            "source": 103,
            "target": 425,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 22,
            "type": "cites",
            "value": 45
        },
        {
            "source": 294,
            "target": 24,
            "type": "cites",
            "value": 3
        },
        {
            "source": 294,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 391,
            "type": "cites",
            "value": 7
        },
        {
            "source": 265,
            "target": 24,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 24,
            "type": "cites",
            "value": 5
        },
        {
            "source": 283,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 24,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 26,
            "type": "cites",
            "value": 14
        },
        {
            "source": 294,
            "target": 249,
            "type": "cites",
            "value": 6
        },
        {
            "source": 294,
            "target": 235,
            "type": "cites",
            "value": 7
        },
        {
            "source": 294,
            "target": 233,
            "type": "cites",
            "value": 6
        },
        {
            "source": 268,
            "target": 249,
            "type": "cites",
            "value": 6
        },
        {
            "source": 268,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 249,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 249,
            "type": "cites",
            "value": 7
        },
        {
            "source": 283,
            "target": 249,
            "type": "cites",
            "value": 8
        },
        {
            "source": 283,
            "target": 235,
            "type": "cites",
            "value": 10
        },
        {
            "source": 283,
            "target": 233,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 249,
            "type": "cites",
            "value": 21
        },
        {
            "source": 103,
            "target": 235,
            "type": "cites",
            "value": 23
        },
        {
            "source": 103,
            "target": 233,
            "type": "cites",
            "value": 17
        },
        {
            "source": 103,
            "target": 429,
            "type": "cites",
            "value": 5
        },
        {
            "source": 294,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 265,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 265,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 244,
            "type": "cites",
            "value": 10
        },
        {
            "source": 268,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 244,
            "type": "cites",
            "value": 17
        },
        {
            "source": 246,
            "target": 320,
            "type": "cites",
            "value": 8
        },
        {
            "source": 425,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 425,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 244,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 244,
            "type": "cites",
            "value": 58
        },
        {
            "source": 103,
            "target": 430,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 320,
            "type": "cites",
            "value": 20
        },
        {
            "source": 103,
            "target": 431,
            "type": "cites",
            "value": 5
        },
        {
            "source": 246,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 425,
            "target": 276,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 276,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 187,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 92,
            "type": "cites",
            "value": 5
        },
        {
            "source": 294,
            "target": 14,
            "type": "cites",
            "value": 14
        },
        {
            "source": 265,
            "target": 14,
            "type": "cites",
            "value": 9
        },
        {
            "source": 268,
            "target": 14,
            "type": "cites",
            "value": 19
        },
        {
            "source": 246,
            "target": 14,
            "type": "cites",
            "value": 18
        },
        {
            "source": 425,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 283,
            "target": 14,
            "type": "cites",
            "value": 22
        },
        {
            "source": 103,
            "target": 14,
            "type": "cites",
            "value": 81
        },
        {
            "source": 265,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 246,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 432,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 204,
            "type": "cites",
            "value": 18
        },
        {
            "source": 294,
            "target": 246,
            "type": "cites",
            "value": 7
        },
        {
            "source": 294,
            "target": 124,
            "type": "cites",
            "value": 7
        },
        {
            "source": 424,
            "target": 246,
            "type": "cites",
            "value": 4
        },
        {
            "source": 424,
            "target": 124,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 246,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 336,
            "type": "cites",
            "value": 5
        },
        {
            "source": 265,
            "target": 124,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 232,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 246,
            "type": "cites",
            "value": 13
        },
        {
            "source": 268,
            "target": 124,
            "type": "cites",
            "value": 12
        },
        {
            "source": 246,
            "target": 336,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 124,
            "type": "cites",
            "value": 9
        },
        {
            "source": 246,
            "target": 232,
            "type": "cites",
            "value": 6
        },
        {
            "source": 425,
            "target": 246,
            "type": "cites",
            "value": 5
        },
        {
            "source": 425,
            "target": 124,
            "type": "cites",
            "value": 5
        },
        {
            "source": 283,
            "target": 246,
            "type": "cites",
            "value": 8
        },
        {
            "source": 283,
            "target": 124,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 246,
            "type": "cites",
            "value": 26
        },
        {
            "source": 103,
            "target": 433,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 434,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 435,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 336,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 124,
            "type": "cites",
            "value": 25
        },
        {
            "source": 103,
            "target": 436,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 232,
            "type": "cites",
            "value": 10
        },
        {
            "source": 268,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 102,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 437,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 438,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 102,
            "type": "cites",
            "value": 31
        },
        {
            "source": 294,
            "target": 248,
            "type": "cites",
            "value": 4
        },
        {
            "source": 268,
            "target": 248,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 248,
            "type": "cites",
            "value": 4
        },
        {
            "source": 425,
            "target": 248,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 248,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 247,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 248,
            "type": "cites",
            "value": 12
        },
        {
            "source": 268,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 293,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 38,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 4,
            "type": "cites",
            "value": 14
        },
        {
            "source": 294,
            "target": 439,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 439,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 439,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 440,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 178,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 439,
            "type": "cites",
            "value": 6
        },
        {
            "source": 294,
            "target": 441,
            "type": "cites",
            "value": 3
        },
        {
            "source": 294,
            "target": 268,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 441,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 442,
            "type": "cites",
            "value": 7
        },
        {
            "source": 246,
            "target": 268,
            "type": "cites",
            "value": 6
        },
        {
            "source": 425,
            "target": 268,
            "type": "cites",
            "value": 6
        },
        {
            "source": 283,
            "target": 441,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 268,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 441,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 442,
            "type": "cites",
            "value": 18
        },
        {
            "source": 103,
            "target": 268,
            "type": "cites",
            "value": 44
        },
        {
            "source": 268,
            "target": 443,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 444,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 443,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 444,
            "type": "cites",
            "value": 6
        },
        {
            "source": 294,
            "target": 30,
            "type": "cites",
            "value": 5
        },
        {
            "source": 425,
            "target": 30,
            "type": "cites",
            "value": 7
        },
        {
            "source": 283,
            "target": 30,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 30,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 445,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 71,
            "type": "cites",
            "value": 9
        },
        {
            "source": 62,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 377,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 167,
            "type": "cites",
            "value": 5
        },
        {
            "source": 62,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 167,
            "type": "cites",
            "value": 5
        },
        {
            "source": 378,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 373,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 62,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 378,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 372,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 377,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 62,
            "target": 379,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 378,
            "type": "cites",
            "value": 5
        },
        {
            "source": 378,
            "target": 379,
            "type": "cites",
            "value": 8
        },
        {
            "source": 372,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 447,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 4,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 4,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 4,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 3,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 170,
            "type": "cites",
            "value": 16
        },
        {
            "source": 178,
            "target": 171,
            "type": "cites",
            "value": 16
        },
        {
            "source": 178,
            "target": 448,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 41,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 449,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 450,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 179,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 204,
            "type": "cites",
            "value": 7
        },
        {
            "source": 178,
            "target": 231,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 304,
            "type": "cites",
            "value": 7
        },
        {
            "source": 178,
            "target": 80,
            "type": "cites",
            "value": 12
        },
        {
            "source": 178,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 244,
            "type": "cites",
            "value": 28
        },
        {
            "source": 178,
            "target": 103,
            "type": "cites",
            "value": 14
        },
        {
            "source": 178,
            "target": 430,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 320,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 377,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 451,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 440,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 14,
            "type": "cites",
            "value": 24
        },
        {
            "source": 452,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 452,
            "target": 453,
            "type": "cites",
            "value": 3
        },
        {
            "source": 452,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 452,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 425,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 182,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 91,
            "type": "cites",
            "value": 15
        },
        {
            "source": 251,
            "target": 259,
            "type": "cites",
            "value": 8
        },
        {
            "source": 251,
            "target": 265,
            "type": "cites",
            "value": 8
        },
        {
            "source": 251,
            "target": 125,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 454,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 137,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 259,
            "type": "cites",
            "value": 9
        },
        {
            "source": 242,
            "target": 265,
            "type": "cites",
            "value": 9
        },
        {
            "source": 242,
            "target": 125,
            "type": "cites",
            "value": 10
        },
        {
            "source": 242,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 454,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 259,
            "type": "cites",
            "value": 12
        },
        {
            "source": 103,
            "target": 265,
            "type": "cites",
            "value": 14
        },
        {
            "source": 103,
            "target": 125,
            "type": "cites",
            "value": 20
        },
        {
            "source": 103,
            "target": 228,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 50,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 455,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 46,
            "type": "cites",
            "value": 8
        },
        {
            "source": 242,
            "target": 50,
            "type": "cites",
            "value": 7
        },
        {
            "source": 242,
            "target": 455,
            "type": "cites",
            "value": 8
        },
        {
            "source": 242,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 50,
            "type": "cites",
            "value": 13
        },
        {
            "source": 103,
            "target": 455,
            "type": "cites",
            "value": 15
        },
        {
            "source": 103,
            "target": 46,
            "type": "cites",
            "value": 15
        },
        {
            "source": 251,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 456,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 370,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 457,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 456,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 370,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 457,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 458,
            "type": "cites",
            "value": 11
        },
        {
            "source": 251,
            "target": 459,
            "type": "cites",
            "value": 4
        },
        {
            "source": 251,
            "target": 460,
            "type": "cites",
            "value": 4
        },
        {
            "source": 251,
            "target": 32,
            "type": "cites",
            "value": 13
        },
        {
            "source": 242,
            "target": 458,
            "type": "cites",
            "value": 12
        },
        {
            "source": 242,
            "target": 459,
            "type": "cites",
            "value": 4
        },
        {
            "source": 242,
            "target": 460,
            "type": "cites",
            "value": 4
        },
        {
            "source": 242,
            "target": 32,
            "type": "cites",
            "value": 12
        },
        {
            "source": 425,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 458,
            "type": "cites",
            "value": 18
        },
        {
            "source": 103,
            "target": 459,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 460,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 32,
            "type": "cites",
            "value": 25
        },
        {
            "source": 251,
            "target": 461,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 462,
            "type": "cites",
            "value": 7
        },
        {
            "source": 242,
            "target": 461,
            "type": "cites",
            "value": 7
        },
        {
            "source": 242,
            "target": 462,
            "type": "cites",
            "value": 7
        },
        {
            "source": 425,
            "target": 462,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 461,
            "type": "cites",
            "value": 13
        },
        {
            "source": 103,
            "target": 462,
            "type": "cites",
            "value": 14
        },
        {
            "source": 251,
            "target": 260,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 261,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 262,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 263,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 264,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 170,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 260,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 261,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 262,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 263,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 264,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 170,
            "type": "cites",
            "value": 7
        },
        {
            "source": 242,
            "target": 266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 425,
            "target": 170,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 260,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 261,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 262,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 263,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 264,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 170,
            "type": "cites",
            "value": 22
        },
        {
            "source": 103,
            "target": 266,
            "type": "cites",
            "value": 7
        },
        {
            "source": 463,
            "target": 267,
            "type": "cites",
            "value": 4
        },
        {
            "source": 463,
            "target": 242,
            "type": "cites",
            "value": 5
        },
        {
            "source": 463,
            "target": 251,
            "type": "cites",
            "value": 4
        },
        {
            "source": 463,
            "target": 103,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 267,
            "type": "cites",
            "value": 26
        },
        {
            "source": 251,
            "target": 464,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 242,
            "type": "cites",
            "value": 32
        },
        {
            "source": 251,
            "target": 103,
            "type": "cites",
            "value": 66
        },
        {
            "source": 465,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 465,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 465,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 267,
            "type": "cites",
            "value": 21
        },
        {
            "source": 242,
            "target": 464,
            "type": "cites",
            "value": 4
        },
        {
            "source": 242,
            "target": 251,
            "type": "cites",
            "value": 18
        },
        {
            "source": 242,
            "target": 103,
            "type": "cites",
            "value": 40
        },
        {
            "source": 425,
            "target": 267,
            "type": "cites",
            "value": 5
        },
        {
            "source": 425,
            "target": 242,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 267,
            "type": "cites",
            "value": 47
        },
        {
            "source": 103,
            "target": 464,
            "type": "cites",
            "value": 10
        },
        {
            "source": 103,
            "target": 242,
            "type": "cites",
            "value": 56
        },
        {
            "source": 251,
            "target": 466,
            "type": "cites",
            "value": 6
        },
        {
            "source": 251,
            "target": 249,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 467,
            "type": "cites",
            "value": 6
        },
        {
            "source": 242,
            "target": 466,
            "type": "cites",
            "value": 4
        },
        {
            "source": 242,
            "target": 249,
            "type": "cites",
            "value": 4
        },
        {
            "source": 242,
            "target": 467,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 466,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 467,
            "type": "cites",
            "value": 9
        },
        {
            "source": 251,
            "target": 468,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 468,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 468,
            "type": "cites",
            "value": 8
        },
        {
            "source": 463,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 268,
            "type": "cites",
            "value": 20
        },
        {
            "source": 242,
            "target": 268,
            "type": "cites",
            "value": 15
        },
        {
            "source": 251,
            "target": 269,
            "type": "cites",
            "value": 11
        },
        {
            "source": 251,
            "target": 270,
            "type": "cites",
            "value": 11
        },
        {
            "source": 251,
            "target": 22,
            "type": "cites",
            "value": 15
        },
        {
            "source": 242,
            "target": 269,
            "type": "cites",
            "value": 9
        },
        {
            "source": 242,
            "target": 270,
            "type": "cites",
            "value": 9
        },
        {
            "source": 242,
            "target": 22,
            "type": "cites",
            "value": 10
        },
        {
            "source": 425,
            "target": 269,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 270,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 269,
            "type": "cites",
            "value": 24
        },
        {
            "source": 103,
            "target": 270,
            "type": "cites",
            "value": 24
        },
        {
            "source": 228,
            "target": 469,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 470,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 471,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 472,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 257,
            "type": "cites",
            "value": 7
        },
        {
            "source": 228,
            "target": 257,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 473,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 474,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 475,
            "type": "cites",
            "value": 3
        },
        {
            "source": 476,
            "target": 228,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 477,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 478,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 0,
            "type": "cites",
            "value": 15
        },
        {
            "source": 220,
            "target": 228,
            "type": "cites",
            "value": 12
        },
        {
            "source": 228,
            "target": 0,
            "type": "cites",
            "value": 11
        },
        {
            "source": 228,
            "target": 220,
            "type": "cites",
            "value": 8
        },
        {
            "source": 220,
            "target": 300,
            "type": "cites",
            "value": 6
        },
        {
            "source": 228,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 480,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 221,
            "type": "cites",
            "value": 10
        },
        {
            "source": 228,
            "target": 221,
            "type": "cites",
            "value": 5
        },
        {
            "source": 117,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 481,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 481,
            "target": 7,
            "type": "cites",
            "value": 10
        },
        {
            "source": 117,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 481,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 83,
            "target": 121,
            "type": "cites",
            "value": 4
        },
        {
            "source": 481,
            "target": 205,
            "type": "cites",
            "value": 3
        },
        {
            "source": 117,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 481,
            "target": 77,
            "type": "cites",
            "value": 5
        },
        {
            "source": 117,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 481,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 68,
            "type": "cites",
            "value": 7
        },
        {
            "source": 433,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 103,
            "type": "cites",
            "value": 6
        },
        {
            "source": 251,
            "target": 294,
            "type": "cites",
            "value": 6
        },
        {
            "source": 251,
            "target": 283,
            "type": "cites",
            "value": 10
        },
        {
            "source": 251,
            "target": 422,
            "type": "cites",
            "value": 8
        },
        {
            "source": 251,
            "target": 423,
            "type": "cites",
            "value": 8
        },
        {
            "source": 251,
            "target": 338,
            "type": "cites",
            "value": 6
        },
        {
            "source": 96,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 338,
            "type": "cites",
            "value": 9
        },
        {
            "source": 102,
            "target": 80,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 218,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 88,
            "type": "cites",
            "value": 4
        },
        {
            "source": 251,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 80,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 218,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 88,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 231,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 304,
            "type": "cites",
            "value": 14
        },
        {
            "source": 103,
            "target": 80,
            "type": "cites",
            "value": 25
        },
        {
            "source": 251,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 96,
            "target": 26,
            "type": "cites",
            "value": 10
        },
        {
            "source": 103,
            "target": 200,
            "type": "cites",
            "value": 10
        },
        {
            "source": 102,
            "target": 53,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 53,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 482,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 482,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 303,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 303,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 132,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 132,
            "target": 46,
            "type": "cites",
            "value": 8
        },
        {
            "source": 132,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 83,
            "type": "cites",
            "value": 5
        },
        {
            "source": 132,
            "target": 52,
            "type": "cites",
            "value": 9
        },
        {
            "source": 483,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 483,
            "target": 485,
            "type": "cites",
            "value": 4
        },
        {
            "source": 483,
            "target": 486,
            "type": "cites",
            "value": 6
        },
        {
            "source": 483,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 483,
            "target": 487,
            "type": "cites",
            "value": 3
        },
        {
            "source": 488,
            "target": 153,
            "type": "cites",
            "value": 4
        },
        {
            "source": 488,
            "target": 334,
            "type": "cites",
            "value": 5
        },
        {
            "source": 489,
            "target": 153,
            "type": "cites",
            "value": 3
        },
        {
            "source": 489,
            "target": 334,
            "type": "cites",
            "value": 3
        },
        {
            "source": 490,
            "target": 153,
            "type": "cites",
            "value": 3
        },
        {
            "source": 490,
            "target": 334,
            "type": "cites",
            "value": 3
        },
        {
            "source": 491,
            "target": 153,
            "type": "cites",
            "value": 3
        },
        {
            "source": 491,
            "target": 334,
            "type": "cites",
            "value": 3
        },
        {
            "source": 492,
            "target": 153,
            "type": "cites",
            "value": 3
        },
        {
            "source": 492,
            "target": 334,
            "type": "cites",
            "value": 3
        },
        {
            "source": 493,
            "target": 153,
            "type": "cites",
            "value": 3
        },
        {
            "source": 493,
            "target": 334,
            "type": "cites",
            "value": 3
        },
        {
            "source": 488,
            "target": 151,
            "type": "cites",
            "value": 7
        },
        {
            "source": 494,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 495,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 494,
            "target": 381,
            "type": "cites",
            "value": 4
        },
        {
            "source": 494,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 496,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 495,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 497,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 498,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 497,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 499,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 80,
            "target": 202,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 116,
            "type": "cites",
            "value": 4
        },
        {
            "source": 498,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 498,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 477,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 228,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 229,
            "type": "cites",
            "value": 4
        },
        {
            "source": 498,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 500,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 7,
            "target": 55,
            "type": "cites",
            "value": 17
        },
        {
            "source": 7,
            "target": 56,
            "type": "cites",
            "value": 7
        },
        {
            "source": 7,
            "target": 501,
            "type": "cites",
            "value": 10
        },
        {
            "source": 7,
            "target": 205,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 83,
            "type": "cites",
            "value": 6
        },
        {
            "source": 7,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 3,
            "type": "cites",
            "value": 4
        },
        {
            "source": 7,
            "target": 4,
            "type": "cites",
            "value": 8
        },
        {
            "source": 502,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 414,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 503,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 235,
            "type": "cites",
            "value": 7
        },
        {
            "source": 502,
            "target": 233,
            "type": "cites",
            "value": 7
        },
        {
            "source": 502,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 504,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 38,
            "type": "cites",
            "value": 7
        },
        {
            "source": 222,
            "target": 341,
            "type": "cites",
            "value": 3
        },
        {
            "source": 505,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 506,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 506,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 507,
            "target": 1,
            "type": "cites",
            "value": 7
        },
        {
            "source": 507,
            "target": 111,
            "type": "cites",
            "value": 7
        },
        {
            "source": 391,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 391,
            "target": 301,
            "type": "cites",
            "value": 5
        },
        {
            "source": 391,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 391,
            "target": 26,
            "type": "cites",
            "value": 9
        },
        {
            "source": 71,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 508,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 509,
            "target": 404,
            "type": "cites",
            "value": 3
        },
        {
            "source": 510,
            "target": 136,
            "type": "cites",
            "value": 4
        },
        {
            "source": 510,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 511,
            "target": 403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 511,
            "target": 136,
            "type": "cites",
            "value": 11
        },
        {
            "source": 511,
            "target": 512,
            "type": "cites",
            "value": 4
        },
        {
            "source": 511,
            "target": 26,
            "type": "cites",
            "value": 12
        },
        {
            "source": 511,
            "target": 404,
            "type": "cites",
            "value": 9
        },
        {
            "source": 26,
            "target": 403,
            "type": "cites",
            "value": 9
        },
        {
            "source": 26,
            "target": 25,
            "type": "cites",
            "value": 12
        },
        {
            "source": 26,
            "target": 512,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 511,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 404,
            "type": "cites",
            "value": 22
        },
        {
            "source": 136,
            "target": 403,
            "type": "cites",
            "value": 9
        },
        {
            "source": 136,
            "target": 25,
            "type": "cites",
            "value": 6
        },
        {
            "source": 136,
            "target": 512,
            "type": "cites",
            "value": 8
        },
        {
            "source": 136,
            "target": 511,
            "type": "cites",
            "value": 8
        },
        {
            "source": 136,
            "target": 404,
            "type": "cites",
            "value": 22
        },
        {
            "source": 510,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 26,
            "target": 63,
            "type": "cites",
            "value": 12
        },
        {
            "source": 26,
            "target": 513,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 514,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 515,
            "type": "cites",
            "value": 10
        },
        {
            "source": 178,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 178,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 41,
            "target": 103,
            "type": "cites",
            "value": 12
        },
        {
            "source": 178,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 41,
            "target": 235,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 235,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 344,
            "target": 46,
            "type": "cites",
            "value": 6
        },
        {
            "source": 344,
            "target": 516,
            "type": "cites",
            "value": 3
        },
        {
            "source": 344,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 344,
            "target": 517,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 516,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 198,
            "type": "cites",
            "value": 5
        },
        {
            "source": 52,
            "target": 517,
            "type": "cites",
            "value": 4
        },
        {
            "source": 344,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 341,
            "type": "cites",
            "value": 8
        },
        {
            "source": 52,
            "target": 518,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 519,
            "type": "cites",
            "value": 3
        },
        {
            "source": 344,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 345,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 52,
            "target": 86,
            "type": "cites",
            "value": 7
        },
        {
            "source": 52,
            "target": 87,
            "type": "cites",
            "value": 10
        },
        {
            "source": 52,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 520,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 521,
            "type": "cites",
            "value": 3
        },
        {
            "source": 522,
            "target": 8,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 8,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 151,
            "type": "cites",
            "value": 9
        },
        {
            "source": 7,
            "target": 475,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 523,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 524,
            "type": "cites",
            "value": 5
        },
        {
            "source": 176,
            "target": 290,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 290,
            "type": "cites",
            "value": 3
        },
        {
            "source": 525,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 403,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 136,
            "type": "cites",
            "value": 20
        },
        {
            "source": 404,
            "target": 26,
            "type": "cites",
            "value": 23
        },
        {
            "source": 526,
            "target": 136,
            "type": "cites",
            "value": 4
        },
        {
            "source": 526,
            "target": 26,
            "type": "cites",
            "value": 12
        },
        {
            "source": 527,
            "target": 136,
            "type": "cites",
            "value": 4
        },
        {
            "source": 527,
            "target": 26,
            "type": "cites",
            "value": 8
        },
        {
            "source": 528,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 529,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 529,
            "type": "cites",
            "value": 8
        },
        {
            "source": 136,
            "target": 529,
            "type": "cites",
            "value": 7
        },
        {
            "source": 526,
            "target": 527,
            "type": "cites",
            "value": 3
        },
        {
            "source": 527,
            "target": 526,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 526,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 527,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 116,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 530,
            "type": "cites",
            "value": 9
        },
        {
            "source": 404,
            "target": 190,
            "type": "cites",
            "value": 13
        },
        {
            "source": 26,
            "target": 530,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 190,
            "type": "cites",
            "value": 9
        },
        {
            "source": 136,
            "target": 530,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 190,
            "type": "cites",
            "value": 7
        },
        {
            "source": 531,
            "target": 43,
            "type": "cites",
            "value": 3
        },
        {
            "source": 532,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 532,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 531,
            "target": 12,
            "type": "cites",
            "value": 10
        },
        {
            "source": 533,
            "target": 72,
            "type": "cites",
            "value": 7
        },
        {
            "source": 534,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 533,
            "target": 281,
            "type": "cites",
            "value": 3
        },
        {
            "source": 534,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 535,
            "target": 536,
            "type": "cites",
            "value": 4
        },
        {
            "source": 537,
            "target": 538,
            "type": "cites",
            "value": 4
        },
        {
            "source": 537,
            "target": 204,
            "type": "cites",
            "value": 6
        },
        {
            "source": 539,
            "target": 537,
            "type": "cites",
            "value": 3
        },
        {
            "source": 537,
            "target": 540,
            "type": "cites",
            "value": 11
        },
        {
            "source": 537,
            "target": 541,
            "type": "cites",
            "value": 15
        },
        {
            "source": 537,
            "target": 542,
            "type": "cites",
            "value": 8
        },
        {
            "source": 537,
            "target": 543,
            "type": "cites",
            "value": 3
        },
        {
            "source": 537,
            "target": 190,
            "type": "cites",
            "value": 6
        },
        {
            "source": 153,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 153,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 544,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 545,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 204,
            "type": "cites",
            "value": 13
        },
        {
            "source": 189,
            "target": 546,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 547,
            "type": "cites",
            "value": 14
        },
        {
            "source": 189,
            "target": 484,
            "type": "cites",
            "value": 11
        },
        {
            "source": 548,
            "target": 549,
            "type": "cites",
            "value": 3
        },
        {
            "source": 550,
            "target": 549,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 549,
            "type": "cites",
            "value": 3
        },
        {
            "source": 548,
            "target": 486,
            "type": "cites",
            "value": 10
        },
        {
            "source": 551,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 550,
            "target": 486,
            "type": "cites",
            "value": 10
        },
        {
            "source": 404,
            "target": 486,
            "type": "cites",
            "value": 10
        },
        {
            "source": 548,
            "target": 404,
            "type": "cites",
            "value": 7
        },
        {
            "source": 548,
            "target": 552,
            "type": "cites",
            "value": 4
        },
        {
            "source": 548,
            "target": 550,
            "type": "cites",
            "value": 5
        },
        {
            "source": 551,
            "target": 404,
            "type": "cites",
            "value": 4
        },
        {
            "source": 551,
            "target": 552,
            "type": "cites",
            "value": 3
        },
        {
            "source": 551,
            "target": 550,
            "type": "cites",
            "value": 4
        },
        {
            "source": 551,
            "target": 548,
            "type": "cites",
            "value": 4
        },
        {
            "source": 550,
            "target": 404,
            "type": "cites",
            "value": 5
        },
        {
            "source": 550,
            "target": 552,
            "type": "cites",
            "value": 4
        },
        {
            "source": 550,
            "target": 548,
            "type": "cites",
            "value": 5
        },
        {
            "source": 404,
            "target": 552,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 550,
            "type": "cites",
            "value": 5
        },
        {
            "source": 404,
            "target": 548,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 548,
            "target": 553,
            "type": "cites",
            "value": 4
        },
        {
            "source": 548,
            "target": 554,
            "type": "cites",
            "value": 3
        },
        {
            "source": 551,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 550,
            "target": 553,
            "type": "cites",
            "value": 4
        },
        {
            "source": 550,
            "target": 554,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 553,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 554,
            "type": "cites",
            "value": 3
        },
        {
            "source": 548,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 548,
            "target": 555,
            "type": "cites",
            "value": 4
        },
        {
            "source": 548,
            "target": 556,
            "type": "cites",
            "value": 7
        },
        {
            "source": 548,
            "target": 130,
            "type": "cites",
            "value": 4
        },
        {
            "source": 551,
            "target": 556,
            "type": "cites",
            "value": 3
        },
        {
            "source": 550,
            "target": 555,
            "type": "cites",
            "value": 4
        },
        {
            "source": 550,
            "target": 556,
            "type": "cites",
            "value": 7
        },
        {
            "source": 550,
            "target": 130,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 555,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 556,
            "type": "cites",
            "value": 7
        },
        {
            "source": 404,
            "target": 130,
            "type": "cites",
            "value": 5
        },
        {
            "source": 557,
            "target": 558,
            "type": "cites",
            "value": 3
        },
        {
            "source": 559,
            "target": 558,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 558,
            "type": "cites",
            "value": 4
        },
        {
            "source": 557,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 559,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 559,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 560,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 316,
            "type": "cites",
            "value": 10
        },
        {
            "source": 204,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 190,
            "type": "cites",
            "value": 8
        },
        {
            "source": 541,
            "target": 543,
            "type": "cites",
            "value": 3
        },
        {
            "source": 561,
            "target": 562,
            "type": "cites",
            "value": 5
        },
        {
            "source": 561,
            "target": 541,
            "type": "cites",
            "value": 10
        },
        {
            "source": 562,
            "target": 561,
            "type": "cites",
            "value": 4
        },
        {
            "source": 562,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 563,
            "target": 561,
            "type": "cites",
            "value": 5
        },
        {
            "source": 563,
            "target": 562,
            "type": "cites",
            "value": 4
        },
        {
            "source": 563,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 564,
            "target": 561,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 561,
            "type": "cites",
            "value": 5
        },
        {
            "source": 541,
            "target": 562,
            "type": "cites",
            "value": 4
        },
        {
            "source": 561,
            "target": 565,
            "type": "cites",
            "value": 9
        },
        {
            "source": 562,
            "target": 565,
            "type": "cites",
            "value": 7
        },
        {
            "source": 563,
            "target": 565,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 565,
            "type": "cites",
            "value": 6
        },
        {
            "source": 561,
            "target": 566,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 566,
            "type": "cites",
            "value": 3
        },
        {
            "source": 10,
            "target": 12,
            "type": "cites",
            "value": 11
        },
        {
            "source": 467,
            "target": 12,
            "type": "cites",
            "value": 10
        },
        {
            "source": 567,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 7,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 7,
            "target": 202,
            "type": "cites",
            "value": 5
        },
        {
            "source": 12,
            "target": 84,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 83,
            "type": "cites",
            "value": 8
        },
        {
            "source": 12,
            "target": 202,
            "type": "cites",
            "value": 5
        },
        {
            "source": 10,
            "target": 6,
            "type": "cites",
            "value": 4
        },
        {
            "source": 467,
            "target": 10,
            "type": "cites",
            "value": 5
        },
        {
            "source": 12,
            "target": 11,
            "type": "cites",
            "value": 6
        },
        {
            "source": 12,
            "target": 134,
            "type": "cites",
            "value": 6
        },
        {
            "source": 12,
            "target": 7,
            "type": "cites",
            "value": 19
        },
        {
            "source": 10,
            "target": 482,
            "type": "cites",
            "value": 3
        },
        {
            "source": 10,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 482,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 482,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 251,
            "type": "cites",
            "value": 11
        },
        {
            "source": 467,
            "target": 303,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 103,
            "type": "cites",
            "value": 24
        },
        {
            "source": 265,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 0,
            "type": "cites",
            "value": 9
        },
        {
            "source": 265,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 265,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 568,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 569,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 46,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 570,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 571,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 417,
            "type": "cites",
            "value": 6
        },
        {
            "source": 425,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 22,
            "target": 445,
            "type": "cites",
            "value": 6
        },
        {
            "source": 425,
            "target": 572,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 572,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 573,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 574,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 575,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 576,
            "type": "cites",
            "value": 6
        },
        {
            "source": 151,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 151,
            "target": 334,
            "type": "cites",
            "value": 6
        },
        {
            "source": 60,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 151,
            "target": 577,
            "type": "cites",
            "value": 3
        },
        {
            "source": 151,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 578,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 579,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 580,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 579,
            "target": 77,
            "type": "cites",
            "value": 6
        },
        {
            "source": 372,
            "target": 357,
            "type": "cites",
            "value": 3
        },
        {
            "source": 372,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 372,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 581,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 581,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 581,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 581,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 287,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 190,
            "type": "cites",
            "value": 7
        },
        {
            "source": 232,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 582,
            "type": "cites",
            "value": 3
        },
        {
            "source": 583,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 584,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 585,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 586,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 587,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 587,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 533,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 533,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 137,
            "type": "cites",
            "value": 11
        },
        {
            "source": 125,
            "target": 588,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 589,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 135,
            "type": "cites",
            "value": 8
        },
        {
            "source": 125,
            "target": 590,
            "type": "cites",
            "value": 8
        },
        {
            "source": 591,
            "target": 592,
            "type": "cites",
            "value": 6
        },
        {
            "source": 593,
            "target": 591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 594,
            "target": 591,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 595,
            "type": "cites",
            "value": 6
        },
        {
            "source": 596,
            "target": 591,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 556,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 556,
            "type": "cites",
            "value": 8
        },
        {
            "source": 591,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 582,
            "type": "cites",
            "value": 4
        },
        {
            "source": 591,
            "target": 597,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 598,
            "type": "cites",
            "value": 15
        },
        {
            "source": 596,
            "target": 598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 599,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 600,
            "type": "cites",
            "value": 3
        },
        {
            "source": 594,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 594,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 591,
            "target": 540,
            "type": "cites",
            "value": 33
        },
        {
            "source": 591,
            "target": 541,
            "type": "cites",
            "value": 50
        },
        {
            "source": 596,
            "target": 540,
            "type": "cites",
            "value": 8
        },
        {
            "source": 596,
            "target": 541,
            "type": "cites",
            "value": 12
        },
        {
            "source": 591,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 486,
            "type": "cites",
            "value": 8
        },
        {
            "source": 591,
            "target": 601,
            "type": "cites",
            "value": 8
        },
        {
            "source": 12,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 602,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 603,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 604,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 602,
            "target": 10,
            "type": "cites",
            "value": 4
        },
        {
            "source": 603,
            "target": 10,
            "type": "cites",
            "value": 3
        },
        {
            "source": 604,
            "target": 10,
            "type": "cites",
            "value": 4
        },
        {
            "source": 602,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 126,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 345,
            "type": "cites",
            "value": 5
        },
        {
            "source": 132,
            "target": 126,
            "type": "cites",
            "value": 5
        },
        {
            "source": 605,
            "target": 484,
            "type": "cites",
            "value": 7
        },
        {
            "source": 605,
            "target": 556,
            "type": "cites",
            "value": 6
        },
        {
            "source": 605,
            "target": 486,
            "type": "cites",
            "value": 8
        },
        {
            "source": 605,
            "target": 130,
            "type": "cites",
            "value": 4
        },
        {
            "source": 606,
            "target": 598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 601,
            "target": 598,
            "type": "cites",
            "value": 4
        },
        {
            "source": 601,
            "target": 591,
            "type": "cites",
            "value": 3
        },
        {
            "source": 601,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 607,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 607,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 607,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 154,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 155,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 155,
            "target": 188,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 607,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 607,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 154,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 609,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 609,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 609,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 611,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 611,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 611,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 612,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 612,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 612,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 613,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 613,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 613,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 614,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 614,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 614,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 609,
            "target": 611,
            "type": "cites",
            "value": 3
        },
        {
            "source": 609,
            "target": 614,
            "type": "cites",
            "value": 3
        },
        {
            "source": 609,
            "target": 608,
            "type": "cites",
            "value": 11
        },
        {
            "source": 611,
            "target": 608,
            "type": "cites",
            "value": 10
        },
        {
            "source": 612,
            "target": 608,
            "type": "cites",
            "value": 8
        },
        {
            "source": 613,
            "target": 608,
            "type": "cites",
            "value": 8
        },
        {
            "source": 614,
            "target": 608,
            "type": "cites",
            "value": 10
        },
        {
            "source": 608,
            "target": 611,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 614,
            "type": "cites",
            "value": 3
        },
        {
            "source": 609,
            "target": 615,
            "type": "cites",
            "value": 4
        },
        {
            "source": 609,
            "target": 616,
            "type": "cites",
            "value": 3
        },
        {
            "source": 611,
            "target": 615,
            "type": "cites",
            "value": 3
        },
        {
            "source": 612,
            "target": 615,
            "type": "cites",
            "value": 3
        },
        {
            "source": 613,
            "target": 615,
            "type": "cites",
            "value": 3
        },
        {
            "source": 614,
            "target": 615,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 615,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 616,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 617,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 618,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 619,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 620,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 621,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 622,
            "type": "cites",
            "value": 4
        },
        {
            "source": 608,
            "target": 623,
            "type": "cites",
            "value": 4
        },
        {
            "source": 624,
            "target": 625,
            "type": "cites",
            "value": 4
        },
        {
            "source": 624,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 624,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 626,
            "target": 625,
            "type": "cites",
            "value": 4
        },
        {
            "source": 626,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 626,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 312,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 627,
            "type": "cites",
            "value": 5
        },
        {
            "source": 312,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 312,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 628,
            "target": 625,
            "type": "cites",
            "value": 4
        },
        {
            "source": 628,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 628,
            "target": 541,
            "type": "cites",
            "value": 7
        },
        {
            "source": 582,
            "target": 541,
            "type": "cites",
            "value": 7
        },
        {
            "source": 596,
            "target": 625,
            "type": "cites",
            "value": 6
        },
        {
            "source": 628,
            "target": 591,
            "type": "cites",
            "value": 3
        },
        {
            "source": 629,
            "target": 331,
            "type": "cites",
            "value": 6
        },
        {
            "source": 629,
            "target": 630,
            "type": "cites",
            "value": 4
        },
        {
            "source": 629,
            "target": 330,
            "type": "cites",
            "value": 9
        },
        {
            "source": 331,
            "target": 629,
            "type": "cites",
            "value": 3
        },
        {
            "source": 331,
            "target": 630,
            "type": "cites",
            "value": 6
        },
        {
            "source": 330,
            "target": 629,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 630,
            "type": "cites",
            "value": 6
        },
        {
            "source": 629,
            "target": 631,
            "type": "cites",
            "value": 5
        },
        {
            "source": 629,
            "target": 332,
            "type": "cites",
            "value": 3
        },
        {
            "source": 629,
            "target": 632,
            "type": "cites",
            "value": 6
        },
        {
            "source": 331,
            "target": 631,
            "type": "cites",
            "value": 10
        },
        {
            "source": 331,
            "target": 632,
            "type": "cites",
            "value": 8
        },
        {
            "source": 330,
            "target": 631,
            "type": "cites",
            "value": 10
        },
        {
            "source": 330,
            "target": 632,
            "type": "cites",
            "value": 9
        },
        {
            "source": 331,
            "target": 633,
            "type": "cites",
            "value": 4
        },
        {
            "source": 330,
            "target": 633,
            "type": "cites",
            "value": 4
        },
        {
            "source": 629,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 331,
            "target": 627,
            "type": "cites",
            "value": 6
        },
        {
            "source": 331,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 331,
            "target": 634,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 627,
            "type": "cites",
            "value": 6
        },
        {
            "source": 330,
            "target": 313,
            "type": "cites",
            "value": 6
        },
        {
            "source": 330,
            "target": 634,
            "type": "cites",
            "value": 4
        },
        {
            "source": 629,
            "target": 635,
            "type": "cites",
            "value": 4
        },
        {
            "source": 629,
            "target": 636,
            "type": "cites",
            "value": 4
        },
        {
            "source": 331,
            "target": 635,
            "type": "cites",
            "value": 8
        },
        {
            "source": 331,
            "target": 636,
            "type": "cites",
            "value": 8
        },
        {
            "source": 330,
            "target": 635,
            "type": "cites",
            "value": 11
        },
        {
            "source": 330,
            "target": 636,
            "type": "cites",
            "value": 12
        },
        {
            "source": 544,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 595,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 591,
            "type": "cites",
            "value": 8
        },
        {
            "source": 637,
            "target": 591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 638,
            "target": 591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 639,
            "target": 591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 226,
            "target": 591,
            "type": "cites",
            "value": 5
        },
        {
            "source": 226,
            "target": 222,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 245,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 245,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 353,
            "target": 640,
            "type": "cites",
            "value": 3
        },
        {
            "source": 353,
            "target": 641,
            "type": "cites",
            "value": 7
        },
        {
            "source": 353,
            "target": 349,
            "type": "cites",
            "value": 5
        },
        {
            "source": 349,
            "target": 642,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 640,
            "type": "cites",
            "value": 6
        },
        {
            "source": 349,
            "target": 641,
            "type": "cites",
            "value": 18
        },
        {
            "source": 136,
            "target": 455,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 455,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 398,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 398,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 397,
            "type": "cites",
            "value": 7
        },
        {
            "source": 25,
            "target": 26,
            "type": "cites",
            "value": 14
        },
        {
            "source": 25,
            "target": 398,
            "type": "cites",
            "value": 7
        },
        {
            "source": 136,
            "target": 397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 397,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 398,
            "type": "cites",
            "value": 6
        },
        {
            "source": 398,
            "target": 25,
            "type": "cites",
            "value": 5
        },
        {
            "source": 398,
            "target": 397,
            "type": "cites",
            "value": 5
        },
        {
            "source": 398,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 245,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 158,
            "type": "cites",
            "value": 6
        },
        {
            "source": 42,
            "target": 8,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 202,
            "type": "cites",
            "value": 5
        },
        {
            "source": 54,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 43,
            "target": 208,
            "type": "cites",
            "value": 3
        },
        {
            "source": 43,
            "target": 209,
            "type": "cites",
            "value": 3
        },
        {
            "source": 43,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 43,
            "target": 12,
            "type": "cites",
            "value": 9
        },
        {
            "source": 43,
            "target": 643,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 30,
            "target": 244,
            "type": "cites",
            "value": 12
        },
        {
            "source": 30,
            "target": 572,
            "type": "cites",
            "value": 9
        },
        {
            "source": 30,
            "target": 29,
            "type": "cites",
            "value": 17
        },
        {
            "source": 30,
            "target": 445,
            "type": "cites",
            "value": 9
        },
        {
            "source": 644,
            "target": 30,
            "type": "cites",
            "value": 4
        },
        {
            "source": 445,
            "target": 30,
            "type": "cites",
            "value": 10
        },
        {
            "source": 445,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 445,
            "target": 572,
            "type": "cites",
            "value": 3
        },
        {
            "source": 445,
            "target": 29,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 645,
            "type": "cites",
            "value": 8
        },
        {
            "source": 30,
            "target": 22,
            "type": "cites",
            "value": 33
        },
        {
            "source": 644,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 445,
            "target": 645,
            "type": "cites",
            "value": 4
        },
        {
            "source": 445,
            "target": 22,
            "type": "cites",
            "value": 15
        },
        {
            "source": 268,
            "target": 160,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 14,
            "type": "cites",
            "value": 10
        },
        {
            "source": 103,
            "target": 160,
            "type": "cites",
            "value": 8
        },
        {
            "source": 244,
            "target": 14,
            "type": "cites",
            "value": 52
        },
        {
            "source": 14,
            "target": 160,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 303,
            "type": "cites",
            "value": 10
        },
        {
            "source": 14,
            "target": 138,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 265,
            "type": "cites",
            "value": 5
        },
        {
            "source": 160,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 0,
            "type": "cites",
            "value": 11
        },
        {
            "source": 244,
            "target": 0,
            "type": "cites",
            "value": 9
        },
        {
            "source": 244,
            "target": 265,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 265,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 169,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 184,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 185,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 169,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 184,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 182,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 185,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 62,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 52,
            "type": "cites",
            "value": 10
        },
        {
            "source": 244,
            "target": 62,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 52,
            "type": "cites",
            "value": 14
        },
        {
            "source": 14,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 62,
            "type": "cites",
            "value": 8
        },
        {
            "source": 160,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 36,
            "target": 132,
            "type": "cites",
            "value": 7
        },
        {
            "source": 268,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 160,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 132,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 132,
            "type": "cites",
            "value": 11
        },
        {
            "source": 14,
            "target": 126,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 245,
            "type": "cites",
            "value": 9
        },
        {
            "source": 244,
            "target": 245,
            "type": "cites",
            "value": 6
        },
        {
            "source": 36,
            "target": 246,
            "type": "cites",
            "value": 8
        },
        {
            "source": 36,
            "target": 103,
            "type": "cites",
            "value": 17
        },
        {
            "source": 36,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 160,
            "target": 103,
            "type": "cites",
            "value": 8
        },
        {
            "source": 160,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 336,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 103,
            "type": "cites",
            "value": 37
        },
        {
            "source": 244,
            "target": 232,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 435,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 268,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 72,
            "type": "cites",
            "value": 17
        },
        {
            "source": 14,
            "target": 199,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 72,
            "type": "cites",
            "value": 17
        },
        {
            "source": 14,
            "target": 200,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 141,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 283,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 422,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 423,
            "type": "cites",
            "value": 6
        },
        {
            "source": 160,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 321,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 322,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 324,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 323,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 320,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 321,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 324,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 323,
            "type": "cites",
            "value": 7
        },
        {
            "source": 160,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 63,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 63,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 142,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 390,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 225,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 390,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 225,
            "type": "cites",
            "value": 8
        },
        {
            "source": 244,
            "target": 646,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 647,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 648,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 649,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 646,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 647,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 648,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 649,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 451,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 451,
            "type": "cites",
            "value": 7
        },
        {
            "source": 36,
            "target": 442,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 650,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 42,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 36,
            "type": "cites",
            "value": 8
        },
        {
            "source": 103,
            "target": 36,
            "type": "cites",
            "value": 14
        },
        {
            "source": 103,
            "target": 650,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 442,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 650,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 42,
            "type": "cites",
            "value": 7
        },
        {
            "source": 244,
            "target": 235,
            "type": "cites",
            "value": 12
        },
        {
            "source": 36,
            "target": 124,
            "type": "cites",
            "value": 7
        },
        {
            "source": 244,
            "target": 124,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 247,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 499,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 499,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 24,
            "type": "cites",
            "value": 5
        },
        {
            "source": 36,
            "target": 23,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 15,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 23,
            "type": "cites",
            "value": 13
        },
        {
            "source": 244,
            "target": 23,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 295,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 651,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 651,
            "type": "cites",
            "value": 14
        },
        {
            "source": 14,
            "target": 651,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 251,
            "type": "cites",
            "value": 9
        },
        {
            "source": 244,
            "target": 652,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 653,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 267,
            "type": "cites",
            "value": 11
        },
        {
            "source": 268,
            "target": 269,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 242,
            "type": "cites",
            "value": 11
        },
        {
            "source": 268,
            "target": 270,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 267,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 268,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 35,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 37,
            "type": "cites",
            "value": 3
        },
        {
            "source": 499,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 24,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 499,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 499,
            "target": 244,
            "type": "cites",
            "value": 13
        },
        {
            "source": 499,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 499,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 24,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 499,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 499,
            "target": 26,
            "type": "cites",
            "value": 11
        },
        {
            "source": 24,
            "target": 200,
            "type": "cites",
            "value": 8
        },
        {
            "source": 24,
            "target": 654,
            "type": "cites",
            "value": 4
        },
        {
            "source": 24,
            "target": 215,
            "type": "cites",
            "value": 8
        },
        {
            "source": 24,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 515,
            "type": "cites",
            "value": 6
        },
        {
            "source": 24,
            "target": 26,
            "type": "cites",
            "value": 13
        },
        {
            "source": 499,
            "target": 72,
            "type": "cites",
            "value": 9
        },
        {
            "source": 24,
            "target": 72,
            "type": "cites",
            "value": 10
        },
        {
            "source": 499,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 24,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 24,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 499,
            "target": 24,
            "type": "cites",
            "value": 6
        },
        {
            "source": 24,
            "target": 499,
            "type": "cites",
            "value": 4
        },
        {
            "source": 499,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 233,
            "type": "cites",
            "value": 7
        },
        {
            "source": 243,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 304,
            "type": "cites",
            "value": 9
        },
        {
            "source": 210,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 26,
            "type": "cites",
            "value": 8
        },
        {
            "source": 243,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 185,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 243,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 655,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 656,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 657,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 123,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 101,
            "target": 7,
            "type": "cites",
            "value": 14
        },
        {
            "source": 96,
            "target": 658,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 659,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 660,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 661,
            "type": "cites",
            "value": 5
        },
        {
            "source": 123,
            "target": 116,
            "type": "cites",
            "value": 3
        },
        {
            "source": 123,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 123,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 658,
            "type": "cites",
            "value": 5
        },
        {
            "source": 100,
            "target": 659,
            "type": "cites",
            "value": 5
        },
        {
            "source": 100,
            "target": 660,
            "type": "cites",
            "value": 5
        },
        {
            "source": 100,
            "target": 661,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 116,
            "type": "cites",
            "value": 6
        },
        {
            "source": 101,
            "target": 658,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 113,
            "type": "cites",
            "value": 6
        },
        {
            "source": 101,
            "target": 659,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 660,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 661,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 26,
            "type": "cites",
            "value": 9
        },
        {
            "source": 96,
            "target": 68,
            "type": "cites",
            "value": 7
        },
        {
            "source": 96,
            "target": 69,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 70,
            "type": "cites",
            "value": 5
        },
        {
            "source": 100,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 100,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 101,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 101,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 123,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 101,
            "target": 71,
            "type": "cites",
            "value": 7
        },
        {
            "source": 101,
            "target": 121,
            "type": "cites",
            "value": 5
        },
        {
            "source": 123,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 101,
            "target": 122,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 205,
            "type": "cites",
            "value": 10
        },
        {
            "source": 96,
            "target": 662,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 123,
            "target": 205,
            "type": "cites",
            "value": 6
        },
        {
            "source": 100,
            "target": 205,
            "type": "cites",
            "value": 10
        },
        {
            "source": 100,
            "target": 662,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 101,
            "target": 205,
            "type": "cites",
            "value": 10
        },
        {
            "source": 101,
            "target": 662,
            "type": "cites",
            "value": 4
        },
        {
            "source": 101,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 158,
            "type": "cites",
            "value": 5
        },
        {
            "source": 123,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 123,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 100,
            "target": 158,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 83,
            "type": "cites",
            "value": 10
        },
        {
            "source": 101,
            "target": 158,
            "type": "cites",
            "value": 5
        },
        {
            "source": 96,
            "target": 188,
            "type": "cites",
            "value": 6
        },
        {
            "source": 100,
            "target": 188,
            "type": "cites",
            "value": 4
        },
        {
            "source": 101,
            "target": 188,
            "type": "cites",
            "value": 4
        },
        {
            "source": 29,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 31,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 416,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 167,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 135,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 265,
            "type": "cites",
            "value": 9
        },
        {
            "source": 22,
            "target": 125,
            "type": "cites",
            "value": 22
        },
        {
            "source": 31,
            "target": 29,
            "type": "cites",
            "value": 5
        },
        {
            "source": 31,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 31,
            "target": 244,
            "type": "cites",
            "value": 17
        },
        {
            "source": 30,
            "target": 220,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 220,
            "type": "cites",
            "value": 11
        },
        {
            "source": 22,
            "target": 244,
            "type": "cites",
            "value": 31
        },
        {
            "source": 99,
            "target": 663,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 50,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 455,
            "type": "cites",
            "value": 7
        },
        {
            "source": 99,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 46,
            "type": "cites",
            "value": 12
        },
        {
            "source": 136,
            "target": 192,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 341,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 192,
            "type": "cites",
            "value": 7
        },
        {
            "source": 77,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 664,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 77,
            "target": 83,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 116,
            "type": "cites",
            "value": 3
        },
        {
            "source": 153,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 7,
            "type": "cites",
            "value": 29
        },
        {
            "source": 338,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 665,
            "target": 7,
            "type": "cites",
            "value": 10
        },
        {
            "source": 666,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 667,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 665,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 77,
            "target": 71,
            "type": "cites",
            "value": 10
        },
        {
            "source": 77,
            "target": 121,
            "type": "cites",
            "value": 5
        },
        {
            "source": 665,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 665,
            "target": 77,
            "type": "cites",
            "value": 6
        },
        {
            "source": 666,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 667,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 77,
            "target": 205,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 113,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 523,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 155,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 188,
            "type": "cites",
            "value": 8
        },
        {
            "source": 77,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 3,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 5,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 94,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 94,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 568,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 425,
            "target": 99,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 99,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 240,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 241,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 242,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 103,
            "type": "cites",
            "value": 31
        },
        {
            "source": 94,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 221,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 251,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 425,
            "target": 257,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 257,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 206,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 192,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 158,
            "type": "cites",
            "value": 6
        },
        {
            "source": 42,
            "target": 103,
            "type": "cites",
            "value": 6
        },
        {
            "source": 42,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 1,
            "type": "cites",
            "value": 5
        },
        {
            "source": 42,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 99,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 42,
            "target": 83,
            "type": "cites",
            "value": 6
        },
        {
            "source": 42,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 668,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 669,
            "type": "cites",
            "value": 4
        },
        {
            "source": 42,
            "target": 24,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 113,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 113,
            "type": "cites",
            "value": 5
        },
        {
            "source": 668,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 7,
            "type": "cites",
            "value": 10
        },
        {
            "source": 14,
            "target": 77,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 155,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 7,
            "type": "cites",
            "value": 29
        },
        {
            "source": 42,
            "target": 90,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 377,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 2,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 381,
            "type": "cites",
            "value": 6
        },
        {
            "source": 40,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 670,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 60,
            "type": "cites",
            "value": 6
        },
        {
            "source": 265,
            "target": 670,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 60,
            "type": "cites",
            "value": 5
        },
        {
            "source": 265,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 166,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 320,
            "type": "cites",
            "value": 14
        },
        {
            "source": 22,
            "target": 321,
            "type": "cites",
            "value": 9
        },
        {
            "source": 22,
            "target": 363,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 323,
            "type": "cites",
            "value": 9
        },
        {
            "source": 29,
            "target": 320,
            "type": "cites",
            "value": 12
        },
        {
            "source": 29,
            "target": 321,
            "type": "cites",
            "value": 10
        },
        {
            "source": 29,
            "target": 363,
            "type": "cites",
            "value": 5
        },
        {
            "source": 29,
            "target": 323,
            "type": "cites",
            "value": 12
        },
        {
            "source": 568,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 299,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 40,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 63,
            "type": "cites",
            "value": 5
        },
        {
            "source": 671,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 672,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 60,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 531,
            "target": 6,
            "type": "cites",
            "value": 6
        },
        {
            "source": 672,
            "target": 6,
            "type": "cites",
            "value": 6
        },
        {
            "source": 531,
            "target": 10,
            "type": "cites",
            "value": 4
        },
        {
            "source": 672,
            "target": 10,
            "type": "cites",
            "value": 4
        },
        {
            "source": 531,
            "target": 673,
            "type": "cites",
            "value": 3
        },
        {
            "source": 672,
            "target": 531,
            "type": "cites",
            "value": 3
        },
        {
            "source": 672,
            "target": 673,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 674,
            "type": "cites",
            "value": 4
        },
        {
            "source": 60,
            "target": 34,
            "type": "cites",
            "value": 7
        },
        {
            "source": 60,
            "target": 369,
            "type": "cites",
            "value": 6
        },
        {
            "source": 79,
            "target": 208,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 209,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 675,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 676,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 208,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 677,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 678,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 209,
            "type": "cites",
            "value": 6
        },
        {
            "source": 80,
            "target": 26,
            "type": "cites",
            "value": 8
        },
        {
            "source": 679,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 213,
            "type": "cites",
            "value": 3
        },
        {
            "source": 679,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 679,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 679,
            "target": 101,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 213,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 116,
            "type": "cites",
            "value": 3
        },
        {
            "source": 679,
            "target": 79,
            "type": "cites",
            "value": 4
        },
        {
            "source": 679,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 85,
            "type": "cites",
            "value": 5
        },
        {
            "source": 679,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 158,
            "type": "cites",
            "value": 5
        },
        {
            "source": 441,
            "target": 283,
            "type": "cites",
            "value": 4
        },
        {
            "source": 441,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 441,
            "target": 422,
            "type": "cites",
            "value": 4
        },
        {
            "source": 441,
            "target": 423,
            "type": "cites",
            "value": 4
        },
        {
            "source": 441,
            "target": 103,
            "type": "cites",
            "value": 10
        },
        {
            "source": 442,
            "target": 103,
            "type": "cites",
            "value": 8
        },
        {
            "source": 680,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 283,
            "type": "cites",
            "value": 7
        },
        {
            "source": 124,
            "target": 251,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 422,
            "type": "cites",
            "value": 7
        },
        {
            "source": 124,
            "target": 423,
            "type": "cites",
            "value": 7
        },
        {
            "source": 124,
            "target": 425,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 103,
            "type": "cites",
            "value": 23
        },
        {
            "source": 441,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 441,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 442,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 680,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 268,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 246,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 442,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 113,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 659,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 249,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 248,
            "type": "cites",
            "value": 3
        },
        {
            "source": 441,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 681,
            "type": "cites",
            "value": 6
        },
        {
            "source": 442,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 458,
            "type": "cites",
            "value": 6
        },
        {
            "source": 268,
            "target": 32,
            "type": "cites",
            "value": 6
        },
        {
            "source": 268,
            "target": 461,
            "type": "cites",
            "value": 4
        },
        {
            "source": 268,
            "target": 462,
            "type": "cites",
            "value": 4
        },
        {
            "source": 246,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 83,
            "type": "cites",
            "value": 8
        },
        {
            "source": 246,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 269,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 270,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 269,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 270,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 246,
            "type": "cites",
            "value": 4
        },
        {
            "source": 441,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 441,
            "target": 100,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 96,
            "type": "cites",
            "value": 5
        },
        {
            "source": 246,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 96,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 148,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 96,
            "type": "cites",
            "value": 12
        },
        {
            "source": 103,
            "target": 100,
            "type": "cites",
            "value": 9
        },
        {
            "source": 246,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 246,
            "target": 581,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 124,
            "target": 581,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 581,
            "type": "cites",
            "value": 9
        },
        {
            "source": 317,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 317,
            "target": 225,
            "type": "cites",
            "value": 5
        },
        {
            "source": 317,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 317,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 214,
            "target": 682,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 63,
            "type": "cites",
            "value": 11
        },
        {
            "source": 214,
            "target": 49,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 52,
            "type": "cites",
            "value": 8
        },
        {
            "source": 214,
            "target": 61,
            "type": "cites",
            "value": 4
        },
        {
            "source": 214,
            "target": 44,
            "type": "cites",
            "value": 9
        },
        {
            "source": 200,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 683,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 683,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 122,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 683,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 122,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 153,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 153,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 153,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 683,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 683,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 122,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 122,
            "target": 188,
            "type": "cites",
            "value": 5
        },
        {
            "source": 122,
            "target": 7,
            "type": "cites",
            "value": 13
        },
        {
            "source": 153,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 153,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 683,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 684,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 684,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 201,
            "target": 72,
            "type": "cites",
            "value": 11
        },
        {
            "source": 684,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 684,
            "target": 201,
            "type": "cites",
            "value": 5
        },
        {
            "source": 201,
            "target": 684,
            "type": "cites",
            "value": 4
        },
        {
            "source": 685,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 113,
            "target": 686,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 113,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 151,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 8,
            "type": "cites",
            "value": 4
        },
        {
            "source": 334,
            "target": 9,
            "type": "cites",
            "value": 5
        },
        {
            "source": 334,
            "target": 687,
            "type": "cites",
            "value": 5
        },
        {
            "source": 334,
            "target": 151,
            "type": "cites",
            "value": 29
        },
        {
            "source": 334,
            "target": 688,
            "type": "cites",
            "value": 4
        },
        {
            "source": 334,
            "target": 152,
            "type": "cites",
            "value": 4
        },
        {
            "source": 151,
            "target": 688,
            "type": "cites",
            "value": 7
        },
        {
            "source": 151,
            "target": 152,
            "type": "cites",
            "value": 7
        },
        {
            "source": 151,
            "target": 177,
            "type": "cites",
            "value": 9
        },
        {
            "source": 334,
            "target": 12,
            "type": "cites",
            "value": 18
        },
        {
            "source": 134,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 334,
            "target": 10,
            "type": "cites",
            "value": 5
        },
        {
            "source": 334,
            "target": 6,
            "type": "cites",
            "value": 7
        },
        {
            "source": 134,
            "target": 6,
            "type": "cites",
            "value": 4
        },
        {
            "source": 334,
            "target": 11,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 151,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 334,
            "target": 134,
            "type": "cites",
            "value": 3
        },
        {
            "source": 151,
            "target": 134,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 689,
            "type": "cites",
            "value": 4
        },
        {
            "source": 151,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 132,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 690,
            "target": 52,
            "type": "cites",
            "value": 12
        },
        {
            "source": 132,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 126,
            "target": 0,
            "type": "cites",
            "value": 6
        },
        {
            "source": 378,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 379,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 379,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 378,
            "target": 125,
            "type": "cites",
            "value": 11
        },
        {
            "source": 379,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 379,
            "target": 378,
            "type": "cites",
            "value": 4
        },
        {
            "source": 378,
            "target": 691,
            "type": "cites",
            "value": 4
        },
        {
            "source": 378,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 395,
            "target": 243,
            "type": "cites",
            "value": 4
        },
        {
            "source": 395,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 390,
            "type": "cites",
            "value": 3
        },
        {
            "source": 395,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 144,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 143,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 94,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 406,
            "type": "cites",
            "value": 6
        },
        {
            "source": 36,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 36,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 68,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 69,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 268,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 71,
            "type": "cites",
            "value": 7
        },
        {
            "source": 36,
            "target": 121,
            "type": "cites",
            "value": 4
        },
        {
            "source": 36,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 14,
            "target": 71,
            "type": "cites",
            "value": 14
        },
        {
            "source": 14,
            "target": 117,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 118,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 119,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 120,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 121,
            "type": "cites",
            "value": 10
        },
        {
            "source": 103,
            "target": 121,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 7,
            "type": "cites",
            "value": 21
        },
        {
            "source": 36,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 36,
            "target": 202,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 202,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 202,
            "type": "cites",
            "value": 4
        },
        {
            "source": 36,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 36,
            "target": 188,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 188,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 188,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 85,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 85,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 147,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 146,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 101,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 197,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 1,
            "type": "cites",
            "value": 9
        },
        {
            "source": 268,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 455,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 259,
            "type": "cites",
            "value": 4
        },
        {
            "source": 268,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 171,
            "type": "cites",
            "value": 14
        },
        {
            "source": 692,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 692,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 692,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 692,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 692,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 268,
            "target": 693,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 693,
            "type": "cites",
            "value": 13
        },
        {
            "source": 251,
            "target": 463,
            "type": "cites",
            "value": 6
        },
        {
            "source": 694,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 694,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 694,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 694,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 694,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 695,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 695,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 695,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 695,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 695,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 696,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 696,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 696,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 696,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 696,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 242,
            "target": 693,
            "type": "cites",
            "value": 10
        },
        {
            "source": 242,
            "target": 463,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 693,
            "type": "cites",
            "value": 21
        },
        {
            "source": 103,
            "target": 463,
            "type": "cites",
            "value": 9
        },
        {
            "source": 268,
            "target": 697,
            "type": "cites",
            "value": 3
        },
        {
            "source": 268,
            "target": 698,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 697,
            "type": "cites",
            "value": 6
        },
        {
            "source": 251,
            "target": 442,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 698,
            "type": "cites",
            "value": 6
        },
        {
            "source": 242,
            "target": 697,
            "type": "cites",
            "value": 6
        },
        {
            "source": 242,
            "target": 442,
            "type": "cites",
            "value": 6
        },
        {
            "source": 242,
            "target": 698,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 697,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 698,
            "type": "cites",
            "value": 11
        },
        {
            "source": 251,
            "target": 240,
            "type": "cites",
            "value": 10
        },
        {
            "source": 251,
            "target": 99,
            "type": "cites",
            "value": 12
        },
        {
            "source": 251,
            "target": 241,
            "type": "cites",
            "value": 10
        },
        {
            "source": 251,
            "target": 271,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 272,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 273,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 274,
            "type": "cites",
            "value": 5
        },
        {
            "source": 242,
            "target": 240,
            "type": "cites",
            "value": 6
        },
        {
            "source": 242,
            "target": 99,
            "type": "cites",
            "value": 7
        },
        {
            "source": 242,
            "target": 241,
            "type": "cites",
            "value": 6
        },
        {
            "source": 242,
            "target": 271,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 272,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 273,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 274,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 240,
            "type": "cites",
            "value": 19
        },
        {
            "source": 103,
            "target": 99,
            "type": "cites",
            "value": 22
        },
        {
            "source": 103,
            "target": 241,
            "type": "cites",
            "value": 19
        },
        {
            "source": 103,
            "target": 271,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 272,
            "type": "cites",
            "value": 12
        },
        {
            "source": 103,
            "target": 273,
            "type": "cites",
            "value": 11
        },
        {
            "source": 103,
            "target": 274,
            "type": "cites",
            "value": 11
        },
        {
            "source": 251,
            "target": 246,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 124,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 572,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 572,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 14,
            "type": "cites",
            "value": 19
        },
        {
            "source": 242,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 242,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 125,
            "type": "cites",
            "value": 20
        },
        {
            "source": 22,
            "target": 459,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 699,
            "type": "cites",
            "value": 4
        },
        {
            "source": 39,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 40,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 249,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 53,
            "type": "cites",
            "value": 5
        },
        {
            "source": 41,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 701,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 701,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 651,
            "type": "cites",
            "value": 8
        },
        {
            "source": 167,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 167,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 135,
            "type": "cites",
            "value": 8
        },
        {
            "source": 167,
            "target": 195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 167,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 244,
            "type": "cites",
            "value": 11
        },
        {
            "source": 167,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 406,
            "type": "cites",
            "value": 4
        },
        {
            "source": 167,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 167,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 702,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 702,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 702,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 283,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 703,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 283,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 704,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 442,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 705,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 706,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 442,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 706,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 707,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 303,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 155,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 156,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 6,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 708,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 708,
            "target": 709,
            "type": "cites",
            "value": 3
        },
        {
            "source": 708,
            "target": 710,
            "type": "cites",
            "value": 3
        },
        {
            "source": 708,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 708,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 708,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 708,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 708,
            "target": 223,
            "type": "cites",
            "value": 4
        },
        {
            "source": 24,
            "target": 708,
            "type": "cites",
            "value": 5
        },
        {
            "source": 24,
            "target": 223,
            "type": "cites",
            "value": 3
        },
        {
            "source": 708,
            "target": 669,
            "type": "cites",
            "value": 3
        },
        {
            "source": 708,
            "target": 24,
            "type": "cites",
            "value": 5
        },
        {
            "source": 24,
            "target": 669,
            "type": "cites",
            "value": 6
        },
        {
            "source": 708,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 708,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 24,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 711,
            "type": "cites",
            "value": 4
        },
        {
            "source": 526,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 527,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 187,
            "type": "cites",
            "value": 6
        },
        {
            "source": 526,
            "target": 371,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 302,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 371,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 302,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 304,
            "type": "cites",
            "value": 9
        },
        {
            "source": 41,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 41,
            "target": 712,
            "type": "cites",
            "value": 5
        },
        {
            "source": 498,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 3,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 206,
            "type": "cites",
            "value": 6
        },
        {
            "source": 0,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 231,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 713,
            "target": 714,
            "type": "cites",
            "value": 4
        },
        {
            "source": 713,
            "target": 231,
            "type": "cites",
            "value": 7
        },
        {
            "source": 0,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 714,
            "type": "cites",
            "value": 8
        },
        {
            "source": 231,
            "target": 715,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 141,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 46,
            "type": "cites",
            "value": 6
        },
        {
            "source": 231,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 37,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 175,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 91,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 716,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 7,
            "type": "cites",
            "value": 17
        },
        {
            "source": 53,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 46,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 53,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 46,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 717,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 717,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 716,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 716,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 717,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 717,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 718,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 718,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 153,
            "target": 151,
            "type": "cites",
            "value": 8
        },
        {
            "source": 122,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 334,
            "target": 709,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 710,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 156,
            "type": "cites",
            "value": 3
        },
        {
            "source": 719,
            "target": 720,
            "type": "cites",
            "value": 4
        },
        {
            "source": 721,
            "target": 720,
            "type": "cites",
            "value": 4
        },
        {
            "source": 722,
            "target": 720,
            "type": "cites",
            "value": 4
        },
        {
            "source": 723,
            "target": 724,
            "type": "cites",
            "value": 5
        },
        {
            "source": 723,
            "target": 720,
            "type": "cites",
            "value": 11
        },
        {
            "source": 720,
            "target": 724,
            "type": "cites",
            "value": 7
        },
        {
            "source": 720,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 720,
            "target": 723,
            "type": "cites",
            "value": 5
        },
        {
            "source": 723,
            "target": 725,
            "type": "cites",
            "value": 4
        },
        {
            "source": 720,
            "target": 725,
            "type": "cites",
            "value": 4
        },
        {
            "source": 723,
            "target": 726,
            "type": "cites",
            "value": 3
        },
        {
            "source": 723,
            "target": 727,
            "type": "cites",
            "value": 3
        },
        {
            "source": 723,
            "target": 728,
            "type": "cites",
            "value": 3
        },
        {
            "source": 720,
            "target": 726,
            "type": "cites",
            "value": 3
        },
        {
            "source": 720,
            "target": 727,
            "type": "cites",
            "value": 3
        },
        {
            "source": 720,
            "target": 728,
            "type": "cites",
            "value": 3
        },
        {
            "source": 723,
            "target": 153,
            "type": "cites",
            "value": 3
        },
        {
            "source": 723,
            "target": 729,
            "type": "cites",
            "value": 3
        },
        {
            "source": 720,
            "target": 153,
            "type": "cites",
            "value": 4
        },
        {
            "source": 720,
            "target": 729,
            "type": "cites",
            "value": 4
        },
        {
            "source": 723,
            "target": 222,
            "type": "cites",
            "value": 3
        },
        {
            "source": 720,
            "target": 222,
            "type": "cites",
            "value": 4
        },
        {
            "source": 79,
            "target": 158,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 158,
            "type": "cites",
            "value": 4
        },
        {
            "source": 452,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 452,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 452,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 730,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 731,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 732,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 733,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 440,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 734,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 735,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 736,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 737,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 738,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 735,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 736,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 737,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 738,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 739,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 739,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 740,
            "type": "cites",
            "value": 3
        },
        {
            "source": 334,
            "target": 741,
            "type": "cites",
            "value": 4
        },
        {
            "source": 334,
            "target": 742,
            "type": "cites",
            "value": 4
        },
        {
            "source": 334,
            "target": 743,
            "type": "cites",
            "value": 3
        },
        {
            "source": 739,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 499,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 708,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 739,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 708,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 499,
            "target": 71,
            "type": "cites",
            "value": 7
        },
        {
            "source": 499,
            "target": 121,
            "type": "cites",
            "value": 5
        },
        {
            "source": 24,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 24,
            "target": 514,
            "type": "cites",
            "value": 3
        },
        {
            "source": 499,
            "target": 188,
            "type": "cites",
            "value": 4
        },
        {
            "source": 499,
            "target": 83,
            "type": "cites",
            "value": 5
        },
        {
            "source": 267,
            "target": 242,
            "type": "cites",
            "value": 11
        },
        {
            "source": 267,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 103,
            "type": "cites",
            "value": 15
        },
        {
            "source": 744,
            "target": 242,
            "type": "cites",
            "value": 5
        },
        {
            "source": 744,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 745,
            "target": 242,
            "type": "cites",
            "value": 5
        },
        {
            "source": 745,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 464,
            "target": 242,
            "type": "cites",
            "value": 7
        },
        {
            "source": 464,
            "target": 103,
            "type": "cites",
            "value": 9
        },
        {
            "source": 267,
            "target": 251,
            "type": "cites",
            "value": 6
        },
        {
            "source": 744,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 745,
            "target": 251,
            "type": "cites",
            "value": 3
        },
        {
            "source": 464,
            "target": 251,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 252,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 253,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 254,
            "type": "cites",
            "value": 4
        },
        {
            "source": 251,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 267,
            "target": 50,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 455,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 458,
            "type": "cites",
            "value": 6
        },
        {
            "source": 267,
            "target": 32,
            "type": "cites",
            "value": 6
        },
        {
            "source": 267,
            "target": 461,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 462,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 259,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 265,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 267,
            "target": 693,
            "type": "cites",
            "value": 5
        },
        {
            "source": 267,
            "target": 268,
            "type": "cites",
            "value": 5
        },
        {
            "source": 744,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 745,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 464,
            "target": 267,
            "type": "cites",
            "value": 5
        },
        {
            "source": 464,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 697,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 442,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 698,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 269,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 270,
            "type": "cites",
            "value": 3
        },
        {
            "source": 267,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 341,
            "type": "cites",
            "value": 11
        },
        {
            "source": 50,
            "target": 192,
            "type": "cites",
            "value": 19
        },
        {
            "source": 52,
            "target": 192,
            "type": "cites",
            "value": 15
        },
        {
            "source": 50,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 746,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 747,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 72,
            "type": "cites",
            "value": 36
        },
        {
            "source": 0,
            "target": 748,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 198,
            "type": "cites",
            "value": 11
        },
        {
            "source": 0,
            "target": 749,
            "type": "cites",
            "value": 5
        },
        {
            "source": 0,
            "target": 287,
            "type": "cites",
            "value": 13
        },
        {
            "source": 0,
            "target": 750,
            "type": "cites",
            "value": 7
        },
        {
            "source": 0,
            "target": 197,
            "type": "cites",
            "value": 7
        },
        {
            "source": 0,
            "target": 751,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 752,
            "target": 99,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 96,
            "type": "cites",
            "value": 6
        },
        {
            "source": 251,
            "target": 23,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 99,
            "type": "cites",
            "value": 5
        },
        {
            "source": 663,
            "target": 96,
            "type": "cites",
            "value": 4
        },
        {
            "source": 663,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 752,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 240,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 241,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 103,
            "type": "cites",
            "value": 8
        },
        {
            "source": 99,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 251,
            "target": 102,
            "type": "cites",
            "value": 8
        },
        {
            "source": 251,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 251,
            "target": 101,
            "type": "cites",
            "value": 4
        },
        {
            "source": 99,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 99,
            "target": 121,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 663,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 251,
            "target": 148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 442,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 116,
            "type": "cites",
            "value": 6
        },
        {
            "source": 102,
            "target": 532,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 102,
            "target": 412,
            "type": "cites",
            "value": 3
        },
        {
            "source": 753,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 158,
            "type": "cites",
            "value": 5
        },
        {
            "source": 102,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 84,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 83,
            "type": "cites",
            "value": 11
        },
        {
            "source": 102,
            "target": 202,
            "type": "cites",
            "value": 4
        },
        {
            "source": 79,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 79,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 79,
            "target": 83,
            "type": "cites",
            "value": 6
        },
        {
            "source": 79,
            "target": 202,
            "type": "cites",
            "value": 4
        },
        {
            "source": 79,
            "target": 43,
            "type": "cites",
            "value": 4
        },
        {
            "source": 753,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 7,
            "type": "cites",
            "value": 14
        },
        {
            "source": 102,
            "target": 79,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 121,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 14,
            "target": 670,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 686,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 686,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 195,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 304,
            "type": "cites",
            "value": 15
        },
        {
            "source": 22,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 204,
            "type": "cites",
            "value": 16
        },
        {
            "source": 22,
            "target": 754,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 414,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 755,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 756,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 757,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 197,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 229,
            "type": "cites",
            "value": 6
        },
        {
            "source": 220,
            "target": 758,
            "type": "cites",
            "value": 6
        },
        {
            "source": 220,
            "target": 759,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 760,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 229,
            "type": "cites",
            "value": 10
        },
        {
            "source": 228,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 761,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 762,
            "target": 186,
            "type": "cites",
            "value": 5
        },
        {
            "source": 592,
            "target": 591,
            "type": "cites",
            "value": 12
        },
        {
            "source": 763,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 764,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 592,
            "target": 625,
            "type": "cites",
            "value": 16
        },
        {
            "source": 592,
            "target": 540,
            "type": "cites",
            "value": 13
        },
        {
            "source": 592,
            "target": 765,
            "type": "cites",
            "value": 8
        },
        {
            "source": 592,
            "target": 541,
            "type": "cites",
            "value": 24
        },
        {
            "source": 592,
            "target": 766,
            "type": "cites",
            "value": 3
        },
        {
            "source": 592,
            "target": 598,
            "type": "cites",
            "value": 7
        },
        {
            "source": 592,
            "target": 767,
            "type": "cites",
            "value": 3
        },
        {
            "source": 592,
            "target": 768,
            "type": "cites",
            "value": 3
        },
        {
            "source": 592,
            "target": 769,
            "type": "cites",
            "value": 3
        },
        {
            "source": 592,
            "target": 770,
            "type": "cites",
            "value": 3
        },
        {
            "source": 592,
            "target": 771,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 135,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 166,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 160,
            "target": 328,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 320,
            "target": 317,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 225,
            "type": "cites",
            "value": 4
        },
        {
            "source": 772,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 772,
            "target": 323,
            "type": "cites",
            "value": 4
        },
        {
            "source": 772,
            "target": 244,
            "type": "cites",
            "value": 9
        },
        {
            "source": 320,
            "target": 321,
            "type": "cites",
            "value": 6
        },
        {
            "source": 320,
            "target": 363,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 323,
            "type": "cites",
            "value": 8
        },
        {
            "source": 320,
            "target": 244,
            "type": "cites",
            "value": 35
        },
        {
            "source": 772,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 322,
            "type": "cites",
            "value": 7
        },
        {
            "source": 320,
            "target": 324,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 773,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 773,
            "type": "cites",
            "value": 8
        },
        {
            "source": 244,
            "target": 326,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 328,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 406,
            "type": "cites",
            "value": 5
        },
        {
            "source": 320,
            "target": 14,
            "type": "cites",
            "value": 16
        },
        {
            "source": 244,
            "target": 328,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 406,
            "type": "cites",
            "value": 14
        },
        {
            "source": 244,
            "target": 167,
            "type": "cites",
            "value": 9
        },
        {
            "source": 320,
            "target": 646,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 647,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 648,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 649,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 245,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 125,
            "type": "cites",
            "value": 11
        },
        {
            "source": 320,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 25,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 774,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 102,
            "type": "cites",
            "value": 9
        },
        {
            "source": 244,
            "target": 102,
            "type": "cites",
            "value": 30
        },
        {
            "source": 283,
            "target": 234,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 234,
            "type": "cites",
            "value": 6
        },
        {
            "source": 429,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 775,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 776,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 777,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 778,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 779,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 780,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 781,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 782,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 193,
            "type": "cites",
            "value": 7
        },
        {
            "source": 283,
            "target": 282,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 215,
            "type": "cites",
            "value": 5
        },
        {
            "source": 294,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 294,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 282,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 215,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 783,
            "type": "cites",
            "value": 4
        },
        {
            "source": 281,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 46,
            "type": "cites",
            "value": 11
        },
        {
            "source": 785,
            "target": 720,
            "type": "cites",
            "value": 3
        },
        {
            "source": 786,
            "target": 720,
            "type": "cites",
            "value": 3
        },
        {
            "source": 787,
            "target": 720,
            "type": "cites",
            "value": 3
        },
        {
            "source": 788,
            "target": 720,
            "type": "cites",
            "value": 3
        },
        {
            "source": 789,
            "target": 720,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 596,
            "type": "cites",
            "value": 4
        },
        {
            "source": 556,
            "target": 553,
            "type": "cites",
            "value": 5
        },
        {
            "source": 556,
            "target": 554,
            "type": "cites",
            "value": 4
        },
        {
            "source": 790,
            "target": 556,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 486,
            "type": "cites",
            "value": 11
        },
        {
            "source": 556,
            "target": 555,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 130,
            "type": "cites",
            "value": 7
        },
        {
            "source": 556,
            "target": 487,
            "type": "cites",
            "value": 9
        },
        {
            "source": 334,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 791,
            "target": 334,
            "type": "cites",
            "value": 4
        },
        {
            "source": 337,
            "target": 334,
            "type": "cites",
            "value": 4
        },
        {
            "source": 792,
            "target": 334,
            "type": "cites",
            "value": 4
        },
        {
            "source": 791,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 337,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 792,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 793,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 794,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 795,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 71,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 46,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 71,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 46,
            "target": 194,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 245,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 796,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 245,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 245,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 245,
            "target": 797,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 798,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 241,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 251,
            "type": "cites",
            "value": 10
        },
        {
            "source": 466,
            "target": 240,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 99,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 242,
            "type": "cites",
            "value": 12
        },
        {
            "source": 466,
            "target": 103,
            "type": "cites",
            "value": 23
        },
        {
            "source": 249,
            "target": 251,
            "type": "cites",
            "value": 7
        },
        {
            "source": 249,
            "target": 242,
            "type": "cites",
            "value": 8
        },
        {
            "source": 249,
            "target": 103,
            "type": "cites",
            "value": 22
        },
        {
            "source": 467,
            "target": 241,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 240,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 99,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 242,
            "type": "cites",
            "value": 12
        },
        {
            "source": 466,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 249,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 267,
            "type": "cites",
            "value": 10
        },
        {
            "source": 466,
            "target": 693,
            "type": "cites",
            "value": 5
        },
        {
            "source": 466,
            "target": 268,
            "type": "cites",
            "value": 9
        },
        {
            "source": 249,
            "target": 267,
            "type": "cites",
            "value": 6
        },
        {
            "source": 249,
            "target": 693,
            "type": "cites",
            "value": 3
        },
        {
            "source": 249,
            "target": 268,
            "type": "cites",
            "value": 6
        },
        {
            "source": 467,
            "target": 267,
            "type": "cites",
            "value": 10
        },
        {
            "source": 467,
            "target": 693,
            "type": "cites",
            "value": 5
        },
        {
            "source": 467,
            "target": 268,
            "type": "cites",
            "value": 9
        },
        {
            "source": 103,
            "target": 799,
            "type": "cites",
            "value": 4
        },
        {
            "source": 466,
            "target": 442,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 442,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 269,
            "type": "cites",
            "value": 4
        },
        {
            "source": 466,
            "target": 270,
            "type": "cites",
            "value": 4
        },
        {
            "source": 466,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 467,
            "target": 269,
            "type": "cites",
            "value": 4
        },
        {
            "source": 467,
            "target": 270,
            "type": "cites",
            "value": 4
        },
        {
            "source": 467,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 249,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 249,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 249,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 272,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 703,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 43,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 800,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 801,
            "target": 293,
            "type": "cites",
            "value": 3
        },
        {
            "source": 801,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 802,
            "target": 293,
            "type": "cites",
            "value": 3
        },
        {
            "source": 802,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 803,
            "target": 293,
            "type": "cites",
            "value": 3
        },
        {
            "source": 803,
            "target": 38,
            "type": "cites",
            "value": 5
        },
        {
            "source": 804,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 805,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 544,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 806,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 544,
            "target": 807,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 808,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 809,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 504,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 314,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 810,
            "type": "cites",
            "value": 4
        },
        {
            "source": 544,
            "target": 811,
            "type": "cites",
            "value": 5
        },
        {
            "source": 231,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 812,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 813,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 228,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 590,
            "type": "cites",
            "value": 5
        },
        {
            "source": 231,
            "target": 125,
            "type": "cites",
            "value": 8
        },
        {
            "source": 231,
            "target": 699,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 404,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 62,
            "type": "cites",
            "value": 7
        },
        {
            "source": 178,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 55,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 46,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 533,
            "target": 245,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 189,
            "type": "cites",
            "value": 5
        },
        {
            "source": 54,
            "target": 814,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 533,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 814,
            "type": "cites",
            "value": 6
        },
        {
            "source": 245,
            "target": 815,
            "type": "cites",
            "value": 4
        },
        {
            "source": 285,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 285,
            "target": 287,
            "type": "cites",
            "value": 5
        },
        {
            "source": 54,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 54,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 285,
            "target": 816,
            "type": "cites",
            "value": 3
        },
        {
            "source": 285,
            "target": 817,
            "type": "cites",
            "value": 3
        },
        {
            "source": 285,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 54,
            "target": 300,
            "type": "cites",
            "value": 12
        },
        {
            "source": 245,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 54,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 54,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 818,
            "target": 189,
            "type": "cites",
            "value": 6
        },
        {
            "source": 819,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 820,
            "target": 189,
            "type": "cites",
            "value": 7
        },
        {
            "source": 189,
            "target": 821,
            "type": "cites",
            "value": 3
        },
        {
            "source": 820,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 535,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 186,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 822,
            "type": "cites",
            "value": 5
        },
        {
            "source": 304,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 823,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 304,
            "target": 195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 146,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 37,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 146,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 146,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 146,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 180,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 824,
            "target": 244,
            "type": "cites",
            "value": 9
        },
        {
            "source": 824,
            "target": 325,
            "type": "cites",
            "value": 3
        },
        {
            "source": 824,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 825,
            "target": 244,
            "type": "cites",
            "value": 9
        },
        {
            "source": 825,
            "target": 325,
            "type": "cites",
            "value": 3
        },
        {
            "source": 825,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 180,
            "target": 244,
            "type": "cites",
            "value": 11
        },
        {
            "source": 180,
            "target": 325,
            "type": "cites",
            "value": 3
        },
        {
            "source": 180,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 826,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 198,
            "type": "cites",
            "value": 5
        },
        {
            "source": 240,
            "target": 241,
            "type": "cites",
            "value": 4
        },
        {
            "source": 240,
            "target": 251,
            "type": "cites",
            "value": 5
        },
        {
            "source": 240,
            "target": 99,
            "type": "cites",
            "value": 5
        },
        {
            "source": 240,
            "target": 242,
            "type": "cites",
            "value": 8
        },
        {
            "source": 240,
            "target": 103,
            "type": "cites",
            "value": 14
        },
        {
            "source": 241,
            "target": 251,
            "type": "cites",
            "value": 5
        },
        {
            "source": 241,
            "target": 240,
            "type": "cites",
            "value": 4
        },
        {
            "source": 241,
            "target": 99,
            "type": "cites",
            "value": 5
        },
        {
            "source": 241,
            "target": 242,
            "type": "cites",
            "value": 8
        },
        {
            "source": 241,
            "target": 103,
            "type": "cites",
            "value": 14
        },
        {
            "source": 827,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 609,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 828,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 829,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 827,
            "target": 608,
            "type": "cites",
            "value": 6
        },
        {
            "source": 608,
            "target": 830,
            "type": "cites",
            "value": 3
        },
        {
            "source": 110,
            "target": 72,
            "type": "cites",
            "value": 9
        },
        {
            "source": 106,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 105,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 110,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 831,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 549,
            "target": 486,
            "type": "cites",
            "value": 22
        },
        {
            "source": 228,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 832,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 501,
            "type": "cites",
            "value": 6
        },
        {
            "source": 228,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 228,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 316,
            "type": "cites",
            "value": 7
        },
        {
            "source": 338,
            "target": 833,
            "type": "cites",
            "value": 4
        },
        {
            "source": 338,
            "target": 834,
            "type": "cites",
            "value": 3
        },
        {
            "source": 835,
            "target": 836,
            "type": "cites",
            "value": 4
        },
        {
            "source": 835,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 837,
            "target": 836,
            "type": "cites",
            "value": 4
        },
        {
            "source": 837,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 838,
            "target": 836,
            "type": "cites",
            "value": 4
        },
        {
            "source": 838,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 836,
            "type": "cites",
            "value": 4
        },
        {
            "source": 338,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 338,
            "target": 287,
            "type": "cites",
            "value": 10
        },
        {
            "source": 419,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 189,
            "type": "cites",
            "value": 10
        },
        {
            "source": 189,
            "target": 839,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 840,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 841,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 842,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 558,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 381,
            "type": "cites",
            "value": 8
        },
        {
            "source": 312,
            "target": 287,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 287,
            "type": "cites",
            "value": 5
        },
        {
            "source": 818,
            "target": 740,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 740,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 740,
            "type": "cites",
            "value": 5
        },
        {
            "source": 312,
            "target": 547,
            "type": "cites",
            "value": 5
        },
        {
            "source": 412,
            "target": 547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 189,
            "type": "cites",
            "value": 16
        },
        {
            "source": 412,
            "target": 189,
            "type": "cites",
            "value": 5
        },
        {
            "source": 818,
            "target": 347,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 347,
            "type": "cites",
            "value": 5
        },
        {
            "source": 312,
            "target": 816,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 817,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 816,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 817,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 300,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 820,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 346,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 670,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 60,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 91,
            "type": "cites",
            "value": 6
        },
        {
            "source": 312,
            "target": 311,
            "type": "cites",
            "value": 4
        },
        {
            "source": 312,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 412,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 843,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 844,
            "type": "cites",
            "value": 3
        },
        {
            "source": 845,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 23,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 846,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 406,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 23,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 23,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 60,
            "type": "cites",
            "value": 10
        },
        {
            "source": 23,
            "target": 71,
            "type": "cites",
            "value": 4
        },
        {
            "source": 23,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 23,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 23,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 23,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 90,
            "type": "cites",
            "value": 6
        },
        {
            "source": 241,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 251,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 240,
            "target": 90,
            "type": "cites",
            "value": 6
        },
        {
            "source": 240,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 252,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 253,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 254,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 242,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 90,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 111,
            "type": "cites",
            "value": 8
        },
        {
            "source": 252,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 252,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 253,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 253,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 254,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 254,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 847,
            "type": "cites",
            "value": 6
        },
        {
            "source": 241,
            "target": 455,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 455,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 257,
            "type": "cites",
            "value": 6
        },
        {
            "source": 241,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 458,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 458,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 458,
            "type": "cites",
            "value": 3
        },
        {
            "source": 99,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 377,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 693,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 267,
            "type": "cites",
            "value": 6
        },
        {
            "source": 240,
            "target": 693,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 267,
            "type": "cites",
            "value": 6
        },
        {
            "source": 99,
            "target": 693,
            "type": "cites",
            "value": 3
        },
        {
            "source": 241,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 240,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 258,
            "type": "cites",
            "value": 3
        },
        {
            "source": 23,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 357,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 364,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 317,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 339,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 848,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 849,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 327,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 850,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 205,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 365,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 366,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 170,
            "type": "cites",
            "value": 16
        },
        {
            "source": 103,
            "target": 851,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 852,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 416,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 853,
            "type": "cites",
            "value": 6
        },
        {
            "source": 29,
            "target": 322,
            "type": "cites",
            "value": 8
        },
        {
            "source": 29,
            "target": 324,
            "type": "cites",
            "value": 6
        },
        {
            "source": 31,
            "target": 320,
            "type": "cites",
            "value": 6
        },
        {
            "source": 31,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 244,
            "type": "cites",
            "value": 9
        },
        {
            "source": 29,
            "target": 774,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 774,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 29,
            "type": "cites",
            "value": 7
        },
        {
            "source": 29,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 31,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 445,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 645,
            "type": "cites",
            "value": 3
        },
        {
            "source": 468,
            "target": 251,
            "type": "cites",
            "value": 5
        },
        {
            "source": 468,
            "target": 242,
            "type": "cites",
            "value": 7
        },
        {
            "source": 468,
            "target": 103,
            "type": "cites",
            "value": 13
        },
        {
            "source": 249,
            "target": 294,
            "type": "cites",
            "value": 3
        },
        {
            "source": 249,
            "target": 283,
            "type": "cites",
            "value": 4
        },
        {
            "source": 249,
            "target": 422,
            "type": "cites",
            "value": 3
        },
        {
            "source": 249,
            "target": 423,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 249,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 467,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 468,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 466,
            "target": 88,
            "type": "cites",
            "value": 4
        },
        {
            "source": 466,
            "target": 847,
            "type": "cites",
            "value": 3
        },
        {
            "source": 466,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 249,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 467,
            "target": 88,
            "type": "cites",
            "value": 4
        },
        {
            "source": 467,
            "target": 847,
            "type": "cites",
            "value": 3
        },
        {
            "source": 467,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 468,
            "target": 88,
            "type": "cites",
            "value": 4
        },
        {
            "source": 468,
            "target": 847,
            "type": "cites",
            "value": 3
        },
        {
            "source": 468,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 249,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 468,
            "target": 693,
            "type": "cites",
            "value": 3
        },
        {
            "source": 468,
            "target": 267,
            "type": "cites",
            "value": 6
        },
        {
            "source": 468,
            "target": 268,
            "type": "cites",
            "value": 4
        },
        {
            "source": 546,
            "target": 547,
            "type": "cites",
            "value": 6
        },
        {
            "source": 854,
            "target": 547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 855,
            "target": 547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 546,
            "target": 312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 546,
            "target": 189,
            "type": "cites",
            "value": 5
        },
        {
            "source": 547,
            "target": 312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 547,
            "target": 189,
            "type": "cites",
            "value": 17
        },
        {
            "source": 78,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 43,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 581,
            "type": "cites",
            "value": 7
        },
        {
            "source": 303,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 856,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 160,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 303,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 303,
            "target": 850,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 60,
            "type": "cites",
            "value": 4
        },
        {
            "source": 303,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 783,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 193,
            "type": "cites",
            "value": 9
        },
        {
            "source": 669,
            "target": 42,
            "type": "cites",
            "value": 5
        },
        {
            "source": 669,
            "target": 857,
            "type": "cites",
            "value": 4
        },
        {
            "source": 669,
            "target": 24,
            "type": "cites",
            "value": 4
        },
        {
            "source": 669,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 24,
            "target": 42,
            "type": "cites",
            "value": 5
        },
        {
            "source": 24,
            "target": 857,
            "type": "cites",
            "value": 4
        },
        {
            "source": 669,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 116,
            "target": 208,
            "type": "cites",
            "value": 4
        },
        {
            "source": 116,
            "target": 26,
            "type": "cites",
            "value": 8
        },
        {
            "source": 116,
            "target": 209,
            "type": "cites",
            "value": 4
        },
        {
            "source": 858,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 464,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 859,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 141,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 116,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 294,
            "target": 170,
            "type": "cites",
            "value": 9
        },
        {
            "source": 294,
            "target": 171,
            "type": "cites",
            "value": 9
        },
        {
            "source": 294,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 170,
            "type": "cites",
            "value": 11
        },
        {
            "source": 283,
            "target": 171,
            "type": "cites",
            "value": 11
        },
        {
            "source": 283,
            "target": 257,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 170,
            "type": "cites",
            "value": 10
        },
        {
            "source": 422,
            "target": 171,
            "type": "cites",
            "value": 10
        },
        {
            "source": 422,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 170,
            "type": "cites",
            "value": 10
        },
        {
            "source": 423,
            "target": 171,
            "type": "cites",
            "value": 10
        },
        {
            "source": 423,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 171,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 170,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 171,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 231,
            "type": "cites",
            "value": 6
        },
        {
            "source": 283,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 581,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 80,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 860,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 179,
            "type": "cites",
            "value": 9
        },
        {
            "source": 422,
            "target": 294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 283,
            "type": "cites",
            "value": 7
        },
        {
            "source": 422,
            "target": 423,
            "type": "cites",
            "value": 5
        },
        {
            "source": 422,
            "target": 103,
            "type": "cites",
            "value": 17
        },
        {
            "source": 423,
            "target": 294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 423,
            "target": 283,
            "type": "cites",
            "value": 7
        },
        {
            "source": 423,
            "target": 422,
            "type": "cites",
            "value": 5
        },
        {
            "source": 423,
            "target": 103,
            "type": "cites",
            "value": 17
        },
        {
            "source": 22,
            "target": 294,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 283,
            "type": "cites",
            "value": 9
        },
        {
            "source": 22,
            "target": 422,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 423,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 187,
            "type": "cites",
            "value": 8
        },
        {
            "source": 294,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 91,
            "type": "cites",
            "value": 6
        },
        {
            "source": 422,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 423,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 169,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 182,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 91,
            "type": "cites",
            "value": 9
        },
        {
            "source": 283,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 462,
            "type": "cites",
            "value": 5
        },
        {
            "source": 251,
            "target": 248,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 249,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 14,
            "type": "cites",
            "value": 10
        },
        {
            "source": 423,
            "target": 249,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 14,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 248,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 185,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 185,
            "type": "cites",
            "value": 6
        },
        {
            "source": 283,
            "target": 861,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 861,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 439,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 861,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 439,
            "type": "cites",
            "value": 4
        },
        {
            "source": 423,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 425,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 439,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 861,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 377,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 338,
            "type": "cites",
            "value": 9
        },
        {
            "source": 422,
            "target": 251,
            "type": "cites",
            "value": 6
        },
        {
            "source": 423,
            "target": 251,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 862,
            "type": "cites",
            "value": 3
        },
        {
            "source": 294,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 294,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 267,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 269,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 270,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 242,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 423,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 267,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 268,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 294,
            "target": 572,
            "type": "cites",
            "value": 4
        },
        {
            "source": 283,
            "target": 572,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 30,
            "type": "cites",
            "value": 4
        },
        {
            "source": 422,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 422,
            "target": 572,
            "type": "cites",
            "value": 4
        },
        {
            "source": 423,
            "target": 30,
            "type": "cites",
            "value": 4
        },
        {
            "source": 423,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 423,
            "target": 572,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 693,
            "type": "cites",
            "value": 3
        },
        {
            "source": 144,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 143,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 863,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 864,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 144,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 143,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 459,
            "target": 265,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 265,
            "type": "cites",
            "value": 6
        },
        {
            "source": 265,
            "target": 865,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 866,
            "type": "cites",
            "value": 3
        },
        {
            "source": 867,
            "target": 265,
            "type": "cites",
            "value": 6
        },
        {
            "source": 868,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 869,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 459,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 0,
            "target": 867,
            "type": "cites",
            "value": 4
        },
        {
            "source": 870,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 871,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 867,
            "type": "cites",
            "value": 7
        },
        {
            "source": 867,
            "target": 0,
            "type": "cites",
            "value": 6
        },
        {
            "source": 265,
            "target": 375,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 872,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 177,
            "type": "cites",
            "value": 7
        },
        {
            "source": 265,
            "target": 873,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 459,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 867,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 459,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 46,
            "type": "cites",
            "value": 11
        },
        {
            "source": 459,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 459,
            "target": 125,
            "type": "cites",
            "value": 8
        },
        {
            "source": 0,
            "target": 137,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 125,
            "type": "cites",
            "value": 14
        },
        {
            "source": 265,
            "target": 137,
            "type": "cites",
            "value": 8
        },
        {
            "source": 265,
            "target": 125,
            "type": "cites",
            "value": 23
        },
        {
            "source": 867,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 867,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 0,
            "target": 875,
            "type": "cites",
            "value": 3
        },
        {
            "source": 459,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 0,
            "target": 63,
            "type": "cites",
            "value": 8
        },
        {
            "source": 867,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 198,
            "target": 876,
            "type": "cites",
            "value": 3
        },
        {
            "source": 8,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 198,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 198,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 8,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 305,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 309,
            "type": "cites",
            "value": 5
        },
        {
            "source": 502,
            "target": 308,
            "type": "cites",
            "value": 5
        },
        {
            "source": 502,
            "target": 310,
            "type": "cites",
            "value": 5
        },
        {
            "source": 206,
            "target": 310,
            "type": "cites",
            "value": 7
        },
        {
            "source": 877,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 206,
            "target": 502,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 502,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 414,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 503,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 235,
            "type": "cites",
            "value": 6
        },
        {
            "source": 502,
            "target": 417,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 417,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 9,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 76,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 76,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 76,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 446,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 878,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 879,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 502,
            "target": 880,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 881,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 875,
            "type": "cites",
            "value": 4
        },
        {
            "source": 502,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 206,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 137,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 879,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 880,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 881,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 231,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 875,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 125,
            "type": "cites",
            "value": 8
        },
        {
            "source": 206,
            "target": 307,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 221,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 221,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 882,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 221,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 221,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 189,
            "type": "cites",
            "value": 5
        },
        {
            "source": 883,
            "target": 884,
            "type": "cites",
            "value": 6
        },
        {
            "source": 275,
            "target": 885,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 885,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 275,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 876,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 275,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 10,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 833,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 7,
            "type": "cites",
            "value": 22
        },
        {
            "source": 4,
            "target": 113,
            "type": "cites",
            "value": 16
        },
        {
            "source": 4,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 886,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 887,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 888,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 889,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 100,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 101,
            "target": 96,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 101,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 101,
            "target": 84,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 659,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 178,
            "type": "cites",
            "value": 8
        },
        {
            "source": 36,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 101,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 96,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 101,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 101,
            "target": 100,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 194,
            "type": "cites",
            "value": 3
        },
        {
            "source": 890,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 303,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 891,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 892,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 156,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 893,
            "target": 12,
            "type": "cites",
            "value": 4
        },
        {
            "source": 893,
            "target": 6,
            "type": "cites",
            "value": 3
        },
        {
            "source": 894,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 894,
            "target": 6,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 501,
            "type": "cites",
            "value": 6
        },
        {
            "source": 192,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 132,
            "target": 7,
            "type": "cites",
            "value": 14
        },
        {
            "source": 132,
            "target": 501,
            "type": "cites",
            "value": 8
        },
        {
            "source": 341,
            "target": 192,
            "type": "cites",
            "value": 6
        },
        {
            "source": 132,
            "target": 895,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 345,
            "type": "cites",
            "value": 4
        },
        {
            "source": 896,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 825,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 824,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 180,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 651,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 651,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 218,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 847,
            "type": "cites",
            "value": 6
        },
        {
            "source": 651,
            "target": 235,
            "type": "cites",
            "value": 4
        },
        {
            "source": 651,
            "target": 14,
            "type": "cites",
            "value": 12
        },
        {
            "source": 651,
            "target": 103,
            "type": "cites",
            "value": 15
        },
        {
            "source": 897,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 898,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 651,
            "target": 102,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 899,
            "type": "cites",
            "value": 3
        },
        {
            "source": 651,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 32,
            "type": "cites",
            "value": 7
        },
        {
            "source": 443,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 444,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 99,
            "type": "cites",
            "value": 3
        },
        {
            "source": 246,
            "target": 23,
            "type": "cites",
            "value": 4
        },
        {
            "source": 443,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 444,
            "target": 96,
            "type": "cites",
            "value": 3
        },
        {
            "source": 124,
            "target": 23,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 102,
            "type": "cites",
            "value": 6
        },
        {
            "source": 443,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 444,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 103,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 686,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 686,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 142,
            "target": 275,
            "type": "cites",
            "value": 4
        },
        {
            "source": 142,
            "target": 281,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 275,
            "type": "cites",
            "value": 6
        },
        {
            "source": 293,
            "target": 281,
            "type": "cites",
            "value": 6
        },
        {
            "source": 293,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 900,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 900,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 142,
            "target": 38,
            "type": "cites",
            "value": 8
        },
        {
            "source": 901,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 900,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 38,
            "type": "cites",
            "value": 25
        },
        {
            "source": 293,
            "target": 902,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 903,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 308,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 309,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 310,
            "type": "cites",
            "value": 4
        },
        {
            "source": 901,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 900,
            "target": 12,
            "type": "cites",
            "value": 8
        },
        {
            "source": 142,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 895,
            "type": "cites",
            "value": 3
        },
        {
            "source": 142,
            "target": 904,
            "type": "cites",
            "value": 3
        },
        {
            "source": 142,
            "target": 293,
            "type": "cites",
            "value": 6
        },
        {
            "source": 293,
            "target": 142,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 904,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 167,
            "type": "cites",
            "value": 7
        },
        {
            "source": 293,
            "target": 905,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 906,
            "type": "cites",
            "value": 3
        },
        {
            "source": 741,
            "target": 688,
            "type": "cites",
            "value": 3
        },
        {
            "source": 741,
            "target": 151,
            "type": "cites",
            "value": 10
        },
        {
            "source": 741,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 152,
            "target": 151,
            "type": "cites",
            "value": 8
        },
        {
            "source": 152,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 907,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 741,
            "target": 710,
            "type": "cites",
            "value": 3
        },
        {
            "source": 741,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 152,
            "target": 709,
            "type": "cites",
            "value": 3
        },
        {
            "source": 152,
            "target": 710,
            "type": "cites",
            "value": 4
        },
        {
            "source": 152,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 151,
            "target": 709,
            "type": "cites",
            "value": 7
        },
        {
            "source": 151,
            "target": 710,
            "type": "cites",
            "value": 8
        },
        {
            "source": 151,
            "target": 187,
            "type": "cites",
            "value": 11
        },
        {
            "source": 741,
            "target": 6,
            "type": "cites",
            "value": 3
        },
        {
            "source": 741,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 152,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 908,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 909,
            "target": 608,
            "type": "cites",
            "value": 3
        },
        {
            "source": 910,
            "target": 608,
            "type": "cites",
            "value": 3
        },
        {
            "source": 911,
            "target": 544,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 544,
            "type": "cites",
            "value": 4
        },
        {
            "source": 591,
            "target": 912,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 913,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 914,
            "type": "cites",
            "value": 6
        },
        {
            "source": 911,
            "target": 591,
            "type": "cites",
            "value": 8
        },
        {
            "source": 915,
            "target": 591,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 916,
            "type": "cites",
            "value": 8
        },
        {
            "source": 591,
            "target": 917,
            "type": "cites",
            "value": 5
        },
        {
            "source": 591,
            "target": 918,
            "type": "cites",
            "value": 5
        },
        {
            "source": 591,
            "target": 919,
            "type": "cites",
            "value": 5
        },
        {
            "source": 594,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 911,
            "target": 625,
            "type": "cites",
            "value": 9
        },
        {
            "source": 911,
            "target": 540,
            "type": "cites",
            "value": 6
        },
        {
            "source": 911,
            "target": 541,
            "type": "cites",
            "value": 11
        },
        {
            "source": 915,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 915,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 591,
            "target": 625,
            "type": "cites",
            "value": 33
        },
        {
            "source": 911,
            "target": 598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 766,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 767,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 768,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 765,
            "type": "cites",
            "value": 15
        },
        {
            "source": 591,
            "target": 920,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 921,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 922,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 923,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 924,
            "type": "cites",
            "value": 4
        },
        {
            "source": 591,
            "target": 925,
            "type": "cites",
            "value": 5
        },
        {
            "source": 591,
            "target": 926,
            "type": "cites",
            "value": 4
        },
        {
            "source": 591,
            "target": 130,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 769,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 770,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 771,
            "type": "cites",
            "value": 7
        },
        {
            "source": 911,
            "target": 810,
            "type": "cites",
            "value": 3
        },
        {
            "source": 911,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 911,
            "target": 811,
            "type": "cites",
            "value": 3
        },
        {
            "source": 591,
            "target": 810,
            "type": "cites",
            "value": 5
        },
        {
            "source": 591,
            "target": 314,
            "type": "cites",
            "value": 7
        },
        {
            "source": 591,
            "target": 811,
            "type": "cites",
            "value": 5
        },
        {
            "source": 591,
            "target": 484,
            "type": "cites",
            "value": 6
        },
        {
            "source": 591,
            "target": 927,
            "type": "cites",
            "value": 3
        },
        {
            "source": 650,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 928,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 929,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 930,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 931,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 932,
            "type": "cites",
            "value": 4
        },
        {
            "source": 650,
            "target": 103,
            "type": "cites",
            "value": 6
        },
        {
            "source": 928,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 929,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 930,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 124,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 276,
            "target": 390,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 317,
            "type": "cites",
            "value": 6
        },
        {
            "source": 933,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 933,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 244,
            "type": "cites",
            "value": 14
        },
        {
            "source": 247,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 248,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 247,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 248,
            "target": 103,
            "type": "cites",
            "value": 6
        },
        {
            "source": 247,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 247,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 248,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 275,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 281,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 293,
            "target": 934,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 935,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 688,
            "type": "cites",
            "value": 4
        },
        {
            "source": 142,
            "target": 934,
            "type": "cites",
            "value": 3
        },
        {
            "source": 142,
            "target": 935,
            "type": "cites",
            "value": 3
        },
        {
            "source": 142,
            "target": 688,
            "type": "cites",
            "value": 3
        },
        {
            "source": 904,
            "target": 934,
            "type": "cites",
            "value": 3
        },
        {
            "source": 904,
            "target": 935,
            "type": "cites",
            "value": 3
        },
        {
            "source": 904,
            "target": 688,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 934,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 935,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 688,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 803,
            "type": "cites",
            "value": 3
        },
        {
            "source": 904,
            "target": 293,
            "type": "cites",
            "value": 3
        },
        {
            "source": 904,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 293,
            "type": "cites",
            "value": 8
        },
        {
            "source": 38,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 906,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 293,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 142,
            "target": 7,
            "type": "cites",
            "value": 11
        },
        {
            "source": 904,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 38,
            "target": 501,
            "type": "cites",
            "value": 5
        },
        {
            "source": 936,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 4,
            "type": "cites",
            "value": 9
        },
        {
            "source": 206,
            "target": 280,
            "type": "cites",
            "value": 5
        },
        {
            "source": 206,
            "target": 198,
            "type": "cites",
            "value": 6
        },
        {
            "source": 206,
            "target": 700,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 937,
            "target": 206,
            "type": "cites",
            "value": 3
        },
        {
            "source": 937,
            "target": 938,
            "type": "cites",
            "value": 3
        },
        {
            "source": 937,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 937,
            "target": 939,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 938,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 201,
            "type": "cites",
            "value": 8
        },
        {
            "source": 206,
            "target": 939,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 658,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 113,
            "type": "cites",
            "value": 5
        },
        {
            "source": 102,
            "target": 659,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 660,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 661,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 205,
            "type": "cites",
            "value": 6
        },
        {
            "source": 102,
            "target": 662,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 220,
            "target": 204,
            "type": "cites",
            "value": 6
        },
        {
            "source": 832,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 940,
            "type": "cites",
            "value": 4
        },
        {
            "source": 304,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 941,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 942,
            "target": 304,
            "type": "cites",
            "value": 8
        },
        {
            "source": 304,
            "target": 88,
            "type": "cites",
            "value": 5
        },
        {
            "source": 304,
            "target": 80,
            "type": "cites",
            "value": 5
        },
        {
            "source": 942,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 304,
            "target": 193,
            "type": "cites",
            "value": 15
        },
        {
            "source": 942,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 25,
            "type": "cites",
            "value": 4
        },
        {
            "source": 304,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 304,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 499,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 943,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 944,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 945,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 24,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 499,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 462,
            "target": 169,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 184,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 182,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 185,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 462,
            "target": 946,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 947,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 948,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 949,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 14,
            "type": "cites",
            "value": 9
        },
        {
            "source": 462,
            "target": 950,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 179,
            "type": "cites",
            "value": 6
        },
        {
            "source": 462,
            "target": 440,
            "type": "cites",
            "value": 5
        },
        {
            "source": 462,
            "target": 178,
            "type": "cites",
            "value": 6
        },
        {
            "source": 462,
            "target": 951,
            "type": "cites",
            "value": 3
        },
        {
            "source": 462,
            "target": 439,
            "type": "cites",
            "value": 3
        },
        {
            "source": 462,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 462,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 435,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 874,
            "type": "cites",
            "value": 10
        },
        {
            "source": 336,
            "target": 952,
            "type": "cites",
            "value": 5
        },
        {
            "source": 336,
            "target": 873,
            "type": "cites",
            "value": 7
        },
        {
            "source": 336,
            "target": 953,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 954,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 232,
            "type": "cites",
            "value": 13
        },
        {
            "source": 22,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 873,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 874,
            "type": "cites",
            "value": 10
        },
        {
            "source": 232,
            "target": 952,
            "type": "cites",
            "value": 6
        },
        {
            "source": 232,
            "target": 873,
            "type": "cites",
            "value": 7
        },
        {
            "source": 232,
            "target": 953,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 336,
            "type": "cites",
            "value": 12
        },
        {
            "source": 232,
            "target": 954,
            "type": "cites",
            "value": 6
        },
        {
            "source": 435,
            "target": 244,
            "type": "cites",
            "value": 8
        },
        {
            "source": 435,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 244,
            "type": "cites",
            "value": 14
        },
        {
            "source": 336,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 244,
            "target": 80,
            "type": "cites",
            "value": 10
        },
        {
            "source": 244,
            "target": 581,
            "type": "cites",
            "value": 8
        },
        {
            "source": 244,
            "target": 931,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 932,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 244,
            "type": "cites",
            "value": 14
        },
        {
            "source": 232,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 232,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 167,
            "type": "cites",
            "value": 6
        },
        {
            "source": 336,
            "target": 446,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 955,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 956,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 446,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 167,
            "type": "cites",
            "value": 6
        },
        {
            "source": 232,
            "target": 446,
            "type": "cites",
            "value": 4
        },
        {
            "source": 435,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 102,
            "type": "cites",
            "value": 13
        },
        {
            "source": 244,
            "target": 957,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 958,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 643,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 643,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 196,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 196,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 959,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 196,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 959,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 196,
            "type": "cites",
            "value": 9
        },
        {
            "source": 244,
            "target": 959,
            "type": "cites",
            "value": 9
        },
        {
            "source": 232,
            "target": 196,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 959,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 960,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 960,
            "type": "cites",
            "value": 8
        },
        {
            "source": 336,
            "target": 700,
            "type": "cites",
            "value": 9
        },
        {
            "source": 336,
            "target": 961,
            "type": "cites",
            "value": 8
        },
        {
            "source": 336,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 33,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 961,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 700,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 961,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 33,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 700,
            "type": "cites",
            "value": 8
        },
        {
            "source": 244,
            "target": 961,
            "type": "cites",
            "value": 8
        },
        {
            "source": 232,
            "target": 700,
            "type": "cites",
            "value": 10
        },
        {
            "source": 232,
            "target": 961,
            "type": "cites",
            "value": 9
        },
        {
            "source": 232,
            "target": 177,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 33,
            "type": "cites",
            "value": 11
        },
        {
            "source": 244,
            "target": 712,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 962,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 963,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 962,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 963,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 962,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 963,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 4,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 244,
            "target": 783,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 4,
            "type": "cites",
            "value": 9
        },
        {
            "source": 336,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 435,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 232,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 336,
            "target": 964,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 965,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 966,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 967,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 968,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 969,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 970,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 971,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 964,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 965,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 966,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 967,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 968,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 969,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 970,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 971,
            "type": "cites",
            "value": 3
        },
        {
            "source": 435,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 972,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 706,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 972,
            "type": "cites",
            "value": 12
        },
        {
            "source": 244,
            "target": 973,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 974,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 975,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 976,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 977,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 421,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 406,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 978,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 979,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 977,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 978,
            "type": "cites",
            "value": 5
        },
        {
            "source": 103,
            "target": 979,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 977,
            "type": "cites",
            "value": 15
        },
        {
            "source": 244,
            "target": 421,
            "type": "cites",
            "value": 14
        },
        {
            "source": 244,
            "target": 978,
            "type": "cites",
            "value": 11
        },
        {
            "source": 244,
            "target": 979,
            "type": "cites",
            "value": 11
        },
        {
            "source": 336,
            "target": 980,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 981,
            "type": "cites",
            "value": 6
        },
        {
            "source": 336,
            "target": 982,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 983,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 984,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 985,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 986,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 980,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 981,
            "type": "cites",
            "value": 6
        },
        {
            "source": 232,
            "target": 982,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 983,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 984,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 985,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 986,
            "type": "cites",
            "value": 3
        },
        {
            "source": 435,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 201,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 112,
            "target": 113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 201,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 112,
            "target": 46,
            "type": "cites",
            "value": 6
        },
        {
            "source": 987,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 42,
            "target": 201,
            "type": "cites",
            "value": 5
        },
        {
            "source": 112,
            "target": 52,
            "type": "cites",
            "value": 9
        },
        {
            "source": 112,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 7,
            "type": "cites",
            "value": 15
        },
        {
            "source": 42,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 85,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 9,
            "type": "cites",
            "value": 8
        },
        {
            "source": 228,
            "target": 8,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 151,
            "type": "cites",
            "value": 6
        },
        {
            "source": 228,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 577,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 197,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 641,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 391,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 669,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 391,
            "type": "cites",
            "value": 8
        },
        {
            "source": 26,
            "target": 857,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 479,
            "type": "cites",
            "value": 9
        },
        {
            "source": 455,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 235,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 233,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 988,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 195,
            "type": "cites",
            "value": 5
        },
        {
            "source": 235,
            "target": 193,
            "type": "cites",
            "value": 6
        },
        {
            "source": 233,
            "target": 193,
            "type": "cites",
            "value": 7
        },
        {
            "source": 235,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 53,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 301,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 479,
            "type": "cites",
            "value": 8
        },
        {
            "source": 233,
            "target": 301,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 479,
            "type": "cites",
            "value": 7
        },
        {
            "source": 235,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 989,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 42,
            "target": 124,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 990,
            "target": 225,
            "type": "cites",
            "value": 5
        },
        {
            "source": 991,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 992,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 390,
            "target": 225,
            "type": "cites",
            "value": 5
        },
        {
            "source": 990,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 990,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 990,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 990,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 991,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 991,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 991,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 992,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 992,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 992,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 390,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 390,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 390,
            "target": 363,
            "type": "cites",
            "value": 3
        },
        {
            "source": 390,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 225,
            "target": 320,
            "type": "cites",
            "value": 8
        },
        {
            "source": 225,
            "target": 244,
            "type": "cites",
            "value": 8
        },
        {
            "source": 225,
            "target": 363,
            "type": "cites",
            "value": 4
        },
        {
            "source": 225,
            "target": 323,
            "type": "cites",
            "value": 8
        },
        {
            "source": 993,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 994,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 151,
            "type": "cites",
            "value": 8
        },
        {
            "source": 222,
            "target": 8,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 687,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 178,
            "type": "cites",
            "value": 8
        },
        {
            "source": 41,
            "target": 449,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 450,
            "type": "cites",
            "value": 6
        },
        {
            "source": 12,
            "target": 70,
            "type": "cites",
            "value": 5
        },
        {
            "source": 12,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 995,
            "target": 62,
            "type": "cites",
            "value": 6
        },
        {
            "source": 995,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 62,
            "type": "cites",
            "value": 6
        },
        {
            "source": 206,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 995,
            "target": 200,
            "type": "cites",
            "value": 10
        },
        {
            "source": 995,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 995,
            "target": 215,
            "type": "cites",
            "value": 10
        },
        {
            "source": 206,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 995,
            "target": 996,
            "type": "cites",
            "value": 4
        },
        {
            "source": 995,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 995,
            "target": 580,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 46,
            "type": "cites",
            "value": 6
        },
        {
            "source": 206,
            "target": 580,
            "type": "cites",
            "value": 7
        },
        {
            "source": 995,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 206,
            "target": 997,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 995,
            "target": 310,
            "type": "cites",
            "value": 7
        },
        {
            "source": 995,
            "target": 307,
            "type": "cites",
            "value": 4
        },
        {
            "source": 206,
            "target": 995,
            "type": "cites",
            "value": 4
        },
        {
            "source": 995,
            "target": 206,
            "type": "cites",
            "value": 4
        },
        {
            "source": 995,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 998,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 68,
            "type": "cites",
            "value": 5
        },
        {
            "source": 38,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 878,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 895,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 280,
            "type": "cites",
            "value": 3
        },
        {
            "source": 565,
            "target": 920,
            "type": "cites",
            "value": 4
        },
        {
            "source": 565,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 561,
            "target": 920,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 313,
            "type": "cites",
            "value": 5
        },
        {
            "source": 381,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 475,
            "type": "cites",
            "value": 5
        },
        {
            "source": 381,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 707,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 707,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 71,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 281,
            "type": "cites",
            "value": 3
        },
        {
            "source": 707,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 643,
            "type": "cites",
            "value": 4
        },
        {
            "source": 12,
            "target": 475,
            "type": "cites",
            "value": 3
        },
        {
            "source": 12,
            "target": 999,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 52,
            "type": "cites",
            "value": 10
        },
        {
            "source": 492,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 62,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 492,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 281,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 177,
            "type": "cites",
            "value": 5
        },
        {
            "source": 62,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 686,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 68,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 70,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 1000,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 158,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1001,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1002,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1003,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 214,
            "target": 64,
            "type": "cites",
            "value": 3
        },
        {
            "source": 214,
            "target": 65,
            "type": "cites",
            "value": 3
        },
        {
            "source": 480,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 480,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 480,
            "target": 571,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1004,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1005,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1006,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1007,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1008,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1009,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1010,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1011,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1012,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1013,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 26,
            "type": "cites",
            "value": 10
        },
        {
            "source": 200,
            "target": 24,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 72,
            "type": "cites",
            "value": 10
        },
        {
            "source": 900,
            "target": 10,
            "type": "cites",
            "value": 3
        },
        {
            "source": 900,
            "target": 6,
            "type": "cites",
            "value": 6
        },
        {
            "source": 901,
            "target": 10,
            "type": "cites",
            "value": 3
        },
        {
            "source": 901,
            "target": 6,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1014,
            "target": 10,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1014,
            "target": 6,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1014,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 700,
            "target": 10,
            "type": "cites",
            "value": 3
        },
        {
            "source": 700,
            "target": 6,
            "type": "cites",
            "value": 6
        },
        {
            "source": 700,
            "target": 12,
            "type": "cites",
            "value": 6
        },
        {
            "source": 700,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 700,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 25,
            "target": 391,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 195,
            "type": "cites",
            "value": 6
        },
        {
            "source": 526,
            "target": 25,
            "type": "cites",
            "value": 5
        },
        {
            "source": 526,
            "target": 397,
            "type": "cites",
            "value": 4
        },
        {
            "source": 526,
            "target": 398,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1015,
            "target": 565,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1015,
            "target": 1016,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1015,
            "target": 920,
            "type": "cites",
            "value": 3
        },
        {
            "source": 548,
            "target": 565,
            "type": "cites",
            "value": 3
        },
        {
            "source": 548,
            "target": 1016,
            "type": "cites",
            "value": 3
        },
        {
            "source": 548,
            "target": 920,
            "type": "cites",
            "value": 3
        },
        {
            "source": 584,
            "target": 565,
            "type": "cites",
            "value": 3
        },
        {
            "source": 584,
            "target": 1016,
            "type": "cites",
            "value": 3
        },
        {
            "source": 584,
            "target": 920,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 565,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 1016,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 920,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 372,
            "type": "cites",
            "value": 3
        },
        {
            "source": 476,
            "target": 62,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 62,
            "type": "cites",
            "value": 8
        },
        {
            "source": 228,
            "target": 1017,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 52,
            "type": "cites",
            "value": 8
        },
        {
            "source": 228,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 717,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 717,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1018,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 77,
            "target": 55,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1019,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 602,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 77,
            "type": "cites",
            "value": 4
        },
        {
            "source": 177,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1018,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1018,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1019,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 77,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 77,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 690,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 478,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 478,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 229,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 220,
            "type": "cites",
            "value": 4
        },
        {
            "source": 221,
            "target": 229,
            "type": "cites",
            "value": 5
        },
        {
            "source": 221,
            "target": 228,
            "type": "cites",
            "value": 6
        },
        {
            "source": 221,
            "target": 220,
            "type": "cites",
            "value": 6
        },
        {
            "source": 478,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 478,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 478,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 882,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 221,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 32,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 34,
            "type": "cites",
            "value": 14
        },
        {
            "source": 32,
            "target": 369,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1020,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 7,
            "type": "cites",
            "value": 13
        },
        {
            "source": 72,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 0,
            "type": "cites",
            "value": 12
        },
        {
            "source": 72,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 1021,
            "type": "cites",
            "value": 7
        },
        {
            "source": 72,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 198,
            "type": "cites",
            "value": 9
        },
        {
            "source": 72,
            "target": 287,
            "type": "cites",
            "value": 6
        },
        {
            "source": 72,
            "target": 26,
            "type": "cites",
            "value": 10
        },
        {
            "source": 72,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 72,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 83,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1022,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1022,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 625,
            "target": 540,
            "type": "cites",
            "value": 24
        },
        {
            "source": 625,
            "target": 541,
            "type": "cites",
            "value": 35
        },
        {
            "source": 1023,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1023,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1024,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1024,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1025,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1025,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1026,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1026,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 540,
            "target": 625,
            "type": "cites",
            "value": 25
        },
        {
            "source": 540,
            "target": 541,
            "type": "cites",
            "value": 44
        },
        {
            "source": 541,
            "target": 625,
            "type": "cites",
            "value": 33
        },
        {
            "source": 541,
            "target": 540,
            "type": "cites",
            "value": 40
        },
        {
            "source": 625,
            "target": 766,
            "type": "cites",
            "value": 5
        },
        {
            "source": 625,
            "target": 598,
            "type": "cites",
            "value": 11
        },
        {
            "source": 625,
            "target": 767,
            "type": "cites",
            "value": 5
        },
        {
            "source": 625,
            "target": 768,
            "type": "cites",
            "value": 5
        },
        {
            "source": 625,
            "target": 765,
            "type": "cites",
            "value": 14
        },
        {
            "source": 540,
            "target": 766,
            "type": "cites",
            "value": 5
        },
        {
            "source": 540,
            "target": 598,
            "type": "cites",
            "value": 16
        },
        {
            "source": 540,
            "target": 767,
            "type": "cites",
            "value": 4
        },
        {
            "source": 540,
            "target": 768,
            "type": "cites",
            "value": 4
        },
        {
            "source": 540,
            "target": 765,
            "type": "cites",
            "value": 23
        },
        {
            "source": 541,
            "target": 766,
            "type": "cites",
            "value": 9
        },
        {
            "source": 541,
            "target": 598,
            "type": "cites",
            "value": 19
        },
        {
            "source": 541,
            "target": 767,
            "type": "cites",
            "value": 7
        },
        {
            "source": 541,
            "target": 768,
            "type": "cites",
            "value": 7
        },
        {
            "source": 541,
            "target": 765,
            "type": "cites",
            "value": 27
        },
        {
            "source": 625,
            "target": 920,
            "type": "cites",
            "value": 4
        },
        {
            "source": 625,
            "target": 921,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 601,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 922,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 923,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 920,
            "type": "cites",
            "value": 6
        },
        {
            "source": 541,
            "target": 921,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 601,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 922,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 923,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 769,
            "type": "cites",
            "value": 5
        },
        {
            "source": 625,
            "target": 770,
            "type": "cites",
            "value": 5
        },
        {
            "source": 625,
            "target": 771,
            "type": "cites",
            "value": 5
        },
        {
            "source": 540,
            "target": 769,
            "type": "cites",
            "value": 6
        },
        {
            "source": 540,
            "target": 770,
            "type": "cites",
            "value": 6
        },
        {
            "source": 540,
            "target": 771,
            "type": "cites",
            "value": 6
        },
        {
            "source": 541,
            "target": 769,
            "type": "cites",
            "value": 7
        },
        {
            "source": 541,
            "target": 770,
            "type": "cites",
            "value": 7
        },
        {
            "source": 541,
            "target": 771,
            "type": "cites",
            "value": 7
        },
        {
            "source": 540,
            "target": 582,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 582,
            "type": "cites",
            "value": 7
        },
        {
            "source": 625,
            "target": 591,
            "type": "cites",
            "value": 5
        },
        {
            "source": 625,
            "target": 592,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 591,
            "type": "cites",
            "value": 6
        },
        {
            "source": 541,
            "target": 592,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1027,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1027,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1027,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1027,
            "target": 541,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1028,
            "target": 12,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1,
            "target": 132,
            "type": "cites",
            "value": 5
        },
        {
            "source": 111,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 111,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 111,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 630,
            "target": 330,
            "type": "cites",
            "value": 5
        },
        {
            "source": 630,
            "target": 632,
            "type": "cites",
            "value": 6
        },
        {
            "source": 331,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 630,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 484,
            "type": "cites",
            "value": 5
        },
        {
            "source": 630,
            "target": 313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 630,
            "target": 635,
            "type": "cites",
            "value": 5
        },
        {
            "source": 630,
            "target": 636,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1029,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1030,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 690,
            "target": 132,
            "type": "cites",
            "value": 11
        },
        {
            "source": 690,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 690,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1029,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1030,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 690,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 690,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 690,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 90,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 90,
            "target": 507,
            "type": "cites",
            "value": 4
        },
        {
            "source": 90,
            "target": 1031,
            "type": "cites",
            "value": 4
        },
        {
            "source": 90,
            "target": 111,
            "type": "cites",
            "value": 6
        },
        {
            "source": 90,
            "target": 1032,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1,
            "target": 90,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1,
            "target": 507,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 1031,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 111,
            "type": "cites",
            "value": 19
        },
        {
            "source": 1,
            "target": 1032,
            "type": "cites",
            "value": 4
        },
        {
            "source": 90,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 90,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 192,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 90,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 1,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 142,
            "target": 55,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 55,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 55,
            "type": "cites",
            "value": 9
        },
        {
            "source": 4,
            "target": 56,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 68,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 69,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 70,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 71,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 188,
            "type": "cites",
            "value": 4
        },
        {
            "source": 272,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 272,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 272,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 257,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 83,
            "target": 85,
            "type": "cites",
            "value": 3
        },
        {
            "source": 84,
            "target": 7,
            "type": "cites",
            "value": 9
        },
        {
            "source": 90,
            "target": 257,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 112,
            "type": "cites",
            "value": 5
        },
        {
            "source": 111,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 111,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 111,
            "target": 90,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 90,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1033,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 7,
            "target": 194,
            "type": "cites",
            "value": 6
        },
        {
            "source": 103,
            "target": 194,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 194,
            "type": "cites",
            "value": 6
        },
        {
            "source": 725,
            "target": 720,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1034,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1035,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 71,
            "type": "cites",
            "value": 6
        },
        {
            "source": 158,
            "target": 121,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1036,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1037,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1038,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1039,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 72,
            "type": "cites",
            "value": 15
        },
        {
            "source": 158,
            "target": 188,
            "type": "cites",
            "value": 5
        },
        {
            "source": 158,
            "target": 0,
            "type": "cites",
            "value": 6
        },
        {
            "source": 213,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 724,
            "type": "cites",
            "value": 5
        },
        {
            "source": 222,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 222,
            "target": 720,
            "type": "cites",
            "value": 5
        },
        {
            "source": 572,
            "target": 22,
            "type": "cites",
            "value": 12
        },
        {
            "source": 572,
            "target": 29,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1040,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1040,
            "target": 29,
            "type": "cites",
            "value": 4
        },
        {
            "source": 572,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 572,
            "target": 14,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1040,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 30,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 30,
            "target": 14,
            "type": "cites",
            "value": 10
        },
        {
            "source": 22,
            "target": 106,
            "type": "cites",
            "value": 4
        },
        {
            "source": 572,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 572,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 572,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 572,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1040,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 320,
            "type": "cites",
            "value": 7
        },
        {
            "source": 30,
            "target": 321,
            "type": "cites",
            "value": 5
        },
        {
            "source": 30,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 30,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 30,
            "target": 324,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 322,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 324,
            "type": "cites",
            "value": 5
        },
        {
            "source": 572,
            "target": 447,
            "type": "cites",
            "value": 3
        },
        {
            "source": 572,
            "target": 480,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 447,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 480,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 447,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 480,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 247,
            "type": "cites",
            "value": 3
        },
        {
            "source": 572,
            "target": 30,
            "type": "cites",
            "value": 7
        },
        {
            "source": 572,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1040,
            "target": 30,
            "type": "cites",
            "value": 4
        },
        {
            "source": 30,
            "target": 1041,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 1042,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 1043,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 1041,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 1042,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 1043,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 645,
            "type": "cites",
            "value": 4
        },
        {
            "source": 30,
            "target": 573,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 574,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 575,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 576,
            "type": "cites",
            "value": 6
        },
        {
            "source": 30,
            "target": 228,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 228,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1044,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1045,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 77,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 132,
            "type": "cites",
            "value": 5
        },
        {
            "source": 132,
            "target": 158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 83,
            "target": 69,
            "type": "cites",
            "value": 6
        },
        {
            "source": 83,
            "target": 70,
            "type": "cites",
            "value": 7
        },
        {
            "source": 132,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 577,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 505,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 300,
            "type": "cites",
            "value": 6
        },
        {
            "source": 244,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 499,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 499,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 681,
            "type": "cites",
            "value": 10
        },
        {
            "source": 244,
            "target": 1046,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 1047,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 681,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 1046,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 1047,
            "type": "cites",
            "value": 7
        },
        {
            "source": 934,
            "target": 151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 478,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 9,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 8,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 9,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 338,
            "type": "cites",
            "value": 5
        },
        {
            "source": 478,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 20,
            "target": 1,
            "type": "cites",
            "value": 5
        },
        {
            "source": 20,
            "target": 111,
            "type": "cites",
            "value": 6
        },
        {
            "source": 22,
            "target": 1,
            "type": "cites",
            "value": 7
        },
        {
            "source": 22,
            "target": 111,
            "type": "cites",
            "value": 8
        },
        {
            "source": 20,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 0,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 229,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 42,
            "type": "cites",
            "value": 3
        },
        {
            "source": 43,
            "target": 72,
            "type": "cites",
            "value": 6
        },
        {
            "source": 43,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1048,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 687,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 72,
            "target": 4,
            "type": "cites",
            "value": 17
        },
        {
            "source": 72,
            "target": 44,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1049,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1049,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1049,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 475,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 1050,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 1051,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 475,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 1052,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 185,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 55,
            "type": "cites",
            "value": 7
        },
        {
            "source": 26,
            "target": 56,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 1053,
            "type": "cites",
            "value": 3
        },
        {
            "source": 377,
            "target": 451,
            "type": "cites",
            "value": 3
        },
        {
            "source": 377,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 377,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 451,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1054,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 700,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 700,
            "target": 177,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1055,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 226,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1056,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1056,
            "target": 220,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1056,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1056,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1056,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 895,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 272,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 717,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 717,
            "target": 7,
            "type": "cites",
            "value": 12
        },
        {
            "source": 717,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 717,
            "target": 475,
            "type": "cites",
            "value": 4
        },
        {
            "source": 717,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 717,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 710,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 710,
            "target": 151,
            "type": "cites",
            "value": 6
        },
        {
            "source": 710,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 12,
            "type": "cites",
            "value": 7
        },
        {
            "source": 187,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 55,
            "type": "cites",
            "value": 5
        },
        {
            "source": 187,
            "target": 56,
            "type": "cites",
            "value": 4
        },
        {
            "source": 710,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 885,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 1057,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1058,
            "target": 61,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1058,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1058,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 701,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 62,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1059,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1060,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1061,
            "target": 63,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 215,
            "type": "cites",
            "value": 10
        },
        {
            "source": 200,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1062,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1063,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 700,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 961,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 33,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 895,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 1064,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 1065,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1066,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 177,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 32,
            "type": "cites",
            "value": 4
        },
        {
            "source": 177,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 177,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 177,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 688,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 688,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 688,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 934,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 882,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 229,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 229,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 231,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 716,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1067,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 716,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1067,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1067,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 4,
            "type": "cites",
            "value": 8
        },
        {
            "source": 38,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 549,
            "target": 627,
            "type": "cites",
            "value": 4
        },
        {
            "source": 549,
            "target": 483,
            "type": "cites",
            "value": 4
        },
        {
            "source": 549,
            "target": 553,
            "type": "cites",
            "value": 11
        },
        {
            "source": 547,
            "target": 1068,
            "type": "cites",
            "value": 4
        },
        {
            "source": 547,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 547,
            "target": 487,
            "type": "cites",
            "value": 4
        },
        {
            "source": 535,
            "target": 186,
            "type": "cites",
            "value": 14
        },
        {
            "source": 535,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 535,
            "target": 1069,
            "type": "cites",
            "value": 3
        },
        {
            "source": 535,
            "target": 1070,
            "type": "cites",
            "value": 4
        },
        {
            "source": 535,
            "target": 1071,
            "type": "cites",
            "value": 3
        },
        {
            "source": 535,
            "target": 1072,
            "type": "cites",
            "value": 3
        },
        {
            "source": 535,
            "target": 1073,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 487,
            "type": "cites",
            "value": 6
        },
        {
            "source": 276,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 541,
            "type": "cites",
            "value": 6
        },
        {
            "source": 596,
            "target": 130,
            "type": "cites",
            "value": 4
        },
        {
            "source": 596,
            "target": 627,
            "type": "cites",
            "value": 10
        },
        {
            "source": 596,
            "target": 484,
            "type": "cites",
            "value": 10
        },
        {
            "source": 596,
            "target": 485,
            "type": "cites",
            "value": 8
        },
        {
            "source": 596,
            "target": 765,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1074,
            "target": 810,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1074,
            "target": 314,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1074,
            "target": 811,
            "type": "cites",
            "value": 5
        },
        {
            "source": 540,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 529,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1075,
            "target": 537,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1076,
            "target": 540,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1076,
            "target": 537,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1076,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1077,
            "target": 537,
            "type": "cites",
            "value": 6
        },
        {
            "source": 566,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 566,
            "target": 541,
            "type": "cites",
            "value": 6
        },
        {
            "source": 540,
            "target": 537,
            "type": "cites",
            "value": 10
        },
        {
            "source": 541,
            "target": 537,
            "type": "cites",
            "value": 13
        },
        {
            "source": 1076,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 537,
            "target": 765,
            "type": "cites",
            "value": 5
        },
        {
            "source": 537,
            "target": 1078,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 130,
            "type": "cites",
            "value": 6
        },
        {
            "source": 541,
            "target": 130,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1075,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1076,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1077,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 537,
            "target": 1079,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 1079,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1079,
            "type": "cites",
            "value": 7
        },
        {
            "source": 181,
            "target": 169,
            "type": "cites",
            "value": 6
        },
        {
            "source": 181,
            "target": 185,
            "type": "cites",
            "value": 5
        },
        {
            "source": 181,
            "target": 91,
            "type": "cites",
            "value": 7
        },
        {
            "source": 318,
            "target": 91,
            "type": "cites",
            "value": 7
        },
        {
            "source": 91,
            "target": 181,
            "type": "cites",
            "value": 10
        },
        {
            "source": 91,
            "target": 169,
            "type": "cites",
            "value": 11
        },
        {
            "source": 91,
            "target": 1080,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 185,
            "type": "cites",
            "value": 15
        },
        {
            "source": 181,
            "target": 540,
            "type": "cites",
            "value": 8
        },
        {
            "source": 181,
            "target": 541,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1081,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1081,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 318,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 318,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 91,
            "target": 540,
            "type": "cites",
            "value": 7
        },
        {
            "source": 91,
            "target": 541,
            "type": "cites",
            "value": 10
        },
        {
            "source": 91,
            "target": 314,
            "type": "cites",
            "value": 4
        },
        {
            "source": 181,
            "target": 598,
            "type": "cites",
            "value": 6
        },
        {
            "source": 91,
            "target": 598,
            "type": "cites",
            "value": 6
        },
        {
            "source": 91,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 318,
            "target": 416,
            "type": "cites",
            "value": 4
        },
        {
            "source": 181,
            "target": 769,
            "type": "cites",
            "value": 3
        },
        {
            "source": 181,
            "target": 770,
            "type": "cites",
            "value": 3
        },
        {
            "source": 181,
            "target": 771,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 552,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 552,
            "target": 556,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1082,
            "target": 839,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1082,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 839,
            "target": 504,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1083,
            "target": 839,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1083,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 839,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 504,
            "target": 808,
            "type": "cites",
            "value": 3
        },
        {
            "source": 504,
            "target": 839,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 839,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 504,
            "type": "cites",
            "value": 4
        },
        {
            "source": 316,
            "target": 91,
            "type": "cites",
            "value": 6
        },
        {
            "source": 504,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 316,
            "type": "cites",
            "value": 5
        },
        {
            "source": 627,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 605,
            "target": 485,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1084,
            "target": 556,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1084,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 605,
            "target": 487,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 312,
            "type": "cites",
            "value": 8
        },
        {
            "source": 178,
            "target": 189,
            "type": "cites",
            "value": 11
        },
        {
            "source": 178,
            "target": 1085,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1074,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1086,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 810,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 811,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1086,
            "target": 287,
            "type": "cites",
            "value": 12
        },
        {
            "source": 1086,
            "target": 1087,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 338,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1086,
            "target": 1088,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1089,
            "type": "cites",
            "value": 4
        },
        {
            "source": 566,
            "target": 565,
            "type": "cites",
            "value": 3
        },
        {
            "source": 566,
            "target": 920,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1090,
            "target": 403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1090,
            "target": 136,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1090,
            "target": 26,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1090,
            "target": 404,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1091,
            "target": 403,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1091,
            "target": 136,
            "type": "cites",
            "value": 11
        },
        {
            "source": 1091,
            "target": 512,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1091,
            "target": 511,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1091,
            "target": 26,
            "type": "cites",
            "value": 13
        },
        {
            "source": 1091,
            "target": 404,
            "type": "cites",
            "value": 13
        },
        {
            "source": 543,
            "target": 403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 543,
            "target": 136,
            "type": "cites",
            "value": 6
        },
        {
            "source": 543,
            "target": 26,
            "type": "cites",
            "value": 7
        },
        {
            "source": 543,
            "target": 404,
            "type": "cites",
            "value": 6
        },
        {
            "source": 543,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1091,
            "target": 529,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1091,
            "target": 190,
            "type": "cites",
            "value": 7
        },
        {
            "source": 543,
            "target": 190,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1092,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1092,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1092,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 566,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 561,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 561,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1093,
            "target": 625,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1093,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1093,
            "target": 541,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1094,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1094,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1094,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1095,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1095,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1095,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 405,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1096,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1097,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 405,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 313,
            "type": "cites",
            "value": 6
        },
        {
            "source": 349,
            "target": 1098,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 1099,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1100,
            "target": 349,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1101,
            "target": 349,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1102,
            "target": 349,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1100,
            "target": 641,
            "type": "cites",
            "value": 4
        },
        {
            "source": 349,
            "target": 1103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 349,
            "target": 186,
            "type": "cites",
            "value": 8
        },
        {
            "source": 332,
            "target": 330,
            "type": "cites",
            "value": 4
        },
        {
            "source": 332,
            "target": 632,
            "type": "cites",
            "value": 5
        },
        {
            "source": 631,
            "target": 330,
            "type": "cites",
            "value": 4
        },
        {
            "source": 631,
            "target": 632,
            "type": "cites",
            "value": 5
        },
        {
            "source": 631,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 631,
            "target": 627,
            "type": "cites",
            "value": 4
        },
        {
            "source": 631,
            "target": 635,
            "type": "cites",
            "value": 4
        },
        {
            "source": 631,
            "target": 636,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 34,
            "type": "cites",
            "value": 13
        },
        {
            "source": 300,
            "target": 369,
            "type": "cites",
            "value": 6
        },
        {
            "source": 300,
            "target": 1104,
            "type": "cites",
            "value": 6
        },
        {
            "source": 300,
            "target": 190,
            "type": "cites",
            "value": 6
        },
        {
            "source": 300,
            "target": 816,
            "type": "cites",
            "value": 8
        },
        {
            "source": 300,
            "target": 817,
            "type": "cites",
            "value": 5
        },
        {
            "source": 300,
            "target": 1105,
            "type": "cites",
            "value": 5
        },
        {
            "source": 300,
            "target": 288,
            "type": "cites",
            "value": 8
        },
        {
            "source": 300,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 380,
            "type": "cites",
            "value": 3
        },
        {
            "source": 300,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 300,
            "target": 299,
            "type": "cites",
            "value": 4
        },
        {
            "source": 990,
            "target": 317,
            "type": "cites",
            "value": 3
        },
        {
            "source": 390,
            "target": 317,
            "type": "cites",
            "value": 3
        },
        {
            "source": 225,
            "target": 317,
            "type": "cites",
            "value": 4
        },
        {
            "source": 317,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 317,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 317,
            "target": 321,
            "type": "cites",
            "value": 4
        },
        {
            "source": 225,
            "target": 321,
            "type": "cites",
            "value": 5
        },
        {
            "source": 317,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 225,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 225,
            "target": 324,
            "type": "cites",
            "value": 3
        },
        {
            "source": 225,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 225,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 445,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 420,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1106,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1107,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1108,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1109,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 420,
            "type": "cites",
            "value": 5
        },
        {
            "source": 572,
            "target": 420,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 480,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 1110,
            "type": "cites",
            "value": 5
        },
        {
            "source": 30,
            "target": 1111,
            "type": "cites",
            "value": 5
        },
        {
            "source": 30,
            "target": 1112,
            "type": "cites",
            "value": 9
        },
        {
            "source": 30,
            "target": 326,
            "type": "cites",
            "value": 5
        },
        {
            "source": 30,
            "target": 1113,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 1112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 572,
            "target": 1112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 1110,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 1111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 1112,
            "type": "cites",
            "value": 5
        },
        {
            "source": 29,
            "target": 326,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 1113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 64,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 282,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 64,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 282,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 64,
            "target": 44,
            "type": "cites",
            "value": 10
        },
        {
            "source": 64,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 64,
            "type": "cites",
            "value": 3
        },
        {
            "source": 282,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 328,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 317,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 328,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 25,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 160,
            "target": 398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 328,
            "target": 25,
            "type": "cites",
            "value": 4
        },
        {
            "source": 328,
            "target": 397,
            "type": "cites",
            "value": 4
        },
        {
            "source": 328,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 328,
            "target": 398,
            "type": "cites",
            "value": 4
        },
        {
            "source": 851,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 851,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 852,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 852,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 283,
            "target": 950,
            "type": "cites",
            "value": 3
        },
        {
            "source": 851,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 852,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 103,
            "target": 950,
            "type": "cites",
            "value": 3
        },
        {
            "source": 269,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 269,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 269,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 270,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 270,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 270,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 317,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 347,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 347,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 347,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 347,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 321,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 323,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 189,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 347,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 347,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 55,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 503,
            "target": 235,
            "type": "cites",
            "value": 3
        },
        {
            "source": 503,
            "target": 233,
            "type": "cites",
            "value": 3
        },
        {
            "source": 503,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 54,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 54,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 1114,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 1115,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 300,
            "type": "cites",
            "value": 7
        },
        {
            "source": 502,
            "target": 301,
            "type": "cites",
            "value": 3
        },
        {
            "source": 502,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 301,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 479,
            "type": "cites",
            "value": 7
        },
        {
            "source": 94,
            "target": 283,
            "type": "cites",
            "value": 4
        },
        {
            "source": 94,
            "target": 422,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 423,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 54,
            "target": 44,
            "type": "cites",
            "value": 11
        },
        {
            "source": 54,
            "target": 784,
            "type": "cites",
            "value": 9
        },
        {
            "source": 22,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 54,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 54,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 54,
            "target": 61,
            "type": "cites",
            "value": 9
        },
        {
            "source": 22,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 772,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 774,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 29,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 774,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 774,
            "target": 323,
            "type": "cites",
            "value": 4
        },
        {
            "source": 774,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 29,
            "target": 367,
            "type": "cites",
            "value": 3
        },
        {
            "source": 772,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 774,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 231,
            "type": "cites",
            "value": 9
        },
        {
            "source": 29,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 774,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 215,
            "type": "cites",
            "value": 6
        },
        {
            "source": 218,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 88,
            "target": 304,
            "type": "cites",
            "value": 6
        },
        {
            "source": 231,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 850,
            "type": "cites",
            "value": 5
        },
        {
            "source": 304,
            "target": 847,
            "type": "cites",
            "value": 4
        },
        {
            "source": 304,
            "target": 148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 1116,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 534,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 534,
            "target": 1117,
            "type": "cites",
            "value": 3
        },
        {
            "source": 534,
            "target": 265,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 363,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 363,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 407,
            "type": "cites",
            "value": 5
        },
        {
            "source": 276,
            "target": 410,
            "type": "cites",
            "value": 5
        },
        {
            "source": 276,
            "target": 86,
            "type": "cites",
            "value": 10
        },
        {
            "source": 276,
            "target": 87,
            "type": "cites",
            "value": 10
        },
        {
            "source": 244,
            "target": 86,
            "type": "cites",
            "value": 14
        },
        {
            "source": 244,
            "target": 87,
            "type": "cites",
            "value": 22
        },
        {
            "source": 320,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 363,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 1118,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 204,
            "type": "cites",
            "value": 7
        },
        {
            "source": 363,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 244,
            "target": 1118,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 25,
            "type": "cites",
            "value": 6
        },
        {
            "source": 276,
            "target": 397,
            "type": "cites",
            "value": 5
        },
        {
            "source": 276,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 276,
            "target": 398,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 367,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 322,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 83,
            "type": "cites",
            "value": 4
        },
        {
            "source": 322,
            "target": 14,
            "type": "cites",
            "value": 7
        },
        {
            "source": 320,
            "target": 235,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 367,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 363,
            "target": 328,
            "type": "cites",
            "value": 3
        },
        {
            "source": 363,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 363,
            "target": 233,
            "type": "cites",
            "value": 4
        },
        {
            "source": 363,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 322,
            "type": "cites",
            "value": 4
        },
        {
            "source": 367,
            "target": 276,
            "type": "cites",
            "value": 5
        },
        {
            "source": 322,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 322,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 322,
            "target": 244,
            "type": "cites",
            "value": 9
        },
        {
            "source": 367,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 367,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 367,
            "target": 244,
            "type": "cites",
            "value": 11
        },
        {
            "source": 363,
            "target": 320,
            "type": "cites",
            "value": 6
        },
        {
            "source": 363,
            "target": 321,
            "type": "cites",
            "value": 4
        },
        {
            "source": 363,
            "target": 323,
            "type": "cites",
            "value": 5
        },
        {
            "source": 363,
            "target": 244,
            "type": "cites",
            "value": 10
        },
        {
            "source": 276,
            "target": 320,
            "type": "cites",
            "value": 7
        },
        {
            "source": 276,
            "target": 321,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 323,
            "type": "cites",
            "value": 9
        },
        {
            "source": 276,
            "target": 363,
            "type": "cites",
            "value": 5
        },
        {
            "source": 367,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 166,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 225,
            "type": "cites",
            "value": 6
        },
        {
            "source": 276,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 169,
            "type": "cites",
            "value": 4
        },
        {
            "source": 94,
            "target": 185,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 257,
            "type": "cites",
            "value": 4
        },
        {
            "source": 94,
            "target": 439,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 94,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 861,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 439,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 171,
            "type": "cites",
            "value": 14
        },
        {
            "source": 14,
            "target": 479,
            "type": "cites",
            "value": 14
        },
        {
            "source": 103,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 799,
            "target": 103,
            "type": "cites",
            "value": 5
        },
        {
            "source": 799,
            "target": 267,
            "type": "cites",
            "value": 3
        },
        {
            "source": 799,
            "target": 242,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 25,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1119,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1119,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1120,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1120,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1121,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1121,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 326,
            "target": 398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 407,
            "type": "cites",
            "value": 3
        },
        {
            "source": 326,
            "target": 410,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 92,
            "type": "cites",
            "value": 12
        },
        {
            "source": 326,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 23,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 301,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 949,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1122,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1123,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1124,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1041,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1125,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1122,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1123,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1124,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1041,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1125,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 38,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 873,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 873,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 700,
            "type": "cites",
            "value": 5
        },
        {
            "source": 177,
            "target": 33,
            "type": "cites",
            "value": 8
        },
        {
            "source": 9,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 125,
            "type": "cites",
            "value": 7
        },
        {
            "source": 38,
            "target": 245,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 1126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1127,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 23,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 137,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 170,
            "target": 265,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 581,
            "type": "cites",
            "value": 9
        },
        {
            "source": 41,
            "target": 931,
            "type": "cites",
            "value": 4
        },
        {
            "source": 41,
            "target": 932,
            "type": "cites",
            "value": 4
        },
        {
            "source": 171,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 171,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 170,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 170,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 702,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 178,
            "target": 581,
            "type": "cites",
            "value": 10
        },
        {
            "source": 178,
            "target": 931,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 932,
            "type": "cites",
            "value": 5
        },
        {
            "source": 41,
            "target": 1128,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 1128,
            "type": "cites",
            "value": 4
        },
        {
            "source": 41,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 102,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 171,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 170,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 651,
            "type": "cites",
            "value": 7
        },
        {
            "source": 41,
            "target": 652,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 653,
            "type": "cites",
            "value": 3
        },
        {
            "source": 702,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 652,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 653,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 430,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 972,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 972,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 206,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1129,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1130,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1131,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1132,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1133,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 12,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 307,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1134,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1134,
            "target": 244,
            "type": "cites",
            "value": 11
        },
        {
            "source": 31,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 419,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 419,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 836,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 453,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 453,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 77,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1135,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1136,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1137,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1138,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1139,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 30,
            "target": 1140,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 1141,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 1142,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 1143,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 1140,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 420,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 1141,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 1142,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 1112,
            "type": "cites",
            "value": 8
        },
        {
            "source": 22,
            "target": 1143,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 572,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 440,
            "type": "cites",
            "value": 5
        },
        {
            "source": 22,
            "target": 1110,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 1111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 326,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 1113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1144,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 459,
            "type": "cites",
            "value": 3
        },
        {
            "source": 336,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 206,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 305,
            "target": 310,
            "type": "cites",
            "value": 18
        },
        {
            "source": 310,
            "target": 1145,
            "type": "cites",
            "value": 5
        },
        {
            "source": 305,
            "target": 1146,
            "type": "cites",
            "value": 5
        },
        {
            "source": 305,
            "target": 308,
            "type": "cites",
            "value": 11
        },
        {
            "source": 305,
            "target": 309,
            "type": "cites",
            "value": 10
        },
        {
            "source": 310,
            "target": 1146,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 308,
            "type": "cites",
            "value": 15
        },
        {
            "source": 310,
            "target": 309,
            "type": "cites",
            "value": 14
        },
        {
            "source": 305,
            "target": 75,
            "type": "cites",
            "value": 3
        },
        {
            "source": 305,
            "target": 1147,
            "type": "cites",
            "value": 3
        },
        {
            "source": 305,
            "target": 1148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 305,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 75,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 1147,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 1148,
            "type": "cites",
            "value": 5
        },
        {
            "source": 310,
            "target": 177,
            "type": "cites",
            "value": 6
        },
        {
            "source": 305,
            "target": 167,
            "type": "cites",
            "value": 7
        },
        {
            "source": 305,
            "target": 446,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 167,
            "type": "cites",
            "value": 7
        },
        {
            "source": 310,
            "target": 446,
            "type": "cites",
            "value": 6
        },
        {
            "source": 305,
            "target": 38,
            "type": "cites",
            "value": 9
        },
        {
            "source": 310,
            "target": 38,
            "type": "cites",
            "value": 15
        },
        {
            "source": 310,
            "target": 1149,
            "type": "cites",
            "value": 4
        },
        {
            "source": 305,
            "target": 902,
            "type": "cites",
            "value": 3
        },
        {
            "source": 305,
            "target": 903,
            "type": "cites",
            "value": 4
        },
        {
            "source": 310,
            "target": 902,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 903,
            "type": "cites",
            "value": 8
        },
        {
            "source": 310,
            "target": 966,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1150,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 125,
            "target": 34,
            "type": "cites",
            "value": 18
        },
        {
            "source": 125,
            "target": 369,
            "type": "cites",
            "value": 7
        },
        {
            "source": 125,
            "target": 32,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 459,
            "target": 451,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 451,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 46,
            "type": "cites",
            "value": 8
        },
        {
            "source": 265,
            "target": 996,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 580,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 479,
            "type": "cites",
            "value": 9
        },
        {
            "source": 265,
            "target": 301,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 571,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 479,
            "type": "cites",
            "value": 8
        },
        {
            "source": 125,
            "target": 228,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1,
            "target": 1151,
            "type": "cites",
            "value": 5
        },
        {
            "source": 111,
            "target": 1151,
            "type": "cites",
            "value": 4
        },
        {
            "source": 111,
            "target": 1,
            "type": "cites",
            "value": 16
        },
        {
            "source": 1,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 282,
            "target": 288,
            "type": "cites",
            "value": 5
        },
        {
            "source": 282,
            "target": 215,
            "type": "cites",
            "value": 8
        },
        {
            "source": 304,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1152,
            "target": 304,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1152,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 783,
            "type": "cites",
            "value": 5
        },
        {
            "source": 803,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 225,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 323,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 304,
            "type": "cites",
            "value": 6
        },
        {
            "source": 204,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1153,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1154,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 276,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1153,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1154,
            "target": 247,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1154,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1154,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 247,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 933,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1104,
            "type": "cites",
            "value": 3
        },
        {
            "source": 111,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 102,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 255,
            "target": 265,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 265,
            "type": "cites",
            "value": 11
        },
        {
            "source": 125,
            "target": 867,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 872,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 0,
            "type": "cites",
            "value": 11
        },
        {
            "source": 125,
            "target": 198,
            "type": "cites",
            "value": 7
        },
        {
            "source": 125,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 1155,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 126,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 50,
            "target": 281,
            "type": "cites",
            "value": 6
        },
        {
            "source": 50,
            "target": 132,
            "type": "cites",
            "value": 6
        },
        {
            "source": 52,
            "target": 132,
            "type": "cites",
            "value": 8
        },
        {
            "source": 235,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 235,
            "target": 125,
            "type": "cites",
            "value": 8
        },
        {
            "source": 233,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 649,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 304,
            "type": "cites",
            "value": 9
        },
        {
            "source": 233,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 44,
            "target": 1156,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 1157,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 1158,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 1159,
            "type": "cites",
            "value": 4
        },
        {
            "source": 44,
            "target": 784,
            "type": "cites",
            "value": 19
        },
        {
            "source": 214,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 1000,
            "type": "cites",
            "value": 6
        },
        {
            "source": 205,
            "target": 55,
            "type": "cites",
            "value": 7
        },
        {
            "source": 205,
            "target": 7,
            "type": "cites",
            "value": 13
        },
        {
            "source": 205,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 205,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 83,
            "type": "cites",
            "value": 5
        },
        {
            "source": 205,
            "target": 77,
            "type": "cites",
            "value": 3
        },
        {
            "source": 205,
            "target": 71,
            "type": "cites",
            "value": 3
        },
        {
            "source": 205,
            "target": 188,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1160,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1161,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 151,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 61,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 62,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 52,
            "target": 996,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 44,
            "type": "cites",
            "value": 26
        },
        {
            "source": 61,
            "target": 63,
            "type": "cites",
            "value": 7
        },
        {
            "source": 61,
            "target": 1000,
            "type": "cites",
            "value": 5
        },
        {
            "source": 61,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 61,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 813,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 231,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1162,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 125,
            "type": "cites",
            "value": 11
        },
        {
            "source": 231,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 709,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 709,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 501,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 501,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 197,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1163,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 8,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 750,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1163,
            "target": 750,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 750,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 1164,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 60,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 1164,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 60,
            "type": "cites",
            "value": 7
        },
        {
            "source": 0,
            "target": 1164,
            "type": "cites",
            "value": 9
        },
        {
            "source": 228,
            "target": 812,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 220,
            "target": 649,
            "type": "cites",
            "value": 6
        },
        {
            "source": 228,
            "target": 649,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 244,
            "type": "cites",
            "value": 15
        },
        {
            "source": 1165,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1165,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 206,
            "type": "cites",
            "value": 5
        },
        {
            "source": 450,
            "target": 14,
            "type": "cites",
            "value": 8
        },
        {
            "source": 580,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 580,
            "target": 53,
            "type": "cites",
            "value": 5
        },
        {
            "source": 580,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 580,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 580,
            "target": 1166,
            "type": "cites",
            "value": 3
        },
        {
            "source": 580,
            "target": 1167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 32,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 34,
            "type": "cites",
            "value": 7
        },
        {
            "source": 178,
            "target": 369,
            "type": "cites",
            "value": 7
        },
        {
            "source": 450,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 229,
            "target": 220,
            "type": "cites",
            "value": 4
        },
        {
            "source": 221,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 478,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 221,
            "target": 0,
            "type": "cites",
            "value": 6
        },
        {
            "source": 229,
            "target": 221,
            "type": "cites",
            "value": 3
        },
        {
            "source": 229,
            "target": 228,
            "type": "cites",
            "value": 3
        },
        {
            "source": 420,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1168,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1056,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1169,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 30,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 29,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 420,
            "target": 22,
            "type": "cites",
            "value": 8
        },
        {
            "source": 420,
            "target": 1112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 420,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1168,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1168,
            "target": 1112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1170,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1056,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1056,
            "target": 1112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1169,
            "target": 22,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1169,
            "target": 1112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 420,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1168,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1056,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1169,
            "target": 29,
            "type": "cites",
            "value": 3
        },
        {
            "source": 420,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 420,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1168,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1168,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1056,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1169,
            "target": 30,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1169,
            "target": 220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 281,
            "target": 192,
            "type": "cites",
            "value": 7
        },
        {
            "source": 281,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 716,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 670,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 24,
            "type": "cites",
            "value": 6
        },
        {
            "source": 391,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 391,
            "target": 215,
            "type": "cites",
            "value": 6
        },
        {
            "source": 391,
            "target": 282,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 654,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 186,
            "type": "cites",
            "value": 9
        },
        {
            "source": 26,
            "target": 381,
            "type": "cites",
            "value": 7
        },
        {
            "source": 26,
            "target": 301,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1171,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1172,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1173,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1174,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1175,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1176,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 501,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 220,
            "target": 111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 758,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 758,
            "target": 853,
            "type": "cites",
            "value": 5
        },
        {
            "source": 758,
            "target": 111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 229,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 229,
            "target": 853,
            "type": "cites",
            "value": 5
        },
        {
            "source": 229,
            "target": 111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 221,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 221,
            "target": 853,
            "type": "cites",
            "value": 5
        },
        {
            "source": 221,
            "target": 111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 220,
            "target": 1177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 758,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 758,
            "target": 649,
            "type": "cites",
            "value": 3
        },
        {
            "source": 229,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 229,
            "target": 649,
            "type": "cites",
            "value": 3
        },
        {
            "source": 221,
            "target": 1177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 221,
            "target": 257,
            "type": "cites",
            "value": 6
        },
        {
            "source": 221,
            "target": 649,
            "type": "cites",
            "value": 5
        },
        {
            "source": 257,
            "target": 1177,
            "type": "cites",
            "value": 8
        },
        {
            "source": 257,
            "target": 416,
            "type": "cites",
            "value": 4
        },
        {
            "source": 257,
            "target": 1178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 257,
            "target": 649,
            "type": "cites",
            "value": 13
        },
        {
            "source": 257,
            "target": 571,
            "type": "cites",
            "value": 4
        },
        {
            "source": 257,
            "target": 479,
            "type": "cites",
            "value": 7
        },
        {
            "source": 257,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 220,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1179,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1179,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1180,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1180,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 1177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1179,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1180,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 83,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1181,
            "target": 177,
            "type": "cites",
            "value": 6
        },
        {
            "source": 74,
            "target": 75,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 75,
            "type": "cites",
            "value": 16
        },
        {
            "source": 38,
            "target": 1147,
            "type": "cites",
            "value": 7
        },
        {
            "source": 38,
            "target": 1148,
            "type": "cites",
            "value": 7
        },
        {
            "source": 38,
            "target": 177,
            "type": "cites",
            "value": 13
        },
        {
            "source": 74,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 38,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 74,
            "target": 76,
            "type": "cites",
            "value": 3
        },
        {
            "source": 74,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 38,
            "target": 76,
            "type": "cites",
            "value": 10
        },
        {
            "source": 38,
            "target": 167,
            "type": "cites",
            "value": 12
        },
        {
            "source": 38,
            "target": 446,
            "type": "cites",
            "value": 11
        },
        {
            "source": 38,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 74,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 1182,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 62,
            "type": "cites",
            "value": 6
        },
        {
            "source": 74,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 74,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 83,
            "type": "cites",
            "value": 7
        },
        {
            "source": 38,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 112,
            "type": "cites",
            "value": 4
        },
        {
            "source": 49,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 74,
            "type": "cites",
            "value": 4
        },
        {
            "source": 38,
            "target": 308,
            "type": "cites",
            "value": 9
        },
        {
            "source": 38,
            "target": 309,
            "type": "cites",
            "value": 9
        },
        {
            "source": 38,
            "target": 310,
            "type": "cites",
            "value": 13
        },
        {
            "source": 347,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1183,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1183,
            "target": 740,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1183,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1184,
            "type": "cites",
            "value": 7
        },
        {
            "source": 189,
            "target": 1185,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1188,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 312,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1188,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1189,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 549,
            "target": 554,
            "type": "cites",
            "value": 8
        },
        {
            "source": 549,
            "target": 555,
            "type": "cites",
            "value": 7
        },
        {
            "source": 549,
            "target": 556,
            "type": "cites",
            "value": 12
        },
        {
            "source": 549,
            "target": 130,
            "type": "cites",
            "value": 10
        },
        {
            "source": 549,
            "target": 487,
            "type": "cites",
            "value": 4
        },
        {
            "source": 601,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 547,
            "target": 204,
            "type": "cites",
            "value": 10
        },
        {
            "source": 547,
            "target": 412,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1190,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1191,
            "target": 591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1191,
            "target": 625,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1191,
            "target": 540,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1191,
            "target": 541,
            "type": "cites",
            "value": 6
        },
        {
            "source": 740,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 740,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 740,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 740,
            "target": 711,
            "type": "cites",
            "value": 4
        },
        {
            "source": 615,
            "target": 608,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 130,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1069,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1070,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 535,
            "type": "cites",
            "value": 10
        },
        {
            "source": 126,
            "target": 186,
            "type": "cites",
            "value": 8
        },
        {
            "source": 126,
            "target": 535,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 381,
            "type": "cites",
            "value": 23
        },
        {
            "source": 186,
            "target": 524,
            "type": "cites",
            "value": 4
        },
        {
            "source": 504,
            "target": 558,
            "type": "cites",
            "value": 3
        },
        {
            "source": 530,
            "target": 535,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1192,
            "target": 632,
            "type": "cites",
            "value": 3
        },
        {
            "source": 631,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 1193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 330,
            "target": 485,
            "type": "cites",
            "value": 5
        },
        {
            "source": 630,
            "target": 1193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 630,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 630,
            "target": 485,
            "type": "cites",
            "value": 4
        },
        {
            "source": 632,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 1193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 632,
            "target": 300,
            "type": "cites",
            "value": 7
        },
        {
            "source": 632,
            "target": 485,
            "type": "cites",
            "value": 5
        },
        {
            "source": 331,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 484,
            "type": "cites",
            "value": 5
        },
        {
            "source": 632,
            "target": 635,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 636,
            "type": "cites",
            "value": 5
        },
        {
            "source": 300,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1194,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 186,
            "type": "cites",
            "value": 28
        },
        {
            "source": 381,
            "target": 535,
            "type": "cites",
            "value": 6
        },
        {
            "source": 300,
            "target": 186,
            "type": "cites",
            "value": 20
        },
        {
            "source": 381,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 1195,
            "type": "cites",
            "value": 7
        },
        {
            "source": 381,
            "target": 1196,
            "type": "cites",
            "value": 6
        },
        {
            "source": 381,
            "target": 1197,
            "type": "cites",
            "value": 7
        },
        {
            "source": 300,
            "target": 1195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 1196,
            "type": "cites",
            "value": 7
        },
        {
            "source": 300,
            "target": 1197,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 189,
            "type": "cites",
            "value": 8
        },
        {
            "source": 381,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 1198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 380,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 412,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 558,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1199,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1199,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 811,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 811,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1199,
            "target": 811,
            "type": "cites",
            "value": 3
        },
        {
            "source": 811,
            "target": 1200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 87,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 87,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 87,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 87,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 87,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 323,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 317,
            "target": 195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1201,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1202,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 378,
            "target": 62,
            "type": "cites",
            "value": 9
        },
        {
            "source": 378,
            "target": 228,
            "type": "cites",
            "value": 5
        },
        {
            "source": 378,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1203,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 1050,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 1051,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 475,
            "type": "cites",
            "value": 6
        },
        {
            "source": 91,
            "target": 182,
            "type": "cites",
            "value": 9
        },
        {
            "source": 91,
            "target": 1052,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 693,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 645,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1204,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1205,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1206,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1207,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1165,
            "target": 22,
            "type": "cites",
            "value": 4
        },
        {
            "source": 645,
            "target": 22,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1208,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 31,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 304,
            "type": "cites",
            "value": 6
        },
        {
            "source": 235,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 235,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 850,
            "type": "cites",
            "value": 4
        },
        {
            "source": 233,
            "target": 850,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 376,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 526,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 895,
            "type": "cites",
            "value": 3
        },
        {
            "source": 116,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 195,
            "target": 940,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 940,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 195,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 25,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 338,
            "type": "cites",
            "value": 5
        },
        {
            "source": 25,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 25,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 195,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 198,
            "target": 68,
            "type": "cites",
            "value": 6
        },
        {
            "source": 198,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 198,
            "target": 70,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 148,
            "target": 581,
            "type": "cites",
            "value": 4
        },
        {
            "source": 96,
            "target": 581,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 421,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1128,
            "target": 80,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1128,
            "target": 581,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1128,
            "target": 244,
            "type": "cites",
            "value": 15
        },
        {
            "source": 1128,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1128,
            "target": 41,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1128,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1128,
            "target": 450,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 439,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1128,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1128,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1128,
            "target": 206,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 568,
            "type": "cites",
            "value": 3
        },
        {
            "source": 41,
            "target": 1209,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 568,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 1209,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 167,
            "type": "cites",
            "value": 7
        },
        {
            "source": 125,
            "target": 446,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 378,
            "target": 700,
            "type": "cites",
            "value": 5
        },
        {
            "source": 378,
            "target": 961,
            "type": "cites",
            "value": 5
        },
        {
            "source": 378,
            "target": 33,
            "type": "cites",
            "value": 5
        },
        {
            "source": 62,
            "target": 700,
            "type": "cites",
            "value": 7
        },
        {
            "source": 62,
            "target": 961,
            "type": "cites",
            "value": 7
        },
        {
            "source": 62,
            "target": 33,
            "type": "cites",
            "value": 7
        },
        {
            "source": 125,
            "target": 700,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 961,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 33,
            "type": "cites",
            "value": 8
        },
        {
            "source": 62,
            "target": 962,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 963,
            "type": "cites",
            "value": 3
        },
        {
            "source": 476,
            "target": 378,
            "type": "cites",
            "value": 5
        },
        {
            "source": 476,
            "target": 379,
            "type": "cites",
            "value": 3
        },
        {
            "source": 476,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 378,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 228,
            "target": 378,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 60,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 287,
            "type": "cites",
            "value": 9
        },
        {
            "source": 62,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 92,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 1210,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 232,
            "type": "cites",
            "value": 10
        },
        {
            "source": 92,
            "target": 125,
            "type": "cites",
            "value": 8
        },
        {
            "source": 92,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1211,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1154,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 92,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 206,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1212,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 420,
            "target": 151,
            "type": "cites",
            "value": 3
        },
        {
            "source": 30,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 198,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 1164,
            "type": "cites",
            "value": 4
        },
        {
            "source": 22,
            "target": 1213,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1214,
            "target": 189,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1215,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 200,
            "target": 654,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 515,
            "type": "cites",
            "value": 7
        },
        {
            "source": 92,
            "target": 245,
            "type": "cites",
            "value": 7
        },
        {
            "source": 92,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 1216,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 981,
            "type": "cites",
            "value": 6
        },
        {
            "source": 304,
            "target": 53,
            "type": "cites",
            "value": 7
        },
        {
            "source": 304,
            "target": 610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1217,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 1218,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 1219,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 1220,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 186,
            "type": "cites",
            "value": 9
        },
        {
            "source": 14,
            "target": 1221,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 1222,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 1223,
            "type": "cites",
            "value": 4
        },
        {
            "source": 391,
            "target": 265,
            "type": "cites",
            "value": 8
        },
        {
            "source": 391,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 338,
            "type": "cites",
            "value": 11
        },
        {
            "source": 391,
            "target": 137,
            "type": "cites",
            "value": 5
        },
        {
            "source": 391,
            "target": 125,
            "type": "cites",
            "value": 16
        },
        {
            "source": 265,
            "target": 588,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 318,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 318,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 91,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 379,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 691,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 184,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 76,
            "target": 167,
            "type": "cites",
            "value": 6
        },
        {
            "source": 76,
            "target": 446,
            "type": "cites",
            "value": 5
        },
        {
            "source": 446,
            "target": 167,
            "type": "cites",
            "value": 8
        },
        {
            "source": 76,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 446,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 76,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 76,
            "target": 310,
            "type": "cites",
            "value": 4
        },
        {
            "source": 446,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 446,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1224,
            "target": 540,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1225,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1226,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1227,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1228,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1229,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1230,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1231,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1232,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1233,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1234,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1235,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1236,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1224,
            "target": 541,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1225,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1226,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1227,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1228,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1229,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1230,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1231,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1232,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1233,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1234,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1235,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1236,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1224,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1224,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 1224,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 391,
            "target": 996,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 571,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1237,
            "target": 836,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 186,
            "type": "cites",
            "value": 8
        },
        {
            "source": 338,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 4,
            "type": "cites",
            "value": 10
        },
        {
            "source": 92,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 997,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1238,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1238,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1239,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1239,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 374,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1238,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1239,
            "target": 288,
            "type": "cites",
            "value": 4
        },
        {
            "source": 439,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1240,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1241,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1242,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 479,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 439,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 479,
            "target": 193,
            "type": "cites",
            "value": 7
        },
        {
            "source": 479,
            "target": 304,
            "type": "cites",
            "value": 5
        },
        {
            "source": 479,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1243,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1244,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1245,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1246,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1247,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1248,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1249,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 7,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 121,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1250,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1251,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1252,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 529,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 84,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 84,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 84,
            "target": 68,
            "type": "cites",
            "value": 6
        },
        {
            "source": 84,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 84,
            "target": 70,
            "type": "cites",
            "value": 6
        },
        {
            "source": 8,
            "target": 9,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 455,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 257,
            "type": "cites",
            "value": 5
        },
        {
            "source": 112,
            "target": 649,
            "type": "cites",
            "value": 3
        },
        {
            "source": 84,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1253,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 455,
            "target": 257,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 1254,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 1255,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 1132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 1256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 170,
            "type": "cites",
            "value": 5
        },
        {
            "source": 257,
            "target": 171,
            "type": "cites",
            "value": 5
        },
        {
            "source": 112,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 111,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1257,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 577,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 187,
            "type": "cites",
            "value": 3
        },
        {
            "source": 64,
            "target": 784,
            "type": "cites",
            "value": 6
        },
        {
            "source": 64,
            "target": 61,
            "type": "cites",
            "value": 7
        },
        {
            "source": 65,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 63,
            "target": 44,
            "type": "cites",
            "value": 10
        },
        {
            "source": 63,
            "target": 784,
            "type": "cites",
            "value": 5
        },
        {
            "source": 63,
            "target": 61,
            "type": "cites",
            "value": 5
        },
        {
            "source": 63,
            "target": 1000,
            "type": "cites",
            "value": 4
        },
        {
            "source": 63,
            "target": 72,
            "type": "cites",
            "value": 14
        },
        {
            "source": 63,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 64,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 63,
            "target": 0,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1258,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 313,
            "target": 1259,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 1260,
            "type": "cites",
            "value": 3
        },
        {
            "source": 221,
            "target": 1261,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1262,
            "target": 132,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1263,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1262,
            "target": 52,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1263,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1264,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 50,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1265,
            "target": 192,
            "type": "cites",
            "value": 4
        },
        {
            "source": 46,
            "target": 192,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1264,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1265,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 187,
            "type": "cites",
            "value": 4
        },
        {
            "source": 192,
            "target": 46,
            "type": "cites",
            "value": 17
        },
        {
            "source": 46,
            "target": 49,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 341,
            "type": "cites",
            "value": 4
        },
        {
            "source": 192,
            "target": 52,
            "type": "cites",
            "value": 6
        },
        {
            "source": 50,
            "target": 455,
            "type": "cites",
            "value": 4
        },
        {
            "source": 192,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1159,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1159,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 784,
            "type": "cites",
            "value": 11
        },
        {
            "source": 61,
            "target": 54,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 7,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1134,
            "target": 320,
            "type": "cites",
            "value": 4
        },
        {
            "source": 281,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 132,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 682,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 682,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 63,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 63,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 862,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 102,
            "target": 244,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1267,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 113,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 87,
            "type": "cites",
            "value": 10
        },
        {
            "source": 187,
            "target": 86,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1267,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1268,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1269,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1270,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 187,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 304,
            "type": "cites",
            "value": 6
        },
        {
            "source": 215,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 571,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 571,
            "type": "cites",
            "value": 4
        },
        {
            "source": 215,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 215,
            "target": 24,
            "type": "cites",
            "value": 4
        },
        {
            "source": 215,
            "target": 26,
            "type": "cites",
            "value": 7
        },
        {
            "source": 215,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 177,
            "target": 686,
            "type": "cites",
            "value": 5
        },
        {
            "source": 177,
            "target": 1271,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 1272,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 421,
            "type": "cites",
            "value": 3
        },
        {
            "source": 322,
            "target": 102,
            "type": "cites",
            "value": 5
        },
        {
            "source": 102,
            "target": 977,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 977,
            "type": "cites",
            "value": 8
        },
        {
            "source": 14,
            "target": 978,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 979,
            "type": "cites",
            "value": 5
        },
        {
            "source": 46,
            "target": 53,
            "type": "cites",
            "value": 5
        },
        {
            "source": 935,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 759,
            "target": 1,
            "type": "cites",
            "value": 5
        },
        {
            "source": 759,
            "target": 853,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 853,
            "type": "cites",
            "value": 5
        },
        {
            "source": 228,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 228,
            "target": 853,
            "type": "cites",
            "value": 5
        },
        {
            "source": 759,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 111,
            "type": "cites",
            "value": 4
        },
        {
            "source": 228,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1273,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1274,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1274,
            "target": 193,
            "type": "cites",
            "value": 11
        },
        {
            "source": 1274,
            "target": 195,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1275,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 912,
            "type": "cites",
            "value": 6
        },
        {
            "source": 231,
            "target": 1276,
            "type": "cites",
            "value": 5
        },
        {
            "source": 231,
            "target": 1277,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 1261,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 475,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 1278,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 595,
            "target": 591,
            "type": "cites",
            "value": 6
        },
        {
            "source": 595,
            "target": 625,
            "type": "cites",
            "value": 8
        },
        {
            "source": 595,
            "target": 540,
            "type": "cites",
            "value": 7
        },
        {
            "source": 595,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 595,
            "target": 541,
            "type": "cites",
            "value": 12
        },
        {
            "source": 1279,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 595,
            "target": 598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 608,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 608,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 190,
            "type": "cites",
            "value": 5
        },
        {
            "source": 186,
            "target": 816,
            "type": "cites",
            "value": 7
        },
        {
            "source": 186,
            "target": 817,
            "type": "cites",
            "value": 5
        },
        {
            "source": 186,
            "target": 300,
            "type": "cites",
            "value": 15
        },
        {
            "source": 1280,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1281,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 299,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1282,
            "target": 186,
            "type": "cites",
            "value": 6
        },
        {
            "source": 380,
            "target": 186,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1280,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 494,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1282,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 380,
            "target": 381,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1280,
            "target": 380,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 380,
            "type": "cites",
            "value": 7
        },
        {
            "source": 186,
            "target": 288,
            "type": "cites",
            "value": 5
        },
        {
            "source": 186,
            "target": 287,
            "type": "cites",
            "value": 14
        },
        {
            "source": 1282,
            "target": 380,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1282,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 380,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 380,
            "target": 287,
            "type": "cites",
            "value": 8
        },
        {
            "source": 883,
            "target": 26,
            "type": "cites",
            "value": 6
        },
        {
            "source": 883,
            "target": 316,
            "type": "cites",
            "value": 9
        },
        {
            "source": 312,
            "target": 1283,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1284,
            "target": 547,
            "type": "cites",
            "value": 4
        },
        {
            "source": 547,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 547,
            "target": 814,
            "type": "cites",
            "value": 3
        },
        {
            "source": 547,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1284,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1285,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1286,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1287,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1284,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 312,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 412,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 556,
            "type": "cites",
            "value": 21
        },
        {
            "source": 486,
            "target": 487,
            "type": "cites",
            "value": 13
        },
        {
            "source": 189,
            "target": 1289,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 1078,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1214,
            "target": 1184,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1184,
            "target": 189,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1187,
            "target": 1184,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1187,
            "target": 189,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1184,
            "target": 814,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1184,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1184,
            "target": 1290,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 814,
            "type": "cites",
            "value": 7
        },
        {
            "source": 189,
            "target": 127,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1290,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 1291,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 412,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1292,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 649,
            "target": 257,
            "type": "cites",
            "value": 5
        },
        {
            "source": 649,
            "target": 1177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 649,
            "target": 479,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1293,
            "target": 136,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1293,
            "target": 403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1293,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1293,
            "target": 404,
            "type": "cites",
            "value": 5
        },
        {
            "source": 403,
            "target": 136,
            "type": "cites",
            "value": 14
        },
        {
            "source": 403,
            "target": 512,
            "type": "cites",
            "value": 6
        },
        {
            "source": 403,
            "target": 511,
            "type": "cites",
            "value": 6
        },
        {
            "source": 403,
            "target": 26,
            "type": "cites",
            "value": 15
        },
        {
            "source": 403,
            "target": 404,
            "type": "cites",
            "value": 14
        },
        {
            "source": 404,
            "target": 512,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 511,
            "type": "cites",
            "value": 6
        },
        {
            "source": 136,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 136,
            "target": 1294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 186,
            "type": "cites",
            "value": 5
        },
        {
            "source": 136,
            "target": 1295,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 1296,
            "type": "cites",
            "value": 6
        },
        {
            "source": 136,
            "target": 1297,
            "type": "cites",
            "value": 4
        },
        {
            "source": 136,
            "target": 1298,
            "type": "cites",
            "value": 4
        },
        {
            "source": 403,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 403,
            "target": 1294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 403,
            "target": 186,
            "type": "cites",
            "value": 5
        },
        {
            "source": 403,
            "target": 1295,
            "type": "cites",
            "value": 4
        },
        {
            "source": 403,
            "target": 1296,
            "type": "cites",
            "value": 5
        },
        {
            "source": 403,
            "target": 1297,
            "type": "cites",
            "value": 4
        },
        {
            "source": 403,
            "target": 1298,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 1294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 1295,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 1296,
            "type": "cites",
            "value": 5
        },
        {
            "source": 26,
            "target": 1297,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 1298,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 381,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 1294,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 404,
            "target": 1295,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 1296,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 1297,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 1298,
            "type": "cites",
            "value": 4
        },
        {
            "source": 403,
            "target": 529,
            "type": "cites",
            "value": 5
        },
        {
            "source": 403,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 1299,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 765,
            "type": "cites",
            "value": 5
        },
        {
            "source": 404,
            "target": 1078,
            "type": "cites",
            "value": 4
        },
        {
            "source": 641,
            "target": 1103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 641,
            "target": 349,
            "type": "cites",
            "value": 6
        },
        {
            "source": 641,
            "target": 1300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 641,
            "target": 1301,
            "type": "cites",
            "value": 3
        },
        {
            "source": 641,
            "target": 640,
            "type": "cites",
            "value": 6
        },
        {
            "source": 353,
            "target": 299,
            "type": "cites",
            "value": 3
        },
        {
            "source": 641,
            "target": 1302,
            "type": "cites",
            "value": 8
        },
        {
            "source": 641,
            "target": 1303,
            "type": "cites",
            "value": 8
        },
        {
            "source": 641,
            "target": 1304,
            "type": "cites",
            "value": 8
        },
        {
            "source": 641,
            "target": 1305,
            "type": "cites",
            "value": 6
        },
        {
            "source": 641,
            "target": 186,
            "type": "cites",
            "value": 12
        },
        {
            "source": 641,
            "target": 1306,
            "type": "cites",
            "value": 6
        },
        {
            "source": 641,
            "target": 299,
            "type": "cites",
            "value": 12
        },
        {
            "source": 349,
            "target": 1302,
            "type": "cites",
            "value": 4
        },
        {
            "source": 349,
            "target": 1303,
            "type": "cites",
            "value": 4
        },
        {
            "source": 349,
            "target": 1304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 349,
            "target": 1305,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 1306,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 549,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 553,
            "type": "cites",
            "value": 15
        },
        {
            "source": 486,
            "target": 554,
            "type": "cites",
            "value": 12
        },
        {
            "source": 486,
            "target": 483,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 555,
            "type": "cites",
            "value": 9
        },
        {
            "source": 486,
            "target": 130,
            "type": "cites",
            "value": 16
        },
        {
            "source": 1307,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1308,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 204,
            "target": 311,
            "type": "cites",
            "value": 8
        },
        {
            "source": 204,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1309,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 530,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 316,
            "type": "cites",
            "value": 8
        },
        {
            "source": 381,
            "target": 1310,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 1311,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 204,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1312,
            "target": 487,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1312,
            "target": 313,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1312,
            "target": 634,
            "type": "cites",
            "value": 5
        },
        {
            "source": 135,
            "target": 125,
            "type": "cites",
            "value": 10
        },
        {
            "source": 166,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 161,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 162,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 163,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 164,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 165,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 166,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 135,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 446,
            "type": "cites",
            "value": 11
        },
        {
            "source": 167,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 135,
            "target": 2,
            "type": "cites",
            "value": 6
        },
        {
            "source": 135,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 166,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 166,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 135,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 135,
            "target": 590,
            "type": "cites",
            "value": 6
        },
        {
            "source": 60,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 87,
            "target": 86,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 573,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 574,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 575,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 576,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 1313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 1313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 135,
            "type": "cites",
            "value": 6
        },
        {
            "source": 137,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 867,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 265,
            "type": "cites",
            "value": 5
        },
        {
            "source": 137,
            "target": 125,
            "type": "cites",
            "value": 10
        },
        {
            "source": 255,
            "target": 137,
            "type": "cites",
            "value": 4
        },
        {
            "source": 255,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 137,
            "target": 588,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 589,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 0,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 231,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 875,
            "type": "cites",
            "value": 4
        },
        {
            "source": 245,
            "target": 231,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 245,
            "type": "cites",
            "value": 3
        },
        {
            "source": 245,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 830,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 244,
            "type": "cites",
            "value": 8
        },
        {
            "source": 249,
            "target": 439,
            "type": "cites",
            "value": 3
        },
        {
            "source": 249,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 847,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1314,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1315,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 367,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 1316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 92,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1317,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 323,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 323,
            "target": 244,
            "type": "cites",
            "value": 12
        },
        {
            "source": 323,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 323,
            "target": 321,
            "type": "cites",
            "value": 3
        },
        {
            "source": 323,
            "target": 322,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 103,
            "type": "cites",
            "value": 12
        },
        {
            "source": 320,
            "target": 430,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 430,
            "type": "cites",
            "value": 11
        },
        {
            "source": 244,
            "target": 431,
            "type": "cites",
            "value": 4
        },
        {
            "source": 323,
            "target": 1118,
            "type": "cites",
            "value": 3
        },
        {
            "source": 323,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 323,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 379,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 379,
            "target": 961,
            "type": "cites",
            "value": 3
        },
        {
            "source": 379,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 62,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 282,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 215,
            "type": "cites",
            "value": 7
        },
        {
            "source": 141,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 22,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1318,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1318,
            "target": 581,
            "type": "cites",
            "value": 4
        },
        {
            "source": 684,
            "target": 938,
            "type": "cites",
            "value": 3
        },
        {
            "source": 684,
            "target": 939,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 938,
            "type": "cites",
            "value": 4
        },
        {
            "source": 201,
            "target": 939,
            "type": "cites",
            "value": 4
        },
        {
            "source": 309,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 308,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 4,
            "type": "cites",
            "value": 12
        },
        {
            "source": 1319,
            "target": 310,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1319,
            "target": 308,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1319,
            "target": 309,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1320,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 309,
            "target": 1146,
            "type": "cites",
            "value": 3
        },
        {
            "source": 309,
            "target": 310,
            "type": "cites",
            "value": 9
        },
        {
            "source": 309,
            "target": 308,
            "type": "cites",
            "value": 5
        },
        {
            "source": 308,
            "target": 1146,
            "type": "cites",
            "value": 4
        },
        {
            "source": 308,
            "target": 310,
            "type": "cites",
            "value": 14
        },
        {
            "source": 308,
            "target": 309,
            "type": "cites",
            "value": 7
        },
        {
            "source": 309,
            "target": 75,
            "type": "cites",
            "value": 4
        },
        {
            "source": 309,
            "target": 1147,
            "type": "cites",
            "value": 4
        },
        {
            "source": 309,
            "target": 1148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 309,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 308,
            "target": 75,
            "type": "cites",
            "value": 5
        },
        {
            "source": 308,
            "target": 1147,
            "type": "cites",
            "value": 5
        },
        {
            "source": 308,
            "target": 1148,
            "type": "cites",
            "value": 4
        },
        {
            "source": 308,
            "target": 177,
            "type": "cites",
            "value": 5
        },
        {
            "source": 309,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 309,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 308,
            "target": 167,
            "type": "cites",
            "value": 4
        },
        {
            "source": 308,
            "target": 446,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1319,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 309,
            "target": 38,
            "type": "cites",
            "value": 4
        },
        {
            "source": 308,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 309,
            "target": 903,
            "type": "cites",
            "value": 3
        },
        {
            "source": 308,
            "target": 903,
            "type": "cites",
            "value": 4
        },
        {
            "source": 310,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 1164,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 26,
            "type": "cites",
            "value": 5
        },
        {
            "source": 215,
            "target": 451,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1321,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 215,
            "type": "cites",
            "value": 6
        },
        {
            "source": 288,
            "target": 285,
            "type": "cites",
            "value": 5
        },
        {
            "source": 288,
            "target": 287,
            "type": "cites",
            "value": 15
        },
        {
            "source": 215,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 215,
            "target": 288,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1321,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1322,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 4,
            "type": "cites",
            "value": 8
        },
        {
            "source": 288,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 69,
            "type": "cites",
            "value": 4
        },
        {
            "source": 282,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 288,
            "target": 282,
            "type": "cites",
            "value": 4
        },
        {
            "source": 288,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 24,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 282,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 455,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1323,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 796,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1324,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1325,
            "target": 1210,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1325,
            "target": 276,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1325,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 367,
            "target": 1210,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1326,
            "target": 1210,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1326,
            "target": 276,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1326,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 952,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 1210,
            "type": "cites",
            "value": 7
        },
        {
            "source": 276,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 961,
            "type": "cites",
            "value": 3
        },
        {
            "source": 367,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 700,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 961,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 33,
            "type": "cites",
            "value": 4
        },
        {
            "source": 276,
            "target": 1327,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1328,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 740,
            "target": 287,
            "type": "cites",
            "value": 5
        },
        {
            "source": 820,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 346,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1329,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 820,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 820,
            "target": 810,
            "type": "cites",
            "value": 3
        },
        {
            "source": 820,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 820,
            "target": 811,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1330,
            "target": 53,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1330,
            "target": 193,
            "type": "cites",
            "value": 7
        },
        {
            "source": 892,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1274,
            "target": 53,
            "type": "cites",
            "value": 4
        },
        {
            "source": 113,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 113,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 171,
            "target": 170,
            "type": "cites",
            "value": 12
        },
        {
            "source": 170,
            "target": 171,
            "type": "cites",
            "value": 12
        },
        {
            "source": 1331,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1331,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1332,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1332,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 171,
            "target": 257,
            "type": "cites",
            "value": 4
        },
        {
            "source": 170,
            "target": 257,
            "type": "cites",
            "value": 4
        },
        {
            "source": 171,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 170,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1333,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1334,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 995,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1335,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1146,
            "target": 310,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1146,
            "target": 75,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1146,
            "target": 1147,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1146,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1146,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 308,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 981,
            "type": "cites",
            "value": 4
        },
        {
            "source": 310,
            "target": 1336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 1337,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 1338,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 836,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 486,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 1261,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 1339,
            "type": "cites",
            "value": 9
        },
        {
            "source": 125,
            "target": 1340,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 166,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 391,
            "target": 670,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 686,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1341,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1342,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1343,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1344,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1345,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1346,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1347,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 24,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 391,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 1164,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 1164,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 1213,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 60,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 247,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1274,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 72,
            "type": "cites",
            "value": 7
        },
        {
            "source": 9,
            "target": 75,
            "type": "cites",
            "value": 4
        },
        {
            "source": 9,
            "target": 1147,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 1148,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 9,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 38,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 750,
            "type": "cites",
            "value": 3
        },
        {
            "source": 9,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 122,
            "target": 195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 191,
            "type": "cites",
            "value": 7
        },
        {
            "source": 4,
            "target": 195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 112,
            "type": "cites",
            "value": 5
        },
        {
            "source": 199,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 72,
            "target": 885,
            "type": "cites",
            "value": 5
        },
        {
            "source": 72,
            "target": 1057,
            "type": "cites",
            "value": 3
        },
        {
            "source": 199,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 72,
            "target": 158,
            "type": "cites",
            "value": 4
        },
        {
            "source": 72,
            "target": 1348,
            "type": "cites",
            "value": 5
        },
        {
            "source": 198,
            "target": 304,
            "type": "cites",
            "value": 4
        },
        {
            "source": 198,
            "target": 712,
            "type": "cites",
            "value": 4
        },
        {
            "source": 199,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 63,
            "type": "cites",
            "value": 14
        },
        {
            "source": 72,
            "target": 1349,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 1350,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 1351,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 198,
            "target": 32,
            "type": "cites",
            "value": 4
        },
        {
            "source": 72,
            "target": 886,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 887,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1054,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 885,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 1057,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 281,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 192,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 281,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 50,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 34,
            "type": "cites",
            "value": 10
        },
        {
            "source": 52,
            "target": 369,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1352,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1352,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 50,
            "target": 200,
            "type": "cites",
            "value": 4
        },
        {
            "source": 50,
            "target": 215,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1353,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1353,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1354,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1354,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 200,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 215,
            "type": "cites",
            "value": 4
        },
        {
            "source": 51,
            "target": 215,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 654,
            "type": "cites",
            "value": 5
        },
        {
            "source": 215,
            "target": 515,
            "type": "cites",
            "value": 7
        },
        {
            "source": 215,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 376,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1355,
            "target": 46,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1356,
            "target": 46,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1355,
            "target": 281,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1355,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1356,
            "target": 281,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1356,
            "target": 192,
            "type": "cites",
            "value": 5
        },
        {
            "source": 52,
            "target": 281,
            "type": "cites",
            "value": 4
        },
        {
            "source": 73,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 73,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 649,
            "target": 416,
            "type": "cites",
            "value": 3
        },
        {
            "source": 192,
            "target": 132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 341,
            "target": 46,
            "type": "cites",
            "value": 5
        },
        {
            "source": 341,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 1357,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 1358,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 1359,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 257,
            "target": 1360,
            "type": "cites",
            "value": 3
        },
        {
            "source": 649,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 455,
            "type": "cites",
            "value": 4
        },
        {
            "source": 455,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 599,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 599,
            "target": 541,
            "type": "cites",
            "value": 8
        },
        {
            "source": 600,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 600,
            "target": 541,
            "type": "cites",
            "value": 8
        },
        {
            "source": 599,
            "target": 591,
            "type": "cites",
            "value": 5
        },
        {
            "source": 600,
            "target": 591,
            "type": "cites",
            "value": 5
        },
        {
            "source": 599,
            "target": 625,
            "type": "cites",
            "value": 5
        },
        {
            "source": 600,
            "target": 625,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 179,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1361,
            "target": 169,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 369,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1361,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 440,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 951,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 1362,
            "type": "cites",
            "value": 4
        },
        {
            "source": 404,
            "target": 232,
            "type": "cites",
            "value": 7
        },
        {
            "source": 404,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 403,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 136,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1091,
            "target": 883,
            "type": "cites",
            "value": 4
        },
        {
            "source": 26,
            "target": 883,
            "type": "cites",
            "value": 5
        },
        {
            "source": 178,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1363,
            "target": 312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1363,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 178,
            "target": 1283,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1363,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 312,
            "target": 1085,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1364,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 126,
            "target": 895,
            "type": "cites",
            "value": 3
        },
        {
            "source": 126,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 126,
            "target": 72,
            "type": "cites",
            "value": 5
        },
        {
            "source": 186,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1200,
            "target": 811,
            "type": "cites",
            "value": 3
        },
        {
            "source": 558,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 86,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 87,
            "target": 191,
            "type": "cites",
            "value": 6
        },
        {
            "source": 188,
            "target": 55,
            "type": "cites",
            "value": 4
        },
        {
            "source": 188,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 196,
            "type": "cites",
            "value": 6
        },
        {
            "source": 234,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 234,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 320,
            "target": 977,
            "type": "cites",
            "value": 3
        },
        {
            "source": 321,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 321,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 320,
            "target": 651,
            "type": "cites",
            "value": 6
        },
        {
            "source": 260,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 288,
            "type": "cites",
            "value": 6
        },
        {
            "source": 235,
            "target": 783,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 75,
            "type": "cites",
            "value": 5
        },
        {
            "source": 293,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 905,
            "target": 38,
            "type": "cites",
            "value": 6
        },
        {
            "source": 906,
            "target": 38,
            "type": "cites",
            "value": 8
        },
        {
            "source": 906,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 1365,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 902,
            "type": "cites",
            "value": 6
        },
        {
            "source": 38,
            "target": 903,
            "type": "cites",
            "value": 7
        },
        {
            "source": 293,
            "target": 76,
            "type": "cites",
            "value": 5
        },
        {
            "source": 293,
            "target": 446,
            "type": "cites",
            "value": 6
        },
        {
            "source": 293,
            "target": 135,
            "type": "cites",
            "value": 5
        },
        {
            "source": 38,
            "target": 135,
            "type": "cites",
            "value": 5
        },
        {
            "source": 293,
            "target": 137,
            "type": "cites",
            "value": 3
        },
        {
            "source": 293,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 590,
            "type": "cites",
            "value": 3
        },
        {
            "source": 38,
            "target": 3,
            "type": "cites",
            "value": 4
        },
        {
            "source": 80,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 581,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 310,
            "type": "cites",
            "value": 5
        },
        {
            "source": 307,
            "target": 308,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 309,
            "type": "cites",
            "value": 3
        },
        {
            "source": 307,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 3,
            "type": "cites",
            "value": 5
        },
        {
            "source": 773,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 773,
            "target": 244,
            "type": "cites",
            "value": 14
        },
        {
            "source": 750,
            "target": 197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 391,
            "target": 60,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 1164,
            "type": "cites",
            "value": 4
        },
        {
            "source": 265,
            "target": 80,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 265,
            "target": 200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1366,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 955,
            "type": "cites",
            "value": 4
        },
        {
            "source": 167,
            "target": 956,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1366,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 187,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 961,
            "type": "cites",
            "value": 3
        },
        {
            "source": 167,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 167,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1112,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 75,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1367,
            "target": 38,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1368,
            "target": 38,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1369,
            "target": 38,
            "type": "cites",
            "value": 7
        },
        {
            "source": 75,
            "target": 38,
            "type": "cites",
            "value": 10
        },
        {
            "source": 75,
            "target": 902,
            "type": "cites",
            "value": 3
        },
        {
            "source": 75,
            "target": 903,
            "type": "cites",
            "value": 3
        },
        {
            "source": 75,
            "target": 308,
            "type": "cites",
            "value": 3
        },
        {
            "source": 75,
            "target": 309,
            "type": "cites",
            "value": 3
        },
        {
            "source": 75,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 14,
            "type": "cites",
            "value": 5
        },
        {
            "source": 62,
            "target": 196,
            "type": "cites",
            "value": 4
        },
        {
            "source": 62,
            "target": 959,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1370,
            "target": 33,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1370,
            "target": 700,
            "type": "cites",
            "value": 4
        },
        {
            "source": 700,
            "target": 33,
            "type": "cites",
            "value": 14
        },
        {
            "source": 1370,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 700,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 132,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 938,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 938,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 939,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 939,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 938,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 132,
            "target": 3,
            "type": "cites",
            "value": 4
        },
        {
            "source": 939,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 938,
            "target": 280,
            "type": "cites",
            "value": 3
        },
        {
            "source": 938,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 280,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 939,
            "target": 280,
            "type": "cites",
            "value": 3
        },
        {
            "source": 939,
            "target": 198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 201,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 938,
            "target": 201,
            "type": "cites",
            "value": 4
        },
        {
            "source": 938,
            "target": 939,
            "type": "cites",
            "value": 3
        },
        {
            "source": 939,
            "target": 938,
            "type": "cites",
            "value": 3
        },
        {
            "source": 939,
            "target": 201,
            "type": "cites",
            "value": 4
        },
        {
            "source": 329,
            "target": 287,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1371,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 288,
            "target": 381,
            "type": "cites",
            "value": 7
        },
        {
            "source": 287,
            "target": 816,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 817,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 816,
            "type": "cites",
            "value": 4
        },
        {
            "source": 288,
            "target": 817,
            "type": "cites",
            "value": 4
        },
        {
            "source": 288,
            "target": 1105,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 300,
            "type": "cites",
            "value": 6
        },
        {
            "source": 329,
            "target": 288,
            "type": "cites",
            "value": 10
        },
        {
            "source": 329,
            "target": 0,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1371,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 288,
            "type": "cites",
            "value": 7
        },
        {
            "source": 287,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 288,
            "target": 329,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 0,
            "type": "cites",
            "value": 7
        },
        {
            "source": 287,
            "target": 285,
            "type": "cites",
            "value": 3
        },
        {
            "source": 329,
            "target": 377,
            "type": "cites",
            "value": 3
        },
        {
            "source": 329,
            "target": 1372,
            "type": "cites",
            "value": 3
        },
        {
            "source": 329,
            "target": 1373,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 377,
            "type": "cites",
            "value": 8
        },
        {
            "source": 288,
            "target": 1372,
            "type": "cites",
            "value": 7
        },
        {
            "source": 288,
            "target": 1373,
            "type": "cites",
            "value": 7
        },
        {
            "source": 288,
            "target": 380,
            "type": "cites",
            "value": 4
        },
        {
            "source": 287,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 749,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 1374,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 1375,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 1376,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 749,
            "type": "cites",
            "value": 4
        },
        {
            "source": 329,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 288,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 190,
            "target": 381,
            "type": "cites",
            "value": 5
        },
        {
            "source": 190,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 1104,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 1377,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 1378,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 596,
            "type": "cites",
            "value": 6
        },
        {
            "source": 190,
            "target": 1078,
            "type": "cites",
            "value": 10
        },
        {
            "source": 190,
            "target": 186,
            "type": "cites",
            "value": 6
        },
        {
            "source": 190,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 810,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 811,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1128,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 56,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1379,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 450,
            "target": 80,
            "type": "cites",
            "value": 6
        },
        {
            "source": 450,
            "target": 581,
            "type": "cites",
            "value": 6
        },
        {
            "source": 450,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 320,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1379,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 103,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1134,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1128,
            "target": 651,
            "type": "cites",
            "value": 4
        },
        {
            "source": 450,
            "target": 651,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1128,
            "target": 972,
            "type": "cites",
            "value": 3
        },
        {
            "source": 450,
            "target": 972,
            "type": "cites",
            "value": 3
        },
        {
            "source": 938,
            "target": 72,
            "type": "cites",
            "value": 7
        },
        {
            "source": 939,
            "target": 72,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1380,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 201,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 995,
            "target": 654,
            "type": "cites",
            "value": 7
        },
        {
            "source": 995,
            "target": 515,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 700,
            "type": "cites",
            "value": 8
        },
        {
            "source": 33,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1049,
            "target": 336,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1049,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 833,
            "type": "cites",
            "value": 6
        },
        {
            "source": 125,
            "target": 834,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1381,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 1382,
            "type": "cites",
            "value": 5
        },
        {
            "source": 836,
            "target": 1383,
            "type": "cites",
            "value": 3
        },
        {
            "source": 836,
            "target": 1384,
            "type": "cites",
            "value": 3
        },
        {
            "source": 836,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1385,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1386,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 112,
            "target": 191,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1145,
            "target": 3,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1145,
            "target": 997,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1145,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1145,
            "target": 310,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1,
            "target": 853,
            "type": "cites",
            "value": 6
        },
        {
            "source": 111,
            "target": 853,
            "type": "cites",
            "value": 5
        },
        {
            "source": 66,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 66,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 67,
            "target": 135,
            "type": "cites",
            "value": 3
        },
        {
            "source": 67,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 588,
            "type": "cites",
            "value": 4
        },
        {
            "source": 66,
            "target": 2,
            "type": "cites",
            "value": 4
        },
        {
            "source": 66,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 67,
            "target": 2,
            "type": "cites",
            "value": 4
        },
        {
            "source": 67,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 4,
            "type": "cites",
            "value": 10
        },
        {
            "source": 135,
            "target": 1313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 711,
            "target": 7,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 675,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 208,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 209,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1387,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1388,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 1198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 287,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 287,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 288,
            "target": 72,
            "type": "cites",
            "value": 8
        },
        {
            "source": 76,
            "target": 903,
            "type": "cites",
            "value": 3
        },
        {
            "source": 76,
            "target": 308,
            "type": "cites",
            "value": 3
        },
        {
            "source": 76,
            "target": 309,
            "type": "cites",
            "value": 3
        },
        {
            "source": 80,
            "target": 571,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 571,
            "type": "cites",
            "value": 6
        },
        {
            "source": 14,
            "target": 191,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 590,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 1053,
            "type": "cites",
            "value": 7
        },
        {
            "source": 14,
            "target": 1216,
            "type": "cites",
            "value": 5
        },
        {
            "source": 112,
            "target": 996,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 996,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 1,
            "type": "cites",
            "value": 4
        },
        {
            "source": 112,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1389,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 783,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 1390,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 287,
            "type": "cites",
            "value": 4
        },
        {
            "source": 699,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 784,
            "target": 177,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 202,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 169,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 784,
            "target": 61,
            "type": "cites",
            "value": 15
        },
        {
            "source": 784,
            "target": 44,
            "type": "cites",
            "value": 19
        },
        {
            "source": 784,
            "target": 54,
            "type": "cites",
            "value": 4
        },
        {
            "source": 357,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 357,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 357,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 357,
            "target": 51,
            "type": "cites",
            "value": 4
        },
        {
            "source": 357,
            "target": 372,
            "type": "cites",
            "value": 4
        },
        {
            "source": 62,
            "target": 580,
            "type": "cites",
            "value": 4
        },
        {
            "source": 52,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1391,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 62,
            "target": 455,
            "type": "cites",
            "value": 3
        },
        {
            "source": 62,
            "target": 201,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1391,
            "target": 50,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 112,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 194,
            "type": "cites",
            "value": 7
        },
        {
            "source": 25,
            "target": 200,
            "type": "cites",
            "value": 5
        },
        {
            "source": 25,
            "target": 215,
            "type": "cites",
            "value": 5
        },
        {
            "source": 25,
            "target": 515,
            "type": "cites",
            "value": 3
        },
        {
            "source": 181,
            "target": 182,
            "type": "cites",
            "value": 3
        },
        {
            "source": 183,
            "target": 169,
            "type": "cites",
            "value": 3
        },
        {
            "source": 183,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 181,
            "target": 178,
            "type": "cites",
            "value": 5
        },
        {
            "source": 183,
            "target": 178,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 178,
            "type": "cites",
            "value": 7
        },
        {
            "source": 91,
            "target": 184,
            "type": "cites",
            "value": 4
        },
        {
            "source": 181,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 181,
            "target": 439,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 179,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 440,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 951,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 439,
            "type": "cites",
            "value": 4
        },
        {
            "source": 181,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 181,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 170,
            "type": "cites",
            "value": 5
        },
        {
            "source": 91,
            "target": 171,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1392,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 190,
            "target": 479,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1392,
            "target": 596,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1392,
            "target": 1078,
            "type": "cites",
            "value": 6
        },
        {
            "source": 895,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 287,
            "target": 63,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1393,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1393,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1394,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1394,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1395,
            "target": 132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1395,
            "target": 126,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 1085,
            "type": "cites",
            "value": 3
        },
        {
            "source": 912,
            "target": 541,
            "type": "cites",
            "value": 13
        },
        {
            "source": 912,
            "target": 765,
            "type": "cites",
            "value": 4
        },
        {
            "source": 912,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 912,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 912,
            "target": 920,
            "type": "cites",
            "value": 3
        },
        {
            "source": 912,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 912,
            "target": 231,
            "type": "cites",
            "value": 5
        },
        {
            "source": 642,
            "target": 641,
            "type": "cites",
            "value": 4
        },
        {
            "source": 640,
            "target": 1396,
            "type": "cites",
            "value": 3
        },
        {
            "source": 640,
            "target": 641,
            "type": "cites",
            "value": 5
        },
        {
            "source": 641,
            "target": 1396,
            "type": "cites",
            "value": 7
        },
        {
            "source": 641,
            "target": 1397,
            "type": "cites",
            "value": 3
        },
        {
            "source": 349,
            "target": 1396,
            "type": "cites",
            "value": 3
        },
        {
            "source": 640,
            "target": 1302,
            "type": "cites",
            "value": 3
        },
        {
            "source": 640,
            "target": 299,
            "type": "cites",
            "value": 4
        },
        {
            "source": 641,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 381,
            "target": 299,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 641,
            "type": "cites",
            "value": 4
        },
        {
            "source": 299,
            "target": 126,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 1398,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 1399,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1400,
            "target": 504,
            "type": "cites",
            "value": 4
        },
        {
            "source": 504,
            "target": 807,
            "type": "cites",
            "value": 3
        },
        {
            "source": 504,
            "target": 809,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1401,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 558,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1402,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1403,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1404,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1405,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 479,
            "type": "cites",
            "value": 7
        },
        {
            "source": 486,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 883,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1406,
            "target": 883,
            "type": "cites",
            "value": 4
        },
        {
            "source": 884,
            "target": 1407,
            "type": "cites",
            "value": 4
        },
        {
            "source": 884,
            "target": 1408,
            "type": "cites",
            "value": 6
        },
        {
            "source": 884,
            "target": 883,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1408,
            "target": 1407,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1408,
            "target": 884,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1408,
            "target": 1409,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1408,
            "target": 883,
            "type": "cites",
            "value": 7
        },
        {
            "source": 883,
            "target": 1407,
            "type": "cites",
            "value": 5
        },
        {
            "source": 883,
            "target": 1409,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1408,
            "type": "cites",
            "value": 7
        },
        {
            "source": 884,
            "target": 1196,
            "type": "cites",
            "value": 4
        },
        {
            "source": 884,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1408,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1410,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1196,
            "type": "cites",
            "value": 5
        },
        {
            "source": 883,
            "target": 1411,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 381,
            "target": 1412,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1413,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 920,
            "target": 316,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1414,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 912,
            "target": 316,
            "type": "cites",
            "value": 7
        },
        {
            "source": 541,
            "target": 316,
            "type": "cites",
            "value": 6
        },
        {
            "source": 541,
            "target": 485,
            "type": "cites",
            "value": 4
        },
        {
            "source": 920,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 920,
            "target": 1266,
            "type": "cites",
            "value": 7
        },
        {
            "source": 541,
            "target": 912,
            "type": "cites",
            "value": 7
        },
        {
            "source": 912,
            "target": 560,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 318,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 189,
            "type": "cites",
            "value": 10
        },
        {
            "source": 381,
            "target": 245,
            "type": "cites",
            "value": 3
        },
        {
            "source": 535,
            "target": 189,
            "type": "cites",
            "value": 5
        },
        {
            "source": 381,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 535,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 287,
            "type": "cites",
            "value": 7
        },
        {
            "source": 381,
            "target": 72,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 627,
            "type": "cites",
            "value": 12
        },
        {
            "source": 1415,
            "target": 1408,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1416,
            "target": 1417,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1416,
            "target": 1408,
            "type": "cites",
            "value": 5
        },
        {
            "source": 320,
            "target": 681,
            "type": "cites",
            "value": 3
        },
        {
            "source": 324,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 1216,
            "type": "cites",
            "value": 4
        },
        {
            "source": 320,
            "target": 972,
            "type": "cites",
            "value": 5
        },
        {
            "source": 244,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 875,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 2,
            "type": "cites",
            "value": 3
        },
        {
            "source": 137,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 946,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 947,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 948,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 950,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 1047,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1418,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1419,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 440,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1418,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1418,
            "target": 440,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1418,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1419,
            "target": 179,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1419,
            "target": 440,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1419,
            "target": 178,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 439,
            "type": "cites",
            "value": 4
        },
        {
            "source": 843,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 34,
            "type": "cites",
            "value": 7
        },
        {
            "source": 189,
            "target": 369,
            "type": "cites",
            "value": 5
        },
        {
            "source": 54,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 300,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 300,
            "target": 54,
            "type": "cites",
            "value": 5
        },
        {
            "source": 713,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 232,
            "type": "cites",
            "value": 13
        },
        {
            "source": 193,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 1420,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 1421,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 1422,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 960,
            "type": "cites",
            "value": 5
        },
        {
            "source": 14,
            "target": 959,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1423,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 449,
            "target": 80,
            "type": "cites",
            "value": 4
        },
        {
            "source": 449,
            "target": 581,
            "type": "cites",
            "value": 4
        },
        {
            "source": 449,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 449,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1318,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1424,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 280,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 198,
            "target": 501,
            "type": "cites",
            "value": 6
        },
        {
            "source": 46,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 46,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 46,
            "target": 580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 873,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1425,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1425,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1426,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 336,
            "target": 87,
            "type": "cites",
            "value": 7
        },
        {
            "source": 232,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 87,
            "type": "cites",
            "value": 8
        },
        {
            "source": 873,
            "target": 336,
            "type": "cites",
            "value": 5
        },
        {
            "source": 873,
            "target": 874,
            "type": "cites",
            "value": 6
        },
        {
            "source": 873,
            "target": 232,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1425,
            "target": 336,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1425,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1425,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1426,
            "target": 336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1426,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1426,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 873,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 997,
            "type": "cites",
            "value": 5
        },
        {
            "source": 336,
            "target": 113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 336,
            "target": 1427,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 997,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 1427,
            "type": "cites",
            "value": 4
        },
        {
            "source": 873,
            "target": 952,
            "type": "cites",
            "value": 3
        },
        {
            "source": 873,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1425,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1425,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 1362,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 1428,
            "type": "cites",
            "value": 3
        },
        {
            "source": 31,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 651,
            "target": 244,
            "type": "cites",
            "value": 18
        },
        {
            "source": 651,
            "target": 977,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1429,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1430,
            "target": 244,
            "type": "cites",
            "value": 7
        },
        {
            "source": 976,
            "target": 244,
            "type": "cites",
            "value": 13
        },
        {
            "source": 651,
            "target": 320,
            "type": "cites",
            "value": 5
        },
        {
            "source": 773,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 976,
            "target": 102,
            "type": "cites",
            "value": 4
        },
        {
            "source": 773,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 976,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 773,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 31,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 651,
            "target": 430,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1429,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1430,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 976,
            "target": 103,
            "type": "cites",
            "value": 7
        },
        {
            "source": 244,
            "target": 432,
            "type": "cites",
            "value": 4
        },
        {
            "source": 773,
            "target": 651,
            "type": "cites",
            "value": 3
        },
        {
            "source": 773,
            "target": 972,
            "type": "cites",
            "value": 3
        },
        {
            "source": 976,
            "target": 651,
            "type": "cites",
            "value": 3
        },
        {
            "source": 976,
            "target": 972,
            "type": "cites",
            "value": 3
        },
        {
            "source": 651,
            "target": 421,
            "type": "cites",
            "value": 4
        },
        {
            "source": 989,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1431,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1432,
            "target": 304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 989,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 989,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 989,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1431,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1431,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1431,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1432,
            "target": 784,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1432,
            "target": 44,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1432,
            "target": 61,
            "type": "cites",
            "value": 3
        },
        {
            "source": 64,
            "target": 288,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 231,
            "type": "cites",
            "value": 8
        },
        {
            "source": 231,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1198,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 285,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1374,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1375,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1376,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 749,
            "type": "cites",
            "value": 3
        },
        {
            "source": 380,
            "target": 749,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1433,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1434,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1435,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1436,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1437,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 197,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 197,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 523,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 421,
            "target": 87,
            "type": "cites",
            "value": 7
        },
        {
            "source": 192,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 199,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 2,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 2,
            "target": 113,
            "type": "cites",
            "value": 5
        },
        {
            "source": 2,
            "target": 4,
            "type": "cites",
            "value": 10
        },
        {
            "source": 177,
            "target": 1438,
            "type": "cites",
            "value": 3
        },
        {
            "source": 177,
            "target": 1313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1390,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 861,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 861,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 439,
            "target": 170,
            "type": "cites",
            "value": 10
        },
        {
            "source": 439,
            "target": 171,
            "type": "cites",
            "value": 10
        },
        {
            "source": 479,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 479,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 479,
            "target": 439,
            "type": "cites",
            "value": 8
        },
        {
            "source": 479,
            "target": 187,
            "type": "cites",
            "value": 5
        },
        {
            "source": 479,
            "target": 192,
            "type": "cites",
            "value": 3
        },
        {
            "source": 479,
            "target": 46,
            "type": "cites",
            "value": 7
        },
        {
            "source": 479,
            "target": 300,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1439,
            "target": 1,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1439,
            "target": 111,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1440,
            "target": 1,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1440,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1,
            "target": 1440,
            "type": "cites",
            "value": 3
        },
        {
            "source": 52,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1441,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1441,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1442,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1442,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1443,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1443,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 46,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 486,
            "target": 1213,
            "type": "cites",
            "value": 3
        },
        {
            "source": 784,
            "target": 7,
            "type": "cites",
            "value": 5
        },
        {
            "source": 784,
            "target": 194,
            "type": "cites",
            "value": 6
        },
        {
            "source": 44,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 784,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 63,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 47,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 47,
            "target": 86,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1444,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1444,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1445,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1445,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 341,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1446,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1447,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 199,
            "type": "cites",
            "value": 6
        },
        {
            "source": 72,
            "target": 1000,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1448,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1448,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1449,
            "target": 1,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1449,
            "target": 111,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1450,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1338,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1304,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 197,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 577,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1451,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1451,
            "target": 1078,
            "type": "cites",
            "value": 5
        },
        {
            "source": 190,
            "target": 765,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1451,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 950,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 950,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 873,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 714,
            "target": 231,
            "type": "cites",
            "value": 5
        },
        {
            "source": 485,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 485,
            "target": 765,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1312,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 186,
            "type": "cites",
            "value": 11
        },
        {
            "source": 189,
            "target": 1452,
            "type": "cites",
            "value": 6
        },
        {
            "source": 189,
            "target": 1453,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1454,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1455,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1456,
            "target": 641,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1457,
            "target": 641,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1458,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 404,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1459,
            "target": 136,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1459,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 512,
            "target": 136,
            "type": "cites",
            "value": 8
        },
        {
            "source": 512,
            "target": 26,
            "type": "cites",
            "value": 9
        },
        {
            "source": 512,
            "target": 404,
            "type": "cites",
            "value": 6
        },
        {
            "source": 512,
            "target": 1296,
            "type": "cites",
            "value": 3
        },
        {
            "source": 511,
            "target": 1296,
            "type": "cites",
            "value": 3
        },
        {
            "source": 512,
            "target": 529,
            "type": "cites",
            "value": 4
        },
        {
            "source": 511,
            "target": 529,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1103,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1302,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1303,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1304,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1305,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1306,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 314,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 315,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 287,
            "type": "cites",
            "value": 5
        },
        {
            "source": 313,
            "target": 553,
            "type": "cites",
            "value": 5
        },
        {
            "source": 313,
            "target": 554,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 634,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 487,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 311,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 204,
            "type": "cites",
            "value": 11
        },
        {
            "source": 186,
            "target": 1196,
            "type": "cites",
            "value": 13
        },
        {
            "source": 740,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 641,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1460,
            "target": 1312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1461,
            "target": 1312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1462,
            "target": 1312,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1463,
            "target": 547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1463,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 711,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 711,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1464,
            "target": 231,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1464,
            "target": 558,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1464,
            "target": 504,
            "type": "cites",
            "value": 3
        },
        {
            "source": 405,
            "target": 558,
            "type": "cites",
            "value": 4
        },
        {
            "source": 487,
            "target": 1290,
            "type": "cites",
            "value": 3
        },
        {
            "source": 487,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 487,
            "target": 485,
            "type": "cites",
            "value": 10
        },
        {
            "source": 556,
            "target": 1290,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 485,
            "type": "cites",
            "value": 11
        },
        {
            "source": 1465,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1466,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 912,
            "target": 1089,
            "type": "cites",
            "value": 4
        },
        {
            "source": 912,
            "target": 1467,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1468,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1469,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1470,
            "target": 541,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1398,
            "target": 541,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1398,
            "target": 912,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1471,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 421,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 179,
            "target": 170,
            "type": "cites",
            "value": 12
        },
        {
            "source": 179,
            "target": 171,
            "type": "cites",
            "value": 12
        },
        {
            "source": 179,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 179,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 195,
            "target": 193,
            "type": "cites",
            "value": 7
        },
        {
            "source": 850,
            "target": 193,
            "type": "cites",
            "value": 4
        },
        {
            "source": 195,
            "target": 53,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1472,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1473,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1472,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1473,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1390,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 711,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 1474,
            "type": "cites",
            "value": 5
        },
        {
            "source": 125,
            "target": 1475,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1476,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1477,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 681,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1478,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1476,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1477,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 681,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1478,
            "target": 103,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1479,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 961,
            "type": "cites",
            "value": 3
        },
        {
            "source": 158,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 961,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 33,
            "type": "cites",
            "value": 4
        },
        {
            "source": 158,
            "target": 3,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 3,
            "type": "cites",
            "value": 4
        },
        {
            "source": 338,
            "target": 318,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 32,
            "type": "cites",
            "value": 8
        },
        {
            "source": 700,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 700,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 700,
            "target": 977,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 33,
            "target": 977,
            "type": "cites",
            "value": 3
        },
        {
            "source": 700,
            "target": 961,
            "type": "cites",
            "value": 5
        },
        {
            "source": 700,
            "target": 712,
            "type": "cites",
            "value": 4
        },
        {
            "source": 33,
            "target": 961,
            "type": "cites",
            "value": 5
        },
        {
            "source": 33,
            "target": 712,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 1420,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 1421,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 1422,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 310,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 307,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1480,
            "target": 200,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1480,
            "target": 215,
            "type": "cites",
            "value": 4
        },
        {
            "source": 310,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 215,
            "type": "cites",
            "value": 6
        },
        {
            "source": 307,
            "target": 200,
            "type": "cites",
            "value": 6
        },
        {
            "source": 307,
            "target": 215,
            "type": "cites",
            "value": 6
        },
        {
            "source": 310,
            "target": 997,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1480,
            "target": 515,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 654,
            "type": "cites",
            "value": 4
        },
        {
            "source": 310,
            "target": 515,
            "type": "cites",
            "value": 5
        },
        {
            "source": 307,
            "target": 654,
            "type": "cites",
            "value": 4
        },
        {
            "source": 307,
            "target": 515,
            "type": "cites",
            "value": 5
        },
        {
            "source": 185,
            "target": 169,
            "type": "cites",
            "value": 3
        },
        {
            "source": 185,
            "target": 91,
            "type": "cites",
            "value": 6
        },
        {
            "source": 185,
            "target": 182,
            "type": "cites",
            "value": 3
        },
        {
            "source": 276,
            "target": 1155,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1291,
            "target": 381,
            "type": "cites",
            "value": 4
        },
        {
            "source": 190,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 190,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 484,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 1481,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 1482,
            "type": "cites",
            "value": 4
        },
        {
            "source": 961,
            "target": 244,
            "type": "cites",
            "value": 4
        },
        {
            "source": 961,
            "target": 700,
            "type": "cites",
            "value": 4
        },
        {
            "source": 961,
            "target": 177,
            "type": "cites",
            "value": 4
        },
        {
            "source": 961,
            "target": 33,
            "type": "cites",
            "value": 6
        },
        {
            "source": 33,
            "target": 177,
            "type": "cites",
            "value": 9
        },
        {
            "source": 816,
            "target": 817,
            "type": "cites",
            "value": 5
        },
        {
            "source": 816,
            "target": 1105,
            "type": "cites",
            "value": 4
        },
        {
            "source": 816,
            "target": 300,
            "type": "cites",
            "value": 11
        },
        {
            "source": 817,
            "target": 816,
            "type": "cites",
            "value": 6
        },
        {
            "source": 817,
            "target": 1105,
            "type": "cites",
            "value": 4
        },
        {
            "source": 817,
            "target": 300,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1105,
            "target": 816,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1105,
            "target": 817,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1105,
            "target": 300,
            "type": "cites",
            "value": 6
        },
        {
            "source": 816,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 451,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 475,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 381,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 380,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 1483,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 288,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 287,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 0,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 957,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 958,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 479,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1312,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 711,
            "type": "cites",
            "value": 7
        },
        {
            "source": 475,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 83,
            "type": "cites",
            "value": 3
        },
        {
            "source": 63,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 341,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 282,
            "target": 515,
            "type": "cites",
            "value": 4
        },
        {
            "source": 200,
            "target": 1484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 1484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 215,
            "target": 62,
            "type": "cites",
            "value": 4
        },
        {
            "source": 4,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 784,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 44,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 61,
            "target": 194,
            "type": "cites",
            "value": 5
        },
        {
            "source": 46,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 55,
            "type": "cites",
            "value": 4
        },
        {
            "source": 765,
            "target": 541,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1485,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1486,
            "target": 52,
            "type": "cites",
            "value": 4
        },
        {
            "source": 84,
            "target": 52,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1487,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 370,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 193,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1441,
            "target": 245,
            "type": "cites",
            "value": 3
        },
        {
            "source": 87,
            "target": 520,
            "type": "cites",
            "value": 4
        },
        {
            "source": 87,
            "target": 521,
            "type": "cites",
            "value": 4
        },
        {
            "source": 537,
            "target": 625,
            "type": "cites",
            "value": 4
        },
        {
            "source": 810,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 916,
            "target": 625,
            "type": "cites",
            "value": 5
        },
        {
            "source": 916,
            "target": 540,
            "type": "cites",
            "value": 5
        },
        {
            "source": 916,
            "target": 765,
            "type": "cites",
            "value": 4
        },
        {
            "source": 916,
            "target": 541,
            "type": "cites",
            "value": 8
        },
        {
            "source": 185,
            "target": 170,
            "type": "cites",
            "value": 3
        },
        {
            "source": 185,
            "target": 171,
            "type": "cites",
            "value": 3
        },
        {
            "source": 185,
            "target": 475,
            "type": "cites",
            "value": 3
        },
        {
            "source": 597,
            "target": 598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 598,
            "target": 912,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 883,
            "type": "cites",
            "value": 4
        },
        {
            "source": 381,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 316,
            "type": "cites",
            "value": 8
        },
        {
            "source": 186,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 483,
            "target": 558,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 558,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1464,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 560,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 316,
            "type": "cites",
            "value": 8
        },
        {
            "source": 231,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 487,
            "target": 627,
            "type": "cites",
            "value": 6
        },
        {
            "source": 487,
            "target": 484,
            "type": "cites",
            "value": 9
        },
        {
            "source": 556,
            "target": 627,
            "type": "cites",
            "value": 8
        },
        {
            "source": 556,
            "target": 484,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1283,
            "target": 189,
            "type": "cites",
            "value": 10
        },
        {
            "source": 204,
            "target": 1454,
            "type": "cites",
            "value": 6
        },
        {
            "source": 822,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1488,
            "target": 204,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1489,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 26,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1490,
            "target": 1491,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 113,
            "type": "cites",
            "value": 6
        },
        {
            "source": 590,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 135,
            "target": 997,
            "type": "cites",
            "value": 3
        },
        {
            "source": 206,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 195,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 1216,
            "type": "cites",
            "value": 4
        },
        {
            "source": 304,
            "target": 1492,
            "type": "cites",
            "value": 3
        },
        {
            "source": 304,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 652,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 652,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 653,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 653,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 60,
            "target": 32,
            "type": "cites",
            "value": 3
        },
        {
            "source": 706,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 706,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 972,
            "target": 244,
            "type": "cites",
            "value": 10
        },
        {
            "source": 972,
            "target": 14,
            "type": "cites",
            "value": 6
        },
        {
            "source": 973,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 973,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 1220,
            "type": "cites",
            "value": 3
        },
        {
            "source": 972,
            "target": 102,
            "type": "cites",
            "value": 3
        },
        {
            "source": 972,
            "target": 421,
            "type": "cites",
            "value": 3
        },
        {
            "source": 972,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1493,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1493,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 440,
            "target": 170,
            "type": "cites",
            "value": 10
        },
        {
            "source": 440,
            "target": 171,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1494,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1494,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1495,
            "target": 170,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1495,
            "target": 171,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 232,
            "type": "cites",
            "value": 7
        },
        {
            "source": 300,
            "target": 372,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 812,
            "type": "cites",
            "value": 3
        },
        {
            "source": 198,
            "target": 966,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1327,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1327,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 974,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 974,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 975,
            "target": 244,
            "type": "cites",
            "value": 5
        },
        {
            "source": 975,
            "target": 14,
            "type": "cites",
            "value": 3
        },
        {
            "source": 974,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 975,
            "target": 103,
            "type": "cites",
            "value": 3
        },
        {
            "source": 310,
            "target": 113,
            "type": "cites",
            "value": 4
        },
        {
            "source": 310,
            "target": 1427,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1496,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 954,
            "target": 336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 954,
            "target": 874,
            "type": "cites",
            "value": 4
        },
        {
            "source": 954,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1291,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 596,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 1497,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 1497,
            "type": "cites",
            "value": 3
        },
        {
            "source": 421,
            "target": 244,
            "type": "cites",
            "value": 3
        },
        {
            "source": 421,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1047,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1498,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1327,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1327,
            "target": 1155,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 1155,
            "type": "cites",
            "value": 3
        },
        {
            "source": 783,
            "target": 193,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1499,
            "target": 193,
            "type": "cites",
            "value": 3
        },
        {
            "source": 187,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 290,
            "target": 1500,
            "type": "cites",
            "value": 5
        },
        {
            "source": 290,
            "target": 289,
            "type": "cites",
            "value": 4
        },
        {
            "source": 290,
            "target": 1501,
            "type": "cites",
            "value": 4
        },
        {
            "source": 290,
            "target": 1502,
            "type": "cites",
            "value": 4
        },
        {
            "source": 290,
            "target": 186,
            "type": "cites",
            "value": 6
        },
        {
            "source": 186,
            "target": 290,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1500,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 289,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1502,
            "type": "cites",
            "value": 3
        },
        {
            "source": 290,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1503,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1504,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1505,
            "type": "cites",
            "value": 5
        },
        {
            "source": 92,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 961,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 287,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1506,
            "target": 69,
            "type": "cites",
            "value": 3
        },
        {
            "source": 202,
            "target": 70,
            "type": "cites",
            "value": 3
        },
        {
            "source": 244,
            "target": 194,
            "type": "cites",
            "value": 3
        },
        {
            "source": 430,
            "target": 244,
            "type": "cites",
            "value": 6
        },
        {
            "source": 430,
            "target": 14,
            "type": "cites",
            "value": 4
        },
        {
            "source": 244,
            "target": 1507,
            "type": "cites",
            "value": 3
        },
        {
            "source": 205,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 205,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 179,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 179,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 860,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 860,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 833,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 834,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 1216,
            "type": "cites",
            "value": 3
        },
        {
            "source": 178,
            "target": 1508,
            "type": "cites",
            "value": 3
        },
        {
            "source": 357,
            "target": 46,
            "type": "cites",
            "value": 3
        },
        {
            "source": 515,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 515,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 580,
            "target": 1509,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 0,
            "target": 44,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 338,
            "type": "cites",
            "value": 7
        },
        {
            "source": 314,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 189,
            "target": 1510,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1378,
            "target": 596,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1378,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 596,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1078,
            "target": 627,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1078,
            "target": 485,
            "type": "cites",
            "value": 7
        },
        {
            "source": 486,
            "target": 596,
            "type": "cites",
            "value": 7
        },
        {
            "source": 486,
            "target": 765,
            "type": "cites",
            "value": 6
        },
        {
            "source": 486,
            "target": 485,
            "type": "cites",
            "value": 15
        },
        {
            "source": 486,
            "target": 1078,
            "type": "cites",
            "value": 10
        },
        {
            "source": 26,
            "target": 314,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1085,
            "target": 547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 598,
            "target": 540,
            "type": "cites",
            "value": 4
        },
        {
            "source": 598,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 1511,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 315,
            "type": "cites",
            "value": 3
        },
        {
            "source": 627,
            "target": 1078,
            "type": "cites",
            "value": 4
        },
        {
            "source": 627,
            "target": 553,
            "type": "cites",
            "value": 8
        },
        {
            "source": 627,
            "target": 485,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 1512,
            "type": "cites",
            "value": 4
        },
        {
            "source": 204,
            "target": 1513,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1407,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1503,
            "target": 1197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1514,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1514,
            "target": 1197,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1515,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1515,
            "target": 1197,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1516,
            "target": 186,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1516,
            "target": 1197,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1517,
            "target": 186,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1517,
            "target": 1197,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1504,
            "target": 1197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1517,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1518,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1197,
            "type": "cites",
            "value": 13
        },
        {
            "source": 1514,
            "target": 1195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1514,
            "target": 1196,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1515,
            "target": 1195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1515,
            "target": 1196,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1516,
            "target": 1195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1516,
            "target": 1196,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1517,
            "target": 1195,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1517,
            "target": 1196,
            "type": "cites",
            "value": 5
        },
        {
            "source": 186,
            "target": 1195,
            "type": "cites",
            "value": 14
        },
        {
            "source": 186,
            "target": 1310,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1311,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1517,
            "target": 1519,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1519,
            "type": "cites",
            "value": 9
        },
        {
            "source": 186,
            "target": 1520,
            "type": "cites",
            "value": 6
        },
        {
            "source": 186,
            "target": 1521,
            "type": "cites",
            "value": 3
        },
        {
            "source": 951,
            "target": 170,
            "type": "cites",
            "value": 5
        },
        {
            "source": 951,
            "target": 171,
            "type": "cites",
            "value": 5
        },
        {
            "source": 874,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 952,
            "target": 874,
            "type": "cites",
            "value": 3
        },
        {
            "source": 952,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 874,
            "target": 336,
            "type": "cites",
            "value": 3
        },
        {
            "source": 874,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 977,
            "type": "cites",
            "value": 3
        },
        {
            "source": 235,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 171,
            "target": 1132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 171,
            "target": 1256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 170,
            "target": 1132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 170,
            "target": 1256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1522,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1523,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 1524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1525,
            "target": 245,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1526,
            "target": 245,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1527,
            "target": 245,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1528,
            "target": 245,
            "type": "cites",
            "value": 4
        },
        {
            "source": 524,
            "target": 113,
            "type": "cites",
            "value": 6
        },
        {
            "source": 524,
            "target": 4,
            "type": "cites",
            "value": 7
        },
        {
            "source": 524,
            "target": 997,
            "type": "cites",
            "value": 5
        },
        {
            "source": 524,
            "target": 1427,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1529,
            "target": 7,
            "type": "cites",
            "value": 4
        },
        {
            "source": 63,
            "target": 26,
            "type": "cites",
            "value": 4
        },
        {
            "source": 451,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 112,
            "target": 62,
            "type": "cites",
            "value": 3
        },
        {
            "source": 126,
            "target": 34,
            "type": "cites",
            "value": 6
        },
        {
            "source": 126,
            "target": 369,
            "type": "cites",
            "value": 5
        },
        {
            "source": 4,
            "target": 194,
            "type": "cites",
            "value": 3
        },
        {
            "source": 421,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 102,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 102,
            "target": 52,
            "type": "cites",
            "value": 3
        },
        {
            "source": 300,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1530,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1531,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 103,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1210,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 1524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1210,
            "target": 92,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1210,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 92,
            "target": 1505,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1532,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1533,
            "target": 72,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 316,
            "type": "cites",
            "value": 7
        },
        {
            "source": 582,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 913,
            "type": "cites",
            "value": 5
        },
        {
            "source": 541,
            "target": 914,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1451,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 190,
            "target": 530,
            "type": "cites",
            "value": 6
        },
        {
            "source": 190,
            "target": 1534,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 1535,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 1536,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 543,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1537,
            "target": 1200,
            "type": "cites",
            "value": 3
        },
        {
            "source": 483,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1302,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1538,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1539,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1540,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1541,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 809,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1542,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1543,
            "target": 338,
            "type": "cites",
            "value": 4
        },
        {
            "source": 338,
            "target": 831,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 1544,
            "type": "cites",
            "value": 3
        },
        {
            "source": 289,
            "target": 290,
            "type": "cites",
            "value": 3
        },
        {
            "source": 289,
            "target": 1500,
            "type": "cites",
            "value": 3
        },
        {
            "source": 289,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 484,
            "type": "cites",
            "value": 5
        },
        {
            "source": 485,
            "target": 484,
            "type": "cites",
            "value": 5
        },
        {
            "source": 765,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 485,
            "target": 596,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1545,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1546,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1545,
            "target": 1547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1546,
            "target": 1547,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1545,
            "target": 1524,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1546,
            "target": 1524,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1548,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1549,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1501,
            "target": 1500,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1501,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1500,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1501,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1500,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1378,
            "target": 1078,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 765,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1078,
            "target": 1550,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 1551,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 1552,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1078,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1078,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1078,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 627,
            "target": 486,
            "type": "cites",
            "value": 6
        },
        {
            "source": 627,
            "target": 487,
            "type": "cites",
            "value": 6
        },
        {
            "source": 627,
            "target": 484,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1553,
            "target": 313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 712,
            "target": 4,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1164,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1164,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1554,
            "target": 125,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 310,
            "type": "cites",
            "value": 3
        },
        {
            "source": 3,
            "target": 4,
            "type": "cites",
            "value": 6
        },
        {
            "source": 3,
            "target": 997,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 997,
            "type": "cites",
            "value": 10
        },
        {
            "source": 4,
            "target": 1427,
            "type": "cites",
            "value": 7
        },
        {
            "source": 415,
            "target": 318,
            "type": "cites",
            "value": 4
        },
        {
            "source": 318,
            "target": 1555,
            "type": "cites",
            "value": 4
        },
        {
            "source": 415,
            "target": 91,
            "type": "cites",
            "value": 3
        },
        {
            "source": 524,
            "target": 876,
            "type": "cites",
            "value": 3
        },
        {
            "source": 524,
            "target": 69,
            "type": "cites",
            "value": 5
        },
        {
            "source": 524,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1556,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1518,
            "target": 186,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1518,
            "target": 1195,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1412,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1518,
            "target": 1197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1518,
            "target": 1196,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1518,
            "target": 1519,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 883,
            "type": "cites",
            "value": 7
        },
        {
            "source": 186,
            "target": 313,
            "type": "cites",
            "value": 4
        },
        {
            "source": 55,
            "target": 7,
            "type": "cites",
            "value": 7
        },
        {
            "source": 56,
            "target": 7,
            "type": "cites",
            "value": 3
        },
        {
            "source": 55,
            "target": 501,
            "type": "cites",
            "value": 3
        },
        {
            "source": 55,
            "target": 87,
            "type": "cites",
            "value": 4
        },
        {
            "source": 7,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 977,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 977,
            "target": 87,
            "type": "cites",
            "value": 9
        },
        {
            "source": 406,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 978,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 979,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1164,
            "target": 125,
            "type": "cites",
            "value": 4
        },
        {
            "source": 125,
            "target": 1557,
            "type": "cites",
            "value": 4
        },
        {
            "source": 416,
            "target": 1555,
            "type": "cites",
            "value": 4
        },
        {
            "source": 14,
            "target": 833,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 834,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1558,
            "target": 1559,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1337,
            "target": 1338,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1337,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1337,
            "target": 1261,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1261,
            "target": 1337,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1261,
            "target": 1338,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1261,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1337,
            "target": 1213,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1337,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1261,
            "target": 1213,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1261,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1560,
            "target": 416,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 479,
            "target": 1561,
            "type": "cites",
            "value": 3
        },
        {
            "source": 479,
            "target": 1562,
            "type": "cites",
            "value": 3
        },
        {
            "source": 14,
            "target": 311,
            "type": "cites",
            "value": 3
        },
        {
            "source": 700,
            "target": 686,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 686,
            "type": "cites",
            "value": 6
        },
        {
            "source": 92,
            "target": 1563,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 1390,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1338,
            "target": 1213,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1338,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 113,
            "target": 997,
            "type": "cites",
            "value": 8
        },
        {
            "source": 113,
            "target": 1427,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1313,
            "target": 4,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1313,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1313,
            "target": 33,
            "type": "cites",
            "value": 4
        },
        {
            "source": 7,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 34,
            "target": 32,
            "type": "cites",
            "value": 6
        },
        {
            "source": 369,
            "target": 32,
            "type": "cites",
            "value": 5
        },
        {
            "source": 34,
            "target": 369,
            "type": "cites",
            "value": 12
        },
        {
            "source": 369,
            "target": 34,
            "type": "cites",
            "value": 21
        },
        {
            "source": 32,
            "target": 87,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 191,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1564,
            "target": 33,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 569,
            "type": "cites",
            "value": 3
        },
        {
            "source": 46,
            "target": 1167,
            "type": "cites",
            "value": 3
        },
        {
            "source": 749,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 130,
            "target": 479,
            "type": "cites",
            "value": 6
        },
        {
            "source": 178,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 912,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 484,
            "type": "cites",
            "value": 5
        },
        {
            "source": 541,
            "target": 484,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1565,
            "target": 484,
            "type": "cites",
            "value": 4
        },
        {
            "source": 765,
            "target": 912,
            "type": "cites",
            "value": 3
        },
        {
            "source": 130,
            "target": 627,
            "type": "cites",
            "value": 6
        },
        {
            "source": 130,
            "target": 484,
            "type": "cites",
            "value": 9
        },
        {
            "source": 130,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 636,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1566,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1567,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1568,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1310,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1311,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 311,
            "target": 625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 311,
            "target": 541,
            "type": "cites",
            "value": 8
        },
        {
            "source": 311,
            "target": 540,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1500,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 290,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 289,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1500,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1501,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1502,
            "target": 524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1303,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1304,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 1467,
            "type": "cites",
            "value": 3
        },
        {
            "source": 300,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1569,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1570,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1571,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1572,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1569,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1570,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1571,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1572,
            "target": 486,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1569,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1570,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1571,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1572,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 485,
            "target": 313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 189,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1372,
            "target": 1104,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1372,
            "target": 300,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1104,
            "target": 300,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1372,
            "target": 1573,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1372,
            "target": 1511,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1372,
            "target": 232,
            "type": "cites",
            "value": 6
        },
        {
            "source": 404,
            "target": 1511,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1104,
            "target": 1573,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1104,
            "target": 1511,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1104,
            "target": 232,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1104,
            "target": 596,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1574,
            "target": 636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1575,
            "target": 636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 605,
            "target": 636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1576,
            "target": 636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 632,
            "target": 479,
            "type": "cites",
            "value": 7
        },
        {
            "source": 627,
            "target": 556,
            "type": "cites",
            "value": 4
        },
        {
            "source": 627,
            "target": 130,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1577,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1577,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1577,
            "target": 484,
            "type": "cites",
            "value": 3
        },
        {
            "source": 627,
            "target": 1578,
            "type": "cites",
            "value": 3
        },
        {
            "source": 627,
            "target": 1579,
            "type": "cites",
            "value": 4
        },
        {
            "source": 338,
            "target": 883,
            "type": "cites",
            "value": 5
        },
        {
            "source": 338,
            "target": 530,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 1580,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 550,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1086,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 1581,
            "type": "cites",
            "value": 3
        },
        {
            "source": 681,
            "target": 86,
            "type": "cites",
            "value": 3
        },
        {
            "source": 681,
            "target": 87,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1047,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1047,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 686,
            "target": 833,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1509,
            "target": 1582,
            "type": "cites",
            "value": 3
        },
        {
            "source": 311,
            "target": 204,
            "type": "cites",
            "value": 5
        },
        {
            "source": 204,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1362,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1428,
            "target": 87,
            "type": "cites",
            "value": 5
        },
        {
            "source": 712,
            "target": 87,
            "type": "cites",
            "value": 7
        },
        {
            "source": 712,
            "target": 86,
            "type": "cites",
            "value": 4
        },
        {
            "source": 32,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 484,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 524,
            "target": 46,
            "type": "cites",
            "value": 4
        },
        {
            "source": 524,
            "target": 68,
            "type": "cites",
            "value": 3
        },
        {
            "source": 524,
            "target": 70,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 1583,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1584,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1585,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1586,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1587,
            "type": "cites",
            "value": 3
        },
        {
            "source": 313,
            "target": 1588,
            "type": "cites",
            "value": 4
        },
        {
            "source": 313,
            "target": 1579,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1589,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 314,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 314,
            "target": 765,
            "type": "cites",
            "value": 3
        },
        {
            "source": 556,
            "target": 1078,
            "type": "cites",
            "value": 4
        },
        {
            "source": 556,
            "target": 1552,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1552,
            "type": "cites",
            "value": 4
        },
        {
            "source": 190,
            "target": 316,
            "type": "cites",
            "value": 9
        },
        {
            "source": 190,
            "target": 232,
            "type": "cites",
            "value": 6
        },
        {
            "source": 311,
            "target": 316,
            "type": "cites",
            "value": 5
        },
        {
            "source": 596,
            "target": 1078,
            "type": "cites",
            "value": 6
        },
        {
            "source": 765,
            "target": 1078,
            "type": "cites",
            "value": 6
        },
        {
            "source": 485,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 485,
            "target": 1078,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 1552,
            "type": "cites",
            "value": 3
        },
        {
            "source": 485,
            "target": 1552,
            "type": "cites",
            "value": 3
        },
        {
            "source": 485,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 127,
            "target": 596,
            "type": "cites",
            "value": 5
        },
        {
            "source": 127,
            "target": 1078,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 596,
            "type": "cites",
            "value": 6
        },
        {
            "source": 484,
            "target": 1078,
            "type": "cites",
            "value": 3
        },
        {
            "source": 127,
            "target": 627,
            "type": "cites",
            "value": 4
        },
        {
            "source": 484,
            "target": 627,
            "type": "cites",
            "value": 6
        },
        {
            "source": 127,
            "target": 484,
            "type": "cites",
            "value": 8
        },
        {
            "source": 484,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 127,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 127,
            "target": 486,
            "type": "cites",
            "value": 4
        },
        {
            "source": 127,
            "target": 487,
            "type": "cites",
            "value": 4
        },
        {
            "source": 484,
            "target": 486,
            "type": "cites",
            "value": 5
        },
        {
            "source": 484,
            "target": 556,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 487,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1590,
            "target": 1591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1592,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1591,
            "target": 1593,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1591,
            "target": 1594,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1595,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1596,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1597,
            "type": "cites",
            "value": 6
        },
        {
            "source": 821,
            "target": 1598,
            "type": "cites",
            "value": 4
        },
        {
            "source": 627,
            "target": 554,
            "type": "cites",
            "value": 5
        },
        {
            "source": 287,
            "target": 315,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 221,
            "target": 1491,
            "type": "cites",
            "value": 11
        },
        {
            "source": 1392,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1451,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 484,
            "type": "cites",
            "value": 11
        },
        {
            "source": 190,
            "target": 627,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1599,
            "type": "cites",
            "value": 4
        },
        {
            "source": 486,
            "target": 1600,
            "type": "cites",
            "value": 4
        },
        {
            "source": 486,
            "target": 1290,
            "type": "cites",
            "value": 4
        },
        {
            "source": 190,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1283,
            "target": 560,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1283,
            "target": 316,
            "type": "cites",
            "value": 13
        },
        {
            "source": 912,
            "target": 1598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 1087,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1601,
            "target": 1591,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1602,
            "target": 1591,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1591,
            "target": 1603,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1591,
            "target": 1604,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1591,
            "target": 1605,
            "type": "cites",
            "value": 8
        },
        {
            "source": 287,
            "target": 1606,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 1607,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 1088,
            "type": "cites",
            "value": 6
        },
        {
            "source": 338,
            "target": 1088,
            "type": "cites",
            "value": 6
        },
        {
            "source": 191,
            "target": 479,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1608,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1608,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1609,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1609,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 125,
            "target": 1610,
            "type": "cites",
            "value": 3
        },
        {
            "source": 997,
            "target": 4,
            "type": "cites",
            "value": 3
        },
        {
            "source": 997,
            "target": 113,
            "type": "cites",
            "value": 3
        },
        {
            "source": 34,
            "target": 833,
            "type": "cites",
            "value": 7
        },
        {
            "source": 34,
            "target": 834,
            "type": "cites",
            "value": 6
        },
        {
            "source": 369,
            "target": 833,
            "type": "cites",
            "value": 4
        },
        {
            "source": 369,
            "target": 834,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1277,
            "target": 300,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1216,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1438,
            "target": 33,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 300,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 311,
            "target": 1611,
            "type": "cites",
            "value": 3
        },
        {
            "source": 204,
            "target": 1611,
            "type": "cites",
            "value": 5
        },
        {
            "source": 33,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 316,
            "type": "cites",
            "value": 10
        },
        {
            "source": 186,
            "target": 560,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1612,
            "target": 1591,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1613,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1614,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1591,
            "target": 1615,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1591,
            "target": 1616,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1617,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1591,
            "target": 1618,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1619,
            "target": 381,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1619,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1620,
            "target": 381,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1620,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 484,
            "target": 1598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 190,
            "type": "cites",
            "value": 4
        },
        {
            "source": 485,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 1266,
            "type": "cites",
            "value": 10
        },
        {
            "source": 316,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 91,
            "target": 1621,
            "type": "cites",
            "value": 3
        },
        {
            "source": 311,
            "target": 1195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 311,
            "target": 186,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1512,
            "target": 541,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1622,
            "target": 1491,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1491,
            "target": 1623,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1623,
            "target": 1491,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1491,
            "target": 1624,
            "type": "cites",
            "value": 3
        },
        {
            "source": 186,
            "target": 530,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1625,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1626,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1625,
            "target": 1266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1625,
            "target": 316,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1626,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1625,
            "target": 189,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1625,
            "target": 545,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1627,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 553,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 553,
            "target": 554,
            "type": "cites",
            "value": 4
        },
        {
            "source": 187,
            "target": 34,
            "type": "cites",
            "value": 8
        },
        {
            "source": 187,
            "target": 369,
            "type": "cites",
            "value": 4
        },
        {
            "source": 300,
            "target": 833,
            "type": "cites",
            "value": 3
        },
        {
            "source": 300,
            "target": 834,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 980,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 982,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 983,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 984,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 985,
            "type": "cites",
            "value": 3
        },
        {
            "source": 33,
            "target": 986,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 1339,
            "type": "cites",
            "value": 5
        },
        {
            "source": 32,
            "target": 1382,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1256,
            "target": 1132,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1628,
            "target": 1132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1628,
            "target": 1256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1629,
            "target": 1132,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1629,
            "target": 1256,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1630,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1631,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1630,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 913,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 1632,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 1633,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 1634,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 1635,
            "type": "cites",
            "value": 3
        },
        {
            "source": 625,
            "target": 1636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1632,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1633,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1634,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1635,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1632,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1633,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1634,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1635,
            "type": "cites",
            "value": 3
        },
        {
            "source": 541,
            "target": 1636,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1637,
            "type": "cites",
            "value": 4
        },
        {
            "source": 765,
            "target": 1079,
            "type": "cites",
            "value": 4
        },
        {
            "source": 541,
            "target": 1637,
            "type": "cites",
            "value": 4
        },
        {
            "source": 311,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 311,
            "target": 1505,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 1089,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 1467,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 1598,
            "type": "cites",
            "value": 4
        },
        {
            "source": 487,
            "target": 479,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1638,
            "target": 560,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1638,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 300,
            "target": 632,
            "type": "cites",
            "value": 3
        },
        {
            "source": 627,
            "target": 1403,
            "type": "cites",
            "value": 4
        },
        {
            "source": 627,
            "target": 1404,
            "type": "cites",
            "value": 4
        },
        {
            "source": 627,
            "target": 1405,
            "type": "cites",
            "value": 4
        },
        {
            "source": 186,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1283,
            "target": 1639,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1640,
            "target": 316,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1641,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1641,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 530,
            "target": 92,
            "type": "cites",
            "value": 5
        },
        {
            "source": 530,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 530,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 190,
            "target": 92,
            "type": "cites",
            "value": 6
        },
        {
            "source": 190,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 530,
            "target": 186,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1642,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1087,
            "target": 553,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1197,
            "type": "cites",
            "value": 3
        },
        {
            "source": 299,
            "target": 1196,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1643,
            "target": 1491,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1643,
            "target": 1261,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1491,
            "target": 1261,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1491,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1644,
            "target": 381,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1644,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 590,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 590,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1053,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1053,
            "target": 369,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1524,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 980,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 981,
            "type": "cites",
            "value": 4
        },
        {
            "source": 32,
            "target": 982,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 983,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 984,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 985,
            "type": "cites",
            "value": 3
        },
        {
            "source": 32,
            "target": 986,
            "type": "cites",
            "value": 3
        },
        {
            "source": 4,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1645,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 540,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 486,
            "target": 1579,
            "type": "cites",
            "value": 3
        },
        {
            "source": 487,
            "target": 1579,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1505,
            "target": 92,
            "type": "cites",
            "value": 5
        },
        {
            "source": 484,
            "target": 553,
            "type": "cites",
            "value": 5
        },
        {
            "source": 484,
            "target": 554,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1646,
            "target": 537,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1646,
            "target": 542,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1454,
            "target": 186,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1647,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1454,
            "target": 92,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1454,
            "target": 1505,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1454,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1454,
            "target": 1195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1592,
            "target": 1604,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1592,
            "target": 1605,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 530,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 190,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1195,
            "target": 186,
            "type": "cites",
            "value": 3
        },
        {
            "source": 765,
            "target": 1598,
            "type": "cites",
            "value": 4
        },
        {
            "source": 34,
            "target": 1648,
            "type": "cites",
            "value": 3
        },
        {
            "source": 369,
            "target": 1648,
            "type": "cites",
            "value": 3
        },
        {
            "source": 34,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 34,
            "target": 125,
            "type": "cites",
            "value": 6
        },
        {
            "source": 34,
            "target": 1339,
            "type": "cites",
            "value": 6
        },
        {
            "source": 34,
            "target": 1382,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1216,
            "target": 981,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1492,
            "target": 1216,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1492,
            "target": 981,
            "type": "cites",
            "value": 5
        },
        {
            "source": 981,
            "target": 1216,
            "type": "cites",
            "value": 4
        },
        {
            "source": 543,
            "target": 92,
            "type": "cites",
            "value": 3
        },
        {
            "source": 189,
            "target": 1491,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 1266,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1649,
            "type": "cites",
            "value": 7
        },
        {
            "source": 883,
            "target": 1650,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1651,
            "target": 232,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 1652,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1651,
            "target": 1511,
            "type": "cites",
            "value": 4
        },
        {
            "source": 92,
            "target": 1511,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1653,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1654,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1051,
            "target": 1511,
            "type": "cites",
            "value": 4
        },
        {
            "source": 91,
            "target": 1511,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1283,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1639,
            "target": 189,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1639,
            "target": 204,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1283,
            "target": 1266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1639,
            "target": 316,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1639,
            "target": 1266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1655,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1656,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1266,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1657,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1657,
            "target": 369,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1421,
            "target": 981,
            "type": "cites",
            "value": 4
        },
        {
            "source": 537,
            "target": 1625,
            "type": "cites",
            "value": 3
        },
        {
            "source": 537,
            "target": 91,
            "type": "cites",
            "value": 5
        },
        {
            "source": 553,
            "target": 1524,
            "type": "cites",
            "value": 3
        },
        {
            "source": 487,
            "target": 130,
            "type": "cites",
            "value": 3
        },
        {
            "source": 596,
            "target": 553,
            "type": "cites",
            "value": 4
        },
        {
            "source": 485,
            "target": 553,
            "type": "cites",
            "value": 4
        },
        {
            "source": 130,
            "target": 485,
            "type": "cites",
            "value": 7
        },
        {
            "source": 92,
            "target": 711,
            "type": "cites",
            "value": 3
        },
        {
            "source": 231,
            "target": 1658,
            "type": "cites",
            "value": 4
        },
        {
            "source": 231,
            "target": 1652,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 1658,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 1652,
            "type": "cites",
            "value": 3
        },
        {
            "source": 635,
            "target": 1659,
            "type": "cites",
            "value": 4
        },
        {
            "source": 635,
            "target": 313,
            "type": "cites",
            "value": 5
        },
        {
            "source": 636,
            "target": 1659,
            "type": "cites",
            "value": 5
        },
        {
            "source": 636,
            "target": 313,
            "type": "cites",
            "value": 6
        },
        {
            "source": 635,
            "target": 1588,
            "type": "cites",
            "value": 4
        },
        {
            "source": 635,
            "target": 1579,
            "type": "cites",
            "value": 5
        },
        {
            "source": 636,
            "target": 1588,
            "type": "cites",
            "value": 6
        },
        {
            "source": 636,
            "target": 1579,
            "type": "cites",
            "value": 10
        },
        {
            "source": 1660,
            "target": 313,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1661,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1652,
            "target": 1658,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1652,
            "target": 1662,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 1663,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 635,
            "target": 711,
            "type": "cites",
            "value": 4
        },
        {
            "source": 636,
            "target": 711,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 1088,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1079,
            "target": 537,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1664,
            "target": 189,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1664,
            "target": 1266,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1664,
            "target": 316,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1630,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1665,
            "target": 1588,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1665,
            "target": 1579,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1666,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1660,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1667,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1668,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 34,
            "target": 1216,
            "type": "cites",
            "value": 3
        },
        {
            "source": 369,
            "target": 1216,
            "type": "cites",
            "value": 3
        },
        {
            "source": 674,
            "target": 34,
            "type": "cites",
            "value": 3
        },
        {
            "source": 369,
            "target": 1339,
            "type": "cites",
            "value": 5
        },
        {
            "source": 369,
            "target": 125,
            "type": "cites",
            "value": 5
        },
        {
            "source": 34,
            "target": 1669,
            "type": "cites",
            "value": 4
        },
        {
            "source": 369,
            "target": 1669,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1670,
            "target": 1555,
            "type": "cites",
            "value": 3
        },
        {
            "source": 72,
            "target": 1555,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1216,
            "target": 980,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1216,
            "target": 982,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1216,
            "target": 983,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1216,
            "target": 984,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1216,
            "target": 985,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1216,
            "target": 986,
            "type": "cites",
            "value": 4
        },
        {
            "source": 981,
            "target": 980,
            "type": "cites",
            "value": 4
        },
        {
            "source": 981,
            "target": 982,
            "type": "cites",
            "value": 4
        },
        {
            "source": 981,
            "target": 983,
            "type": "cites",
            "value": 4
        },
        {
            "source": 981,
            "target": 984,
            "type": "cites",
            "value": 4
        },
        {
            "source": 981,
            "target": 985,
            "type": "cites",
            "value": 4
        },
        {
            "source": 981,
            "target": 986,
            "type": "cites",
            "value": 4
        },
        {
            "source": 484,
            "target": 1403,
            "type": "cites",
            "value": 4
        },
        {
            "source": 484,
            "target": 1404,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 1405,
            "type": "cites",
            "value": 3
        },
        {
            "source": 484,
            "target": 1671,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 1672,
            "type": "cites",
            "value": 3
        },
        {
            "source": 485,
            "target": 1672,
            "type": "cites",
            "value": 3
        },
        {
            "source": 130,
            "target": 1672,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1197,
            "target": 1195,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1197,
            "target": 1196,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1197,
            "target": 1519,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1511,
            "target": 232,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1673,
            "target": 1536,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1088,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1674,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1675,
            "target": 479,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1266,
            "target": 1676,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1266,
            "target": 1677,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1266,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1678,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1078,
            "target": 766,
            "type": "cites",
            "value": 3
        },
        {
            "source": 766,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 22,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 233,
            "target": 338,
            "type": "cites",
            "value": 5
        },
        {
            "source": 233,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1679,
            "target": 338,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1679,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1679,
            "target": 1452,
            "type": "cites",
            "value": 6
        },
        {
            "source": 1679,
            "target": 1088,
            "type": "cites",
            "value": 3
        },
        {
            "source": 26,
            "target": 1452,
            "type": "cites",
            "value": 6
        },
        {
            "source": 26,
            "target": 1088,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1680,
            "type": "cites",
            "value": 7
        },
        {
            "source": 883,
            "target": 1681,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1682,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1683,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1684,
            "type": "cites",
            "value": 3
        },
        {
            "source": 883,
            "target": 1685,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1686,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1687,
            "target": 1452,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1452,
            "target": 1453,
            "type": "cites",
            "value": 8
        },
        {
            "source": 1452,
            "target": 338,
            "type": "cites",
            "value": 7
        },
        {
            "source": 1452,
            "target": 287,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1452,
            "target": 1598,
            "type": "cites",
            "value": 3
        },
        {
            "source": 338,
            "target": 1452,
            "type": "cites",
            "value": 8
        },
        {
            "source": 338,
            "target": 1453,
            "type": "cites",
            "value": 5
        },
        {
            "source": 189,
            "target": 1266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1216,
            "target": 1508,
            "type": "cites",
            "value": 9
        },
        {
            "source": 1492,
            "target": 1508,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1507,
            "target": 981,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1216,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 232,
            "target": 1452,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1688,
            "target": 1630,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1688,
            "target": 1689,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1688,
            "target": 1631,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1680,
            "target": 316,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1680,
            "target": 883,
            "type": "cites",
            "value": 6
        },
        {
            "source": 91,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1690,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1630,
            "target": 338,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1614,
            "target": 1605,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1614,
            "target": 1604,
            "type": "cites",
            "value": 3
        },
        {
            "source": 287,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1452,
            "target": 1088,
            "type": "cites",
            "value": 5
        },
        {
            "source": 883,
            "target": 1691,
            "type": "cites",
            "value": 5
        },
        {
            "source": 883,
            "target": 1692,
            "type": "cites",
            "value": 5
        },
        {
            "source": 883,
            "target": 91,
            "type": "cites",
            "value": 4
        },
        {
            "source": 883,
            "target": 1693,
            "type": "cites",
            "value": 5
        },
        {
            "source": 876,
            "target": 1694,
            "type": "cites",
            "value": 4
        },
        {
            "source": 232,
            "target": 545,
            "type": "cites",
            "value": 6
        },
        {
            "source": 232,
            "target": 1266,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1695,
            "target": 485,
            "type": "cites",
            "value": 3
        },
        {
            "source": 545,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 545,
            "target": 1453,
            "type": "cites",
            "value": 3
        },
        {
            "source": 232,
            "target": 1453,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1669,
            "target": 34,
            "type": "cites",
            "value": 5
        },
        {
            "source": 316,
            "target": 189,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 1676,
            "type": "cites",
            "value": 3
        },
        {
            "source": 316,
            "target": 1677,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 1695,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 1696,
            "type": "cites",
            "value": 3
        },
        {
            "source": 636,
            "target": 1697,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1671,
            "target": 1404,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1698,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1698,
            "target": 1453,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1508,
            "target": 34,
            "type": "cites",
            "value": 4
        },
        {
            "source": 1508,
            "target": 1216,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1699,
            "target": 1700,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1699,
            "target": 1701,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1650,
            "target": 883,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1649,
            "target": 883,
            "type": "cites",
            "value": 5
        },
        {
            "source": 1702,
            "target": 1453,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1702,
            "target": 1452,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1692,
            "target": 883,
            "type": "cites",
            "value": 3
        },
        {
            "source": 1624,
            "target": 1491,
            "type": "cites",
            "value": 3
        }
    ],
    "nodes": [
        {
            "name": "Michael Sedlmair",
            "value": 1600,
            "numPapers": 258,
            "cluster": "5",
            "visible": 1,
            "index": 0,
            "x": 7.0710678118654755,
            "y": 0,
            "vy": 0,
            "vx": 0,
            "r": 2.842256764536557,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Design Patterns for Situated Visualization in Augmented Reality",
                "DOI": "10.1109/tvcg.2023.3327398",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327398",
                "FirstPage": 1324,
                "LastPage": 1335,
                "PaperType": "J",
                "Abstract": "Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns which summarize common approaches of visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.",
                "AuthorNamesDeduped": "Benjamin Lee;Michael Sedlmair;Dieter Schmalstieg",
                "AuthorNames": "Benjamin Lee;Michael Sedlmair;Dieter Schmalstieg",
                "AuthorAffiliation": "University of Stuttgart, Germany;University of Stuttgart, Germany;Graz University of Technology and University of Stuttgart, Austria",
                "InternalReferences": "10.1109/tvcg.2021.3114835;10.1109/tvcg.2020.3030334;10.1109/tvcg.2020.3030450;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209386;10.1109/tvcg.2016.2598608;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Augmented reality,immersive analytics,situated visualization,design patterns,design space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 124,
                "DownloadsXplore": 736,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 0,
                "i": [
                    0
                ]
            }
        },
        {
            "name": "Tim Dwyer",
            "value": 744,
            "numPapers": 137,
            "cluster": "2",
            "visible": 1,
            "index": 1,
            "x": -9.03088751750192,
            "y": 8.273032735715967,
            "vy": 0,
            "vx": 0,
            "r": 1.8566493955094991,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics",
                "DOI": "10.1109/tvcg.2020.3030334",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030334",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative’ scenarios will require engaging features to draw users' attention, with intuitive, 'walk-up and use’ interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.",
                "AuthorNamesDeduped": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorNames": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2599107;10.1109/vast.2007.4389011;10.1109/vast.2010.5652880;10.1109/tvcg.2018.2865241;10.1109/vast.2007.4389006;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934538;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Data visualisation,tangible and embedded interaction,augmented reality,immersive analytics",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1968,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 460,
                "i": [
                    460
                ]
            }
        },
        {
            "name": "Ji Soo Yi",
            "value": 452,
            "numPapers": 62,
            "cluster": "5",
            "visible": 1,
            "index": 2,
            "x": 1.3823220809823638,
            "y": -15.750847141167634,
            "vy": 0,
            "vx": 0,
            "r": 1.5204375359815774,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Toward a Deeper Understanding of the Role of Interaction in Information Visualization",
                "DOI": "10.1109/tvcg.2007.70515",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70515",
                "FirstPage": 1224,
                "LastPage": 1231,
                "PaperType": "J",
                "Abstract": "Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.",
                "AuthorNamesDeduped": "Ji Soo Yi;Youn ah Kang;John T. Stasko;Julie A. Jacko",
                "AuthorNames": "Ji Soo Yi;Youn ah Kang;John Stasko",
                "AuthorAffiliation": "Health Systems Institute, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA and The Wallace H. Coulter Department of Biomedical Engineering, Emory University",
                "InternalReferences": "0.1109/visual.1994.346302;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559213;10.1109/visual.1991.175794;10.1109/infvis.2005.1532126;10.1109/infvis.2000.885091;10.1109/infvis.1999.801860;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Information visualization, interaction, interaction techniques, taxonomy, visual analytics",
                "AminerCitationCount": 1149,
                "CitationCountCrossRef": 654,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 11768,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2085,
                "i": [
                    2085
                ]
            }
        },
        {
            "name": "Youn ah Kang",
            "value": 419,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 3,
            "x": 11.382848792909423,
            "y": 14.846910566099618,
            "vy": 0,
            "vx": 0,
            "r": 1.482440990213011,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Toward a Deeper Understanding of the Role of Interaction in Information Visualization",
                "DOI": "10.1109/tvcg.2007.70515",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70515",
                "FirstPage": 1224,
                "LastPage": 1231,
                "PaperType": "J",
                "Abstract": "Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.",
                "AuthorNamesDeduped": "Ji Soo Yi;Youn ah Kang;John T. Stasko;Julie A. Jacko",
                "AuthorNames": "Ji Soo Yi;Youn ah Kang;John Stasko",
                "AuthorAffiliation": "Health Systems Institute, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA and The Wallace H. Coulter Department of Biomedical Engineering, Emory University",
                "InternalReferences": "0.1109/visual.1994.346302;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559213;10.1109/visual.1991.175794;10.1109/infvis.2005.1532126;10.1109/infvis.2000.885091;10.1109/infvis.1999.801860;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Information visualization, interaction, interaction techniques, taxonomy, visual analytics",
                "AminerCitationCount": 1149,
                "CitationCountCrossRef": 654,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 11768,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2085,
                "i": [
                    2085
                ]
            }
        },
        {
            "name": "John T. Stasko",
            "value": 2152,
            "numPapers": 258,
            "cluster": "5",
            "visible": 1,
            "index": 4,
            "x": -20.88892748977138,
            "y": -3.694957148205299,
            "vy": 0,
            "vx": 0,
            "r": 3.4778353483016695,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Toward a Deeper Understanding of the Role of Interaction in Information Visualization",
                "DOI": "10.1109/tvcg.2007.70515",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70515",
                "FirstPage": 1224,
                "LastPage": 1231,
                "PaperType": "J",
                "Abstract": "Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.",
                "AuthorNamesDeduped": "Ji Soo Yi;Youn ah Kang;John T. Stasko;Julie A. Jacko",
                "AuthorNames": "Ji Soo Yi;Youn ah Kang;John Stasko",
                "AuthorAffiliation": "Health Systems Institute, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA and The Wallace H. Coulter Department of Biomedical Engineering, Emory University",
                "InternalReferences": "0.1109/visual.1994.346302;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559213;10.1109/visual.1991.175794;10.1109/infvis.2005.1532126;10.1109/infvis.2000.885091;10.1109/infvis.1999.801860;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Information visualization, interaction, interaction techniques, taxonomy, visual analytics",
                "AminerCitationCount": 1149,
                "CitationCountCrossRef": 654,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 11768,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2085,
                "i": [
                    2085
                ]
            }
        },
        {
            "name": "Julie A. Jacko",
            "value": 288,
            "numPapers": 7,
            "cluster": "5",
            "visible": 1,
            "index": 5,
            "x": 19.78781566111266,
            "y": -12.587388583889217,
            "vy": 0,
            "vx": 0,
            "r": 1.3316062176165804,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Toward a Deeper Understanding of the Role of Interaction in Information Visualization",
                "DOI": "10.1109/tvcg.2007.70515",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70515",
                "FirstPage": 1224,
                "LastPage": 1231,
                "PaperType": "J",
                "Abstract": "Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.",
                "AuthorNamesDeduped": "Ji Soo Yi;Youn ah Kang;John T. Stasko;Julie A. Jacko",
                "AuthorNames": "Ji Soo Yi;Youn ah Kang;John Stasko",
                "AuthorAffiliation": "Health Systems Institute, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA and The Wallace H. Coulter Department of Biomedical Engineering, Emory University",
                "InternalReferences": "0.1109/visual.1994.346302;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559213;10.1109/visual.1991.175794;10.1109/infvis.2005.1532126;10.1109/infvis.2000.885091;10.1109/infvis.1999.801860;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Information visualization, interaction, interaction techniques, taxonomy, visual analytics",
                "AminerCitationCount": 1149,
                "CitationCountCrossRef": 654,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 11768,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2085,
                "i": [
                    2085
                ]
            }
        },
        {
            "name": "Matthew Kay 0001",
            "value": 345,
            "numPapers": 59,
            "cluster": "5",
            "visible": 1,
            "index": 6,
            "x": -6.618637082526906,
            "y": 24.621000044064004,
            "vy": 0,
            "vx": 0,
            "r": 1.3972366148531952,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ggdist: Visualizations of Distributions and Uncertainty in the Grammar of Graphics",
                "DOI": "10.1109/tvcg.2023.3327195",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327195",
                "FirstPage": 414,
                "LastPage": 424,
                "PaperType": "J",
                "Abstract": "The grammar of graphics is ubiquitous, providing the foundation for a variety of popular visualization tools and toolkits. Yet support for uncertainty visualization in the grammar graphics—beyond simple variations of error bars, uncertainty bands, and density plots—remains rudimentary. Research in uncertainty visualization has developed a rich variety of improved uncertainty visualizations, most of which are difficult to create in existing grammar of graphics implementations. ggdist, an extension to the popular ggplot2 grammar of graphics toolkit, is an attempt to rectify this situation. ggdist unifies a variety of uncertainty visualization types through the lens of distributional visualization, allowing functions of distributions to be mapped to directly to visual channels (aesthetics), making it straightforward to express a variety of (sometimes weird!) uncertainty visualization types. This distributional lens also offers a way to unify Bayesian and frequentist uncertainty visualization by formalizing the latter with the help of confidence distributions. In this paper, I offer a description of this uncertainty visualization paradigm and lessons learned from its development and adoption: ggdist has existed in some form for about six years (originally as part of the tidybayes R package for post-processing Bayesian models), and it has evolved substantially over that time, with several rewrites and API re-organizations as it changed in response to user feedback and expanded to cover increasing varieties of uncertainty visualization types. Ultimately, given the huge expressive power of the grammar of graphics and the popularity of tools built on it, I hope a catalog of my experience with ggdist will provide a catalyst for further improvements to formalizations and implementations of uncertainty visualization in grammar of graphics ecosystems. A free copy of this paper is available at https://osf.io/2gsz6. All supplemental materials are available at https://github.com/mjskay/ggdist-paper and are archived on Zenodo at doi:10.5281/zenodo.7770984.",
                "AuthorNamesDeduped": "Matthew Kay 0001",
                "AuthorNames": "Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346298;10.1109/tvcg.2013.227;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865193;10.1109/tvcg.2014.2346455;10.1109/tvcg.2009.111;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2011.227",
                "AuthorKeywords": "Uncertainty visualization,probability distributions,confidence distributions,grammar of graphics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1,
                "i": [
                    1
                ]
            }
        },
        {
            "name": "Jeffrey Heer",
            "value": 3474,
            "numPapers": 199,
            "cluster": "5",
            "visible": 1,
            "index": 7,
            "x": -12.62245871740517,
            "y": -24.303776166007665,
            "vy": 0,
            "vx": 0,
            "r": 5,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "D³ Data-Driven Documents",
                "DOI": "10.1109/tvcg.2011.185",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.185",
                "FirstPage": 2301,
                "LastPage": 2309,
                "PaperType": "J",
                "Abstract": "Data-Driven Documents (D3) is a novel representation-transparent approach to visualization for the web. Rather than hide the underlying scenegraph within a toolkit-specific abstraction, D3 enables direct inspection and manipulation of a native representation: the standard document object model (DOM). With D3, designers selectively bind input data to arbitrary document elements, applying dynamic transforms to both generate and modify content. We show how representational transparency improves expressiveness and better integrates with developer tools than prior approaches, while offering comparable notational efficiency and retaining powerful declarative components. Immediate evaluation of operators further simplifies debugging and allows iterative development. Additionally, we demonstrate how D3 transforms naturally enable animation and interaction with dramatic performance improvements over intermediate representations.",
                "AuthorNamesDeduped": "Michael Bostock;Vadim Ogievetsky;Jeffrey Heer",
                "AuthorNames": "Michael Bostock;Vadim Ogievetsky;Jeffrey Heer",
                "AuthorAffiliation": "Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA",
                "InternalReferences": "0.1109/infvis.2000.885091;10.1109/infvis.2000.885098;10.1109/tvcg.2010.144;10.1109/tvcg.2009.174;10.1109/infvis.2004.12;10.1109/tvcg.2006.178;10.1109/infvis.2005.1532122;10.1109/tvcg.2008.166;10.1109/infvis.2004.64;10.1109/tvcg.2007.70539",
                "AuthorKeywords": "Information visualization, user interfaces, toolkits, 2D graphics",
                "AminerCitationCount": 3795,
                "CitationCountCrossRef": 2061,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 10871,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1535,
                "i": [
                    1535
                ]
            }
        },
        {
            "name": "Michael Correll",
            "value": 380,
            "numPapers": 90,
            "cluster": "5",
            "visible": 1,
            "index": 8,
            "x": 27.3856864633483,
            "y": 10.001208773502414,
            "vy": 0,
            "vx": 0,
            "r": 1.4375359815774322,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error",
                "DOI": "10.1109/tvcg.2014.2346298",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346298",
                "FirstPage": 2142,
                "LastPage": 2151,
                "PaperType": "J",
                "Abstract": "When making an inference or comparison with uncertain, noisy, or incomplete data, measurement error and confidence intervals can be as important for judgment as the actual mean values of different groups. These often misunderstood statistical quantities are frequently represented by bar charts with error bars. This paper investigates drawbacks with this standard encoding, and considers a set of alternatives designed to more effectively communicate the implications of mean and error data to a general audience, drawing from lessons learned from the use of visual statistics in the information visualization community. We present a series of crowd-sourced experiments that confirm that the encoding of mean and error significantly changes how viewers make decisions about uncertain data. Careful consideration of design tradeoffs in the visual presentation of data results in human reasoning that is more consistently aligned with statistical inferences. We suggest the use of gradient plots (which use transparency to encode uncertainty) and violin plots (which use width) as better alternatives for inferential tasks than bar charts with error bars.",
                "AuthorNamesDeduped": "Michael Correll;Michael Gleicher",
                "AuthorNames": "Michael Correll;Michael Gleicher",
                "AuthorAffiliation": "Department of Computer Sciences, University of Wisconsin-Madison;Department of Computer Sciences, University of Wisconsin-Madison",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2012.199;10.1109/tvcg.2012.262;10.1109/tvcg.2011.175;10.1109/tvcg.2012.279",
                "AuthorKeywords": "Visual statistics, information visualization, crowd-sourcing, empirical evaluation",
                "AminerCitationCount": 237,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2758,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1165,
                "i": [
                    1165
                ]
            }
        },
        {
            "name": "Michael Gleicher",
            "value": 599,
            "numPapers": 109,
            "cluster": "5",
            "visible": 1,
            "index": 9,
            "x": -28.490243449190146,
            "y": 11.760358336627238,
            "vy": 0,
            "vx": 0,
            "r": 1.6896948762233737,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error",
                "DOI": "10.1109/tvcg.2014.2346298",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346298",
                "FirstPage": 2142,
                "LastPage": 2151,
                "PaperType": "J",
                "Abstract": "When making an inference or comparison with uncertain, noisy, or incomplete data, measurement error and confidence intervals can be as important for judgment as the actual mean values of different groups. These often misunderstood statistical quantities are frequently represented by bar charts with error bars. This paper investigates drawbacks with this standard encoding, and considers a set of alternatives designed to more effectively communicate the implications of mean and error data to a general audience, drawing from lessons learned from the use of visual statistics in the information visualization community. We present a series of crowd-sourced experiments that confirm that the encoding of mean and error significantly changes how viewers make decisions about uncertain data. Careful consideration of design tradeoffs in the visual presentation of data results in human reasoning that is more consistently aligned with statistical inferences. We suggest the use of gradient plots (which use transparency to encode uncertainty) and violin plots (which use width) as better alternatives for inferential tasks than bar charts with error bars.",
                "AuthorNamesDeduped": "Michael Correll;Michael Gleicher",
                "AuthorNames": "Michael Correll;Michael Gleicher",
                "AuthorAffiliation": "Department of Computer Sciences, University of Wisconsin-Madison;Department of Computer Sciences, University of Wisconsin-Madison",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2012.199;10.1109/tvcg.2012.262;10.1109/tvcg.2011.175;10.1109/tvcg.2012.279",
                "AuthorKeywords": "Visual statistics, information visualization, crowd-sourcing, empirical evaluation",
                "AminerCitationCount": 237,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2758,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1165,
                "i": [
                    1165
                ]
            }
        },
        {
            "name": "Alex Kale",
            "value": 223,
            "numPapers": 37,
            "cluster": "5",
            "visible": 1,
            "index": 10,
            "x": 13.734179949820856,
            "y": -29.34914481047001,
            "vy": 0,
            "vx": 0,
            "r": 1.2567645365572826,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data",
                "DOI": "10.1109/tvcg.2018.2864909",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864909",
                "FirstPage": 892,
                "LastPage": 902,
                "PaperType": "J",
                "Abstract": "Animated representations of outcomes drawn from distributions (hypothetical outcome plots, or HOPs) are used in the media and other public venues to communicate uncertainty. HOPs greatly improve multivariate probability estimation over conventional static uncertainty visualizations and leverage the ability of the visual system to quickly, accurately, and automatically process the summary statistical properties of ensembles. However, it is unclear how well HOPs support applied tasks resembling real world judgments posed in uncertainty communication. We identify and motivate an appropriate task to investigate realistic judgments of uncertainty in the public domain through a qualitative analysis of uncertainty visualizations in the news. We contribute two crowdsourced experiments comparing the effectiveness of HOPs, error bars, and line ensembles for supporting perceptual decision-making from visualized uncertainty. Participants infer which of two possible underlying trends is more likely to have produced a sample of time series data by referencing uncertainty visualizations which depict the two trends with variability due to sampling error. By modeling each participant's accuracy as a function of the level of evidence presented over many repeated judgments, we find that observers are able to correctly infer the underlying trend in samples conveying a lower level of evidence when using HOPs rather than static aggregate uncertainty visualizations as a decision aid. Modeling approaches like ours contribute theoretically grounded and richly descriptive accounts of user perceptions to visualization evaluation.",
                "AuthorNamesDeduped": "Alex Kale;Francis Nguyen;Matthew Kay 0001;Jessica Hullman",
                "AuthorNames": "Alex Kale;Francis Nguyen;Matthew Kay;Jessica Hullman",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Michigan;Northwestern University",
                "InternalReferences": "0.1109/tvcg.2017.2743898;10.1109/tvcg.2007.70518;10.1109/tvcg.2017.2744359;10.1109/tvcg.2011.175;10.1109/tvcg.2014.2346298",
                "AuthorKeywords": "uncertainty visualization,hypothetical outcome plots,psychometric functions",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1155,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 656,
                "i": [
                    656
                ]
            }
        },
        {
            "name": "Francis Nguyen",
            "value": 89,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 11,
            "x": 10.149209636450301,
            "y": 32.35727960993298,
            "vy": 0,
            "vx": 0,
            "r": 1.102475532527346,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data",
                "DOI": "10.1109/tvcg.2018.2864909",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864909",
                "FirstPage": 892,
                "LastPage": 902,
                "PaperType": "J",
                "Abstract": "Animated representations of outcomes drawn from distributions (hypothetical outcome plots, or HOPs) are used in the media and other public venues to communicate uncertainty. HOPs greatly improve multivariate probability estimation over conventional static uncertainty visualizations and leverage the ability of the visual system to quickly, accurately, and automatically process the summary statistical properties of ensembles. However, it is unclear how well HOPs support applied tasks resembling real world judgments posed in uncertainty communication. We identify and motivate an appropriate task to investigate realistic judgments of uncertainty in the public domain through a qualitative analysis of uncertainty visualizations in the news. We contribute two crowdsourced experiments comparing the effectiveness of HOPs, error bars, and line ensembles for supporting perceptual decision-making from visualized uncertainty. Participants infer which of two possible underlying trends is more likely to have produced a sample of time series data by referencing uncertainty visualizations which depict the two trends with variability due to sampling error. By modeling each participant's accuracy as a function of the level of evidence presented over many repeated judgments, we find that observers are able to correctly infer the underlying trend in samples conveying a lower level of evidence when using HOPs rather than static aggregate uncertainty visualizations as a decision aid. Modeling approaches like ours contribute theoretically grounded and richly descriptive accounts of user perceptions to visualization evaluation.",
                "AuthorNamesDeduped": "Alex Kale;Francis Nguyen;Matthew Kay 0001;Jessica Hullman",
                "AuthorNames": "Alex Kale;Francis Nguyen;Matthew Kay;Jessica Hullman",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Michigan;Northwestern University",
                "InternalReferences": "0.1109/tvcg.2017.2743898;10.1109/tvcg.2007.70518;10.1109/tvcg.2017.2744359;10.1109/tvcg.2011.175;10.1109/tvcg.2014.2346298",
                "AuthorKeywords": "uncertainty visualization,hypothetical outcome plots,psychometric functions",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1155,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 656,
                "i": [
                    656
                ]
            }
        },
        {
            "name": "Jessica Hullman",
            "value": 936,
            "numPapers": 196,
            "cluster": "5",
            "visible": 1,
            "index": 12,
            "x": -30.589835678756263,
            "y": -17.727435041389672,
            "vy": 0,
            "vx": 0,
            "r": 2.0777202072538863,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data",
                "DOI": "10.1109/tvcg.2018.2864909",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864909",
                "FirstPage": 892,
                "LastPage": 902,
                "PaperType": "J",
                "Abstract": "Animated representations of outcomes drawn from distributions (hypothetical outcome plots, or HOPs) are used in the media and other public venues to communicate uncertainty. HOPs greatly improve multivariate probability estimation over conventional static uncertainty visualizations and leverage the ability of the visual system to quickly, accurately, and automatically process the summary statistical properties of ensembles. However, it is unclear how well HOPs support applied tasks resembling real world judgments posed in uncertainty communication. We identify and motivate an appropriate task to investigate realistic judgments of uncertainty in the public domain through a qualitative analysis of uncertainty visualizations in the news. We contribute two crowdsourced experiments comparing the effectiveness of HOPs, error bars, and line ensembles for supporting perceptual decision-making from visualized uncertainty. Participants infer which of two possible underlying trends is more likely to have produced a sample of time series data by referencing uncertainty visualizations which depict the two trends with variability due to sampling error. By modeling each participant's accuracy as a function of the level of evidence presented over many repeated judgments, we find that observers are able to correctly infer the underlying trend in samples conveying a lower level of evidence when using HOPs rather than static aggregate uncertainty visualizations as a decision aid. Modeling approaches like ours contribute theoretically grounded and richly descriptive accounts of user perceptions to visualization evaluation.",
                "AuthorNamesDeduped": "Alex Kale;Francis Nguyen;Matthew Kay 0001;Jessica Hullman",
                "AuthorNames": "Alex Kale;Francis Nguyen;Matthew Kay;Jessica Hullman",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Michigan;Northwestern University",
                "InternalReferences": "0.1109/tvcg.2017.2743898;10.1109/tvcg.2007.70518;10.1109/tvcg.2017.2744359;10.1109/tvcg.2011.175;10.1109/tvcg.2014.2346298",
                "AuthorKeywords": "uncertainty visualization,hypothetical outcome plots,psychometric functions",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1155,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 656,
                "i": [
                    656
                ]
            }
        },
        {
            "name": "Yingchaojie Feng",
            "value": 9,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 13,
            "x": 35.88535934290564,
            "y": -7.889295585192323,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Huamin Qu",
            "value": 2754,
            "numPapers": 749,
            "cluster": "1",
            "visible": 1,
            "index": 14,
            "x": -21.900276194166445,
            "y": 31.150889274934457,
            "vy": 0,
            "vx": 0,
            "r": 4.17098445595855,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Exploring Interactions with Printed Data Visualizations in Augmented Reality",
                "DOI": "10.1109/tvcg.2022.3209386",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209386",
                "FirstPage": 418,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.",
                "AuthorNamesDeduped": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorNames": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, USA;Harvard University, USA;Carnegie Mellon University, USA;Hong Kong University of Science and Technology, Hong Kong, USA;Hong Kong University of Science and Technology, Hong Kong, USA;University of Edinburgh, United Kingdom;Hong Kong University of Science and Technology, Hong Kong, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467201;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934283;10.1109/tvcg.2020.3030334;10.1109/tvcg.2013.121;10.1109/tvcg.2013.134;10.1109/tvcg.2017.2744319;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2020.3028948;10.1109/tvcg.2010.177;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.237;10.1109/tvcg.2020.3030392;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2599211",
                "AuthorKeywords": "Interaction design,augmented reality,paper interaction,tangible user interface,printed data visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 1055,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 147,
                "i": [
                    147
                ]
            }
        },
        {
            "name": "Xingbo Wang 0001",
            "value": 146,
            "numPapers": 62,
            "cluster": "3",
            "visible": 1,
            "index": 15,
            "x": -5.05947091685981,
            "y": -39.04358787357342,
            "vy": 0,
            "vx": 0,
            "r": 1.1681059297639609,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Jianben He",
            "value": 69,
            "numPapers": 32,
            "cluster": "1",
            "visible": 1,
            "index": 16,
            "x": 31.060189025756014,
            "y": 26.17756019349981,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VideoPro: A Visual Analytics Approach for Interactive Video Programming",
                "DOI": "10.1109/tvcg.2023.3326586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326586",
                "FirstPage": 87,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "Constructing supervised machine learning models for real-world video analysis require substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.",
                "AuthorNamesDeduped": "Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorNames": "Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuang University, Chengdu, China;Sichuang University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Interactive machine learning,data programming,video exploration and analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 381,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 31,
                "i": [
                    31
                ]
            }
        },
        {
            "name": "Kamkwai Wong",
            "value": 47,
            "numPapers": 41,
            "cluster": "3",
            "visible": 1,
            "index": 17,
            "x": -41.7972782028575,
            "y": 1.7284486781313184,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Sijia Wang",
            "value": 9,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 18,
            "x": 30.487905903426476,
            "y": -30.339538454363694,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Yuhong Lu",
            "value": 9,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 19,
            "x": -2.039759023075995,
            "y": 44.11166946656837,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Minfeng Zhu",
            "value": 25,
            "numPapers": 87,
            "cluster": "2",
            "visible": 1,
            "index": 20,
            "x": -29.009340345864786,
            "y": -34.762884988127524,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Baicheng Wang",
            "value": 9,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 21,
            "x": 45.95399818121157,
            "y": 6.183045460062862,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Wei Chen 0001",
            "value": 1339,
            "numPapers": 538,
            "cluster": "3",
            "visible": 1,
            "index": 22,
            "x": -38.93672971725283,
            "y": 27.091162376789978,
            "vy": 0,
            "vx": 0,
            "r": 2.5417386298215314,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation",
                "DOI": "10.1109/tvcg.2023.3327168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327168",
                "FirstPage": 295,
                "LastPage": 305,
                "PaperType": "J",
                "Abstract": "Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",
                "AuthorNamesDeduped": "Yingchaojie Feng;Xingbo Wang 0001;Kamkwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen 0001",
                "AuthorNames": "Yingchaojie Feng;Xingbo Wang;Kam Kwai Wong;Sijia Wang;Yuhong Lu;Minfeng Zhu;Baicheng Wang;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2006.187;10.1109/tvcg.2023.3326586;10.1109/tvcg.2020.3030370;10.1109/tvcg.2021.3114876;10.1109/tvcg.2022.3209479;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209423;10.1109/vast.2006.261425;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Prompt engineering,text-to-image generation,image visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 1065,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2,
                "i": [
                    2
                ]
            }
        },
        {
            "name": "Yong Wang 0021",
            "value": 486,
            "numPapers": 94,
            "cluster": "3",
            "visible": 1,
            "index": 23,
            "x": 10.639754127738415,
            "y": -47.294773834973284,
            "vy": 0,
            "vx": 0,
            "r": 1.5595854922279793,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "TaxThemis: Interactive Mining and Exploration of Suspicious Tax Evasion Groups",
                "DOI": "10.1109/tvcg.2020.3030370",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030370",
                "FirstPage": 849,
                "LastPage": 859,
                "PaperType": "J",
                "Abstract": "Tax evasion is a serious economic problem for many countries, as it can undermine the government's tax system and lead to an unfair business competition environment. Recent research has applied data analytics techniques to analyze and detect tax evasion behaviors of individual taxpayers. However, they have failed to support the analysis and exploration of the related party transaction tax evasion (RPTTE) behaviors (e.g., transfer pricing), where a group of taxpayers is involved. In this paper, we present TaxThemis, an interactive visual analytics system to help tax officers mine and explore suspicious tax evasion groups through analyzing heterogeneous tax-related data. A taxpayer network is constructed and fused with the respective trade network to detect suspicious RPTTE groups. Rich visualizations are designed to facilitate the exploration and investigation of suspicious transactions between related taxpayers with profit and topological data analysis. Specifically, we propose a calendar heatmap with a carefully-designed encoding scheme to intuitively show the evidence of transferring revenue through related party transactions. We demonstrate the usefulness and effectiveness of TaxThemis through two case studies on real-world tax-related data and interviews with domain experts.",
                "AuthorNamesDeduped": "Yating Lin;Kamkwai Wong;Yong Wang 0021;Rong Zhang;Bo Dong 0001;Huamin Qu;Qinghua Zheng",
                "AuthorNames": "Yating Lin;Kamkwai Wong;Yong Wang;Rong Zhang;Bo Dong;Huamin Qu;Qinghua Zheng",
                "AuthorAffiliation": "MOEKLINNS Lab, Xi 'an Jiaotong University;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;National Engineering Lab of Big Data Analytics, Xi'an Jiaotong University;Hong Kong University of Science and Technology;MOEKLINNS Lab, Xi 'an Jiaotong University",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/vast.2007.4389009;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2744758;10.1109/vast.2012.6400491;10.1109/tvcg.2017.2744898;10.1109/tvcg.2014.2346441;10.1109/tvcg.2018.2864844;10.1109/tvcg.2014.2346913;10.1109/tvcg.2018.2864814",
                "AuthorKeywords": "Visual Analytics,Tax Network,Tax Evasion Detection,Anomaly detection,Multidimensional data",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 786,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 482,
                "i": [
                    482
                ]
            }
        },
        {
            "name": "Nils Gehlenborg",
            "value": 497,
            "numPapers": 127,
            "cluster": "5",
            "visible": 1,
            "index": 24,
            "x": 24.609197771495488,
            "y": 42.94633145035117,
            "vy": 0,
            "vx": 0,
            "r": 1.572251007484168,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Gosling: A Grammar-based Toolkit for Scalable and Interactive Genomics Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114876",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114876",
                "FirstPage": 140,
                "LastPage": 150,
                "PaperType": "J",
                "Abstract": "The combination of diverse data types and analysis tasks in genomics has resulted in the development of a wide range of visualization techniques and tools. However, most existing tools are tailored to a specific problem or data type and offer limited customization, making it challenging to optimize visualizations for new analysis tasks or datasets. To address this challenge, we designed Gosling-a grammar for interactive and scalable genomics data visualization. Gosling balances expressiveness for comprehensive multi-scale genomics data visualizations with accessibility for domain scientists. Our accompanying JavaScript toolkit called Gosling.js provides scalable and interactive rendering. Gosling.js is built on top of an existing platform for web-based genomics data visualization to further simplify the visualization of common genomics data formats. We demonstrate the expressiveness of the grammar through a variety of real-world examples. Furthermore, we show how Gosling supports the design of novel genomics visualizations. An online editor and examples of Gosling.js, its source code, and documentation are available at <uri>https://gosling.js.org</uri>.",
                "AuthorNamesDeduped": "Sehi L'Yi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorNames": "Sehi LYi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard Medical School, Boston, MA, USA;Harvard Medical School, Boston, MA, USA;Harvard School of Engineering and Applied Sciences, Boston, MA, USA;Harvard Medical School, Boston, MA, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2013.214;10.1109/tvcg.2018.2865141;10.1109/tvcg.2017.2745978;10.1109/tvcg.2013.179;10.1109/tvcg.2009.167;10.1109/tvcg.2010.163;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2016.2598796;10.1109/tvcg.2020.3030372;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934555",
                "AuthorKeywords": "Genomics,declarative specification,visualization grammar",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 90,
                "DownloadsXplore": 1426,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 266,
                "i": [
                    266
                ]
            }
        },
        {
            "name": "Hendrik Strobelt",
            "value": 626,
            "numPapers": 91,
            "cluster": "1",
            "visible": 1,
            "index": 25,
            "x": -48.108627040199295,
            "y": -15.347964174671652,
            "vy": 0,
            "vx": 0,
            "r": 1.7207829591249282,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models",
                "DOI": "10.1109/tvcg.2022.3209479",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209479",
                "FirstPage": 1146,
                "LastPage": 1156,
                "PaperType": "J",
                "Abstract": "State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Research, China;Brown University, USA;Huggingface, USA;IBM Research, China;Harvard SEAS, USA;Harvard SEAS, USA;Huggingface, USA",
                "InternalReferences": "0.1109/tvcg.2020.3028976;10.1109/tvcg.2021.3114683;10.1109/tvcg.2018.2865230;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Natural language processing,language modeling,zero-shot models",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 3637,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 133,
                "i": [
                    133
                ]
            }
        },
        {
            "name": "Hanspeter Pfister",
            "value": 2190,
            "numPapers": 426,
            "cluster": "6",
            "visible": 1,
            "index": 26,
            "x": 46.73140878326114,
            "y": -21.591096154010888,
            "vy": 0,
            "vx": 0,
            "r": 3.5215889464594126,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models",
                "DOI": "10.1109/tvcg.2022.3209479",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209479",
                "FirstPage": 1146,
                "LastPage": 1156,
                "PaperType": "J",
                "Abstract": "State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Research, China;Brown University, USA;Huggingface, USA;IBM Research, China;Harvard SEAS, USA;Harvard SEAS, USA;Huggingface, USA",
                "InternalReferences": "0.1109/tvcg.2020.3028976;10.1109/tvcg.2021.3114683;10.1109/tvcg.2018.2865230;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Natural language processing,language modeling,zero-shot models",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 3637,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 133,
                "i": [
                    133
                ]
            }
        },
        {
            "name": "Zhihua Jin",
            "value": 104,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 27,
            "x": -20.24521394750863,
            "y": 48.374903743776095,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis",
                "DOI": "10.1109/tvcg.2021.3114794",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114794",
                "FirstPage": 802,
                "LastPage": 812,
                "PaperType": "J",
                "Abstract": "Multimodal sentiment analysis aims to recognize people's attitudes from multiple communication channels such as verbal content (i.e., text), voice, and facial expressions. It has become a vibrant and important research topic in natural language processing. Much research focuses on modeling the complex intra- and inter-modal interactions between different communication channels. However, current multimodal models with strong performance are often deep-learning-based techniques and work like black boxes. It is not clear how models utilize multimodal information for sentiment predictions. Despite recent advances in techniques for enhancing the explainability of machine learning models, they often target unimodal scenarios (e.g., images, sentences), and little research has been done on explaining multimodal models. In this paper, we present an interactive visual analytics system, M2 Lens, to visualize and explain multimodal models for sentiment analysis. M2 Lens provides explanations on intra- and inter-modal interactions at the global, subset, and local levels. Specifically, it summarizes the influence of three typical interaction types (i.e., dominance, complement, and conflict) on the model predictions. Moreover, M2 Lens identifies frequent and influential multimodal features and supports the multi-faceted exploration of model behaviors from language, acoustic, and visual modalities. Through two case studies and expert interviews, we demonstrate our system can help users gain deep insights into the multimodal models for sentiment analysis.",
                "AuthorNamesDeduped": "Xingbo Wang 0001;Jianben He;Zhihua Jin;Muqiao Yang;Yong Wang 0021;Huamin Qu",
                "AuthorNames": "Xingbo Wang;Jianben He;Zhihua Jin;Muqiao Yang;Yong Wang;Huamin Qu",
                "AuthorAffiliation": "University of Science and Technology, United States;University of Science and Technology, United States;University of Science and Technology, United States;Carnegie Mellon University, United States;Carnegie Mellon University, United States;University of Science and Technology, United States",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2017.2744683;10.1109/vast.2015.7347637;10.1109/vast47406.2019.8986948;10.1109/tvcg.2017.2744718;10.1109/tvcg.2014.2346482;10.1109/tvcg.2015.2467622;10.1109/visual.1998.745348;10.1109/tvcg.2016.2598828;10.1109/tvcg.2017.2744158;10.1109/tvcg.2019.2934619;10.1109/tvcg.2019.2934656;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Multimodal models,sentiment analysis,explainable machine learning",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 2860,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 258,
                "i": [
                    258
                ]
            }
        },
        {
            "name": "Muqiao Yang",
            "value": 61,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 28,
            "x": -18.06840736265379,
            "y": -50.23477535907967,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis",
                "DOI": "10.1109/tvcg.2021.3114794",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114794",
                "FirstPage": 802,
                "LastPage": 812,
                "PaperType": "J",
                "Abstract": "Multimodal sentiment analysis aims to recognize people's attitudes from multiple communication channels such as verbal content (i.e., text), voice, and facial expressions. It has become a vibrant and important research topic in natural language processing. Much research focuses on modeling the complex intra- and inter-modal interactions between different communication channels. However, current multimodal models with strong performance are often deep-learning-based techniques and work like black boxes. It is not clear how models utilize multimodal information for sentiment predictions. Despite recent advances in techniques for enhancing the explainability of machine learning models, they often target unimodal scenarios (e.g., images, sentences), and little research has been done on explaining multimodal models. In this paper, we present an interactive visual analytics system, M2 Lens, to visualize and explain multimodal models for sentiment analysis. M2 Lens provides explanations on intra- and inter-modal interactions at the global, subset, and local levels. Specifically, it summarizes the influence of three typical interaction types (i.e., dominance, complement, and conflict) on the model predictions. Moreover, M2 Lens identifies frequent and influential multimodal features and supports the multi-faceted exploration of model behaviors from language, acoustic, and visual modalities. Through two case studies and expert interviews, we demonstrate our system can help users gain deep insights into the multimodal models for sentiment analysis.",
                "AuthorNamesDeduped": "Xingbo Wang 0001;Jianben He;Zhihua Jin;Muqiao Yang;Yong Wang 0021;Huamin Qu",
                "AuthorNames": "Xingbo Wang;Jianben He;Zhihua Jin;Muqiao Yang;Yong Wang;Huamin Qu",
                "AuthorAffiliation": "University of Science and Technology, United States;University of Science and Technology, United States;University of Science and Technology, United States;Carnegie Mellon University, United States;Carnegie Mellon University, United States;University of Science and Technology, United States",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2017.2744683;10.1109/vast.2015.7347637;10.1109/vast47406.2019.8986948;10.1109/tvcg.2017.2744718;10.1109/tvcg.2014.2346482;10.1109/tvcg.2015.2467622;10.1109/visual.1998.745348;10.1109/tvcg.2016.2598828;10.1109/tvcg.2017.2744158;10.1109/tvcg.2019.2934619;10.1109/tvcg.2019.2934656;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Multimodal models,sentiment analysis,explainable machine learning",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 2860,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 258,
                "i": [
                    258
                ]
            }
        },
        {
            "name": "Jiazhi Xia",
            "value": 469,
            "numPapers": 173,
            "cluster": "1",
            "visible": 1,
            "index": 29,
            "x": 48.07809275820085,
            "y": 25.268498109975482,
            "vy": 0,
            "vx": 0,
            "r": 1.5400115141047783,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive Visual Cluster Analysis by Contrastive Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2022.3209423",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209423",
                "FirstPage": 734,
                "LastPage": 744,
                "PaperType": "J",
                "Abstract": "We propose a contrastive dimensionality reduction approach (CDR) for interactive visual cluster analysis. Although dimensionality reduction of high-dimensional data is widely used in visual cluster analysis in conjunction with scatterplots, there are several limitations on effective visual cluster analysis. First, it is non-trivial for an embedding to present clear visual cluster separation when keeping neighborhood structures. Second, as cluster analysis is a subjective task, user steering is required. However, it is also non-trivial to enable interactions in dimensionality reduction. To tackle these problems, we introduce contrastive learning into dimensionality reduction for high-quality embedding. We then redefine the gradient of the loss function to the negative pairs to enhance the visual cluster separation of embedding results. Based on the contrastive learning scheme, we employ link-based interactions to steer embeddings. After that, we implement a prototype visual interface that integrates the proposed algorithms and a set of visualizations. Quantitative experiments demonstrate that CDR outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation. The ablation experiment demonstrates the effectiveness of gradient redefinition. The user study verifies that CDR outperforms t-SNE and UMAP in the task of cluster identification. We also showcase two use cases on real-world datasets to present the effectiveness of link-based interactions.",
                "AuthorNamesDeduped": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu 0004;Yang Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu;Yang Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;Cardiff University, UK;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2018.2864477;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467615;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2010.138;10.1109/tvcg.2012.207;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2745258;10.1109/vast50239.2020.00015;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Dimensionality reduction,visual cluster analysis,contrastive learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1384,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 141,
                "i": [
                    141
                ]
            }
        },
        {
            "name": "Ying Zhao 0001",
            "value": 402,
            "numPapers": 163,
            "cluster": "1",
            "visible": 1,
            "index": 30,
            "x": -53.40266400194313,
            "y": 14.076770847590282,
            "vy": 0,
            "vx": 0,
            "r": 1.46286701208981,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive Visual Cluster Analysis by Contrastive Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2022.3209423",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209423",
                "FirstPage": 734,
                "LastPage": 744,
                "PaperType": "J",
                "Abstract": "We propose a contrastive dimensionality reduction approach (CDR) for interactive visual cluster analysis. Although dimensionality reduction of high-dimensional data is widely used in visual cluster analysis in conjunction with scatterplots, there are several limitations on effective visual cluster analysis. First, it is non-trivial for an embedding to present clear visual cluster separation when keeping neighborhood structures. Second, as cluster analysis is a subjective task, user steering is required. However, it is also non-trivial to enable interactions in dimensionality reduction. To tackle these problems, we introduce contrastive learning into dimensionality reduction for high-quality embedding. We then redefine the gradient of the loss function to the negative pairs to enhance the visual cluster separation of embedding results. Based on the contrastive learning scheme, we employ link-based interactions to steer embeddings. After that, we implement a prototype visual interface that integrates the proposed algorithms and a set of visualizations. Quantitative experiments demonstrate that CDR outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation. The ablation experiment demonstrates the effectiveness of gradient redefinition. The user study verifies that CDR outperforms t-SNE and UMAP in the task of cluster identification. We also showcase two use cases on real-world datasets to present the effectiveness of link-based interactions.",
                "AuthorNamesDeduped": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu 0004;Yang Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu;Yang Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;Cardiff University, UK;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2018.2864477;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467615;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2010.138;10.1109/tvcg.2012.207;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2745258;10.1109/vast50239.2020.00015;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Dimensionality reduction,visual cluster analysis,contrastive learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1384,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 141,
                "i": [
                    141
                ]
            }
        },
        {
            "name": "Yang Chen",
            "value": 247,
            "numPapers": 72,
            "cluster": "1",
            "visible": 1,
            "index": 31,
            "x": 30.354442781599538,
            "y": -47.20813281012711,
            "vy": 0,
            "vx": 0,
            "r": 1.284398388025331,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive Visual Cluster Analysis by Contrastive Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2022.3209423",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209423",
                "FirstPage": 734,
                "LastPage": 744,
                "PaperType": "J",
                "Abstract": "We propose a contrastive dimensionality reduction approach (CDR) for interactive visual cluster analysis. Although dimensionality reduction of high-dimensional data is widely used in visual cluster analysis in conjunction with scatterplots, there are several limitations on effective visual cluster analysis. First, it is non-trivial for an embedding to present clear visual cluster separation when keeping neighborhood structures. Second, as cluster analysis is a subjective task, user steering is required. However, it is also non-trivial to enable interactions in dimensionality reduction. To tackle these problems, we introduce contrastive learning into dimensionality reduction for high-quality embedding. We then redefine the gradient of the loss function to the negative pairs to enhance the visual cluster separation of embedding results. Based on the contrastive learning scheme, we employ link-based interactions to steer embeddings. After that, we implement a prototype visual interface that integrates the proposed algorithms and a set of visualizations. Quantitative experiments demonstrate that CDR outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation. The ablation experiment demonstrates the effectiveness of gradient redefinition. The user study verifies that CDR outperforms t-SNE and UMAP in the task of cluster identification. We also showcase two use cases on real-world datasets to present the effectiveness of link-based interactions.",
                "AuthorNamesDeduped": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu 0004;Yang Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu;Yang Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;Cardiff University, UK;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2018.2864477;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467615;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2010.138;10.1109/tvcg.2012.207;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2745258;10.1109/vast50239.2020.00015;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Dimensionality reduction,visual cluster analysis,contrastive learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1384,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 141,
                "i": [
                    141
                ]
            }
        },
        {
            "name": "Jing Yang 0001",
            "value": 692,
            "numPapers": 101,
            "cluster": "3",
            "visible": 1,
            "index": 32,
            "x": 9.655927474313346,
            "y": 56.18507866516519,
            "vy": 0,
            "vx": 0,
            "r": 1.7967760506620611,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "Semantic Image Browser: Bridging Information Visualization with Automated Intelligent Image Analysis",
                "DOI": "10.1109/vast.2006.261425",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261425",
                "FirstPage": 191,
                "LastPage": 198,
                "PaperType": "C",
                "Abstract": "Browsing and retrieving images from large image collections are becoming common and important activities. Semantic image analysis techniques, which automatically detect high level semantic contents of images for annotation, are promising solutions toward this problem. However, few efforts have been made to convey the annotation results to users in an intuitive manner to enable effective image browsing and retrieval. There is also a lack of methods to monitor and evaluate the automatic image analysis algorithms due to the high dimensional nature of image data, features, and contents. In this paper, we propose a novel, scalable semantic image browser by applying existing information visualization techniques to semantic image analysis. This browser not only allows users to effectively browse and search in large image databases according to the semantic content of images, but also allows analysts to evaluate their annotation process through interactive visual exploration. The major visualization components of this browser are multi-dimensional scaling (MDS) based image layout, the value and relation (VaR) display that allows effective high dimensional visualization without dimension reduction, and a rich set of interaction tools such as search by sample images and content relationship detection. Our preliminary user study showed that the browser was easy to use and understand, and effective in supporting image browsing and retrieval tasks",
                "AuthorNamesDeduped": "Jing Yang 0001;Jianping Fan 0001;Daniel Hubball;Yuli Gao;Hangzai Luo;William Ribarsky;Matthew O. Ward",
                "AuthorNames": "Jing Yang;Jianping Fan;Daniel Hubball;Yuli Gao;Hangzai Luo;William Ribarsky;Matthew Ward",
                "AuthorAffiliation": "Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, Worcester Polytechnic Institute, USA",
                "InternalReferences": "0.1109/infvis.1999.801855;10.1109/infvis.1995.528686;10.1109/infvis.2003.1249009;10.1109/visual.1995.485140;10.1109/infvis.2004.71;10.1109/infvis.1996.559223",
                "AuthorKeywords": "Image retrieval, image layout, semantic image classification,multi-dimensional visualization, visual analytics",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2242,
                "i": [
                    2242
                ]
            }
        },
        {
            "name": "William Ribarsky",
            "value": 697,
            "numPapers": 103,
            "cluster": "5",
            "visible": 1,
            "index": 33,
            "x": -45.76062462351757,
            "y": -35.439599801147835,
            "vy": 0,
            "vx": 0,
            "r": 1.8025331030512377,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "Semantic Image Browser: Bridging Information Visualization with Automated Intelligent Image Analysis",
                "DOI": "10.1109/vast.2006.261425",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261425",
                "FirstPage": 191,
                "LastPage": 198,
                "PaperType": "C",
                "Abstract": "Browsing and retrieving images from large image collections are becoming common and important activities. Semantic image analysis techniques, which automatically detect high level semantic contents of images for annotation, are promising solutions toward this problem. However, few efforts have been made to convey the annotation results to users in an intuitive manner to enable effective image browsing and retrieval. There is also a lack of methods to monitor and evaluate the automatic image analysis algorithms due to the high dimensional nature of image data, features, and contents. In this paper, we propose a novel, scalable semantic image browser by applying existing information visualization techniques to semantic image analysis. This browser not only allows users to effectively browse and search in large image databases according to the semantic content of images, but also allows analysts to evaluate their annotation process through interactive visual exploration. The major visualization components of this browser are multi-dimensional scaling (MDS) based image layout, the value and relation (VaR) display that allows effective high dimensional visualization without dimension reduction, and a rich set of interaction tools such as search by sample images and content relationship detection. Our preliminary user study showed that the browser was easy to use and understand, and effective in supporting image browsing and retrieval tasks",
                "AuthorNamesDeduped": "Jing Yang 0001;Jianping Fan 0001;Daniel Hubball;Yuli Gao;Hangzai Luo;William Ribarsky;Matthew O. Ward",
                "AuthorNames": "Jing Yang;Jianping Fan;Daniel Hubball;Yuli Gao;Hangzai Luo;William Ribarsky;Matthew Ward",
                "AuthorAffiliation": "Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, Worcester Polytechnic Institute, USA",
                "InternalReferences": "0.1109/infvis.1999.801855;10.1109/infvis.1995.528686;10.1109/infvis.2003.1249009;10.1109/visual.1995.485140;10.1109/infvis.2004.71;10.1109/infvis.1996.559223",
                "AuthorKeywords": "Image retrieval, image layout, semantic image classification,multi-dimensional visualization, visual analytics",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2242,
                "i": [
                    2242
                ]
            }
        },
        {
            "name": "Matthew O. Ward",
            "value": 1007,
            "numPapers": 85,
            "cluster": "6",
            "visible": 1,
            "index": 34,
            "x": 58.536154406261204,
            "y": -4.849600738859532,
            "vy": 0,
            "vx": 0,
            "r": 2.159470351180196,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "Semantic Image Browser: Bridging Information Visualization with Automated Intelligent Image Analysis",
                "DOI": "10.1109/vast.2006.261425",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261425",
                "FirstPage": 191,
                "LastPage": 198,
                "PaperType": "C",
                "Abstract": "Browsing and retrieving images from large image collections are becoming common and important activities. Semantic image analysis techniques, which automatically detect high level semantic contents of images for annotation, are promising solutions toward this problem. However, few efforts have been made to convey the annotation results to users in an intuitive manner to enable effective image browsing and retrieval. There is also a lack of methods to monitor and evaluate the automatic image analysis algorithms due to the high dimensional nature of image data, features, and contents. In this paper, we propose a novel, scalable semantic image browser by applying existing information visualization techniques to semantic image analysis. This browser not only allows users to effectively browse and search in large image databases according to the semantic content of images, but also allows analysts to evaluate their annotation process through interactive visual exploration. The major visualization components of this browser are multi-dimensional scaling (MDS) based image layout, the value and relation (VaR) display that allows effective high dimensional visualization without dimension reduction, and a rich set of interaction tools such as search by sample images and content relationship detection. Our preliminary user study showed that the browser was easy to use and understand, and effective in supporting image browsing and retrieval tasks",
                "AuthorNamesDeduped": "Jing Yang 0001;Jianping Fan 0001;Daniel Hubball;Yuli Gao;Hangzai Luo;William Ribarsky;Matthew O. Ward",
                "AuthorNames": "Jing Yang;Jianping Fan;Daniel Hubball;Yuli Gao;Hangzai Luo;William Ribarsky;Matthew Ward",
                "AuthorAffiliation": "Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, University of North Carolina, Charlotte, Charlotte, USA;Department of Computer Science, Worcester Polytechnic Institute, USA",
                "InternalReferences": "0.1109/infvis.1999.801855;10.1109/infvis.1995.528686;10.1109/infvis.2003.1249009;10.1109/visual.1995.485140;10.1109/infvis.2004.71;10.1109/infvis.1996.559223",
                "AuthorKeywords": "Image retrieval, image layout, semantic image classification,multi-dimensional visualization, visual analytics",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2242,
                "i": [
                    2242
                ]
            }
        },
        {
            "name": "Haipeng Zeng",
            "value": 96,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 35,
            "x": -40.46082171267107,
            "y": 43.73696270130614,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos",
                "DOI": "10.1109/tvcg.2019.2934656",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934656",
                "FirstPage": 927,
                "LastPage": 937,
                "PaperType": "J",
                "Abstract": "Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.",
                "AuthorNamesDeduped": "Haipeng Zeng;Xingbo Wang 0001;Aoyu Wu;Yong Wang 0021;Quan Li;Alex Endert;Huamin Qu",
                "AuthorNames": "Haipeng Zeng;Xingbo Wang;Aoyu Wu;Yong Wang;Quan Li;Alex Endert;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;WeBank, AI Group, China;Georgia Institute of Technology;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/vast.2006.261431;10.1109/tvcg.2013.168;10.1109/vast.2009.5333919;10.1109/tvcg.2017.2745181;10.1109/tvcg.2010.183;10.1109/vast.2014.7042496",
                "AuthorKeywords": "Emotion,coherence,video analysis,visual analysis",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1627,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 615,
                "i": [
                    615
                ]
            }
        },
        {
            "name": "Aoyu Wu",
            "value": 253,
            "numPapers": 143,
            "cluster": "5",
            "visible": 1,
            "index": 36,
            "x": 0.2947222361892172,
            "y": -60.41451099531879,
            "vy": 0,
            "vx": 0,
            "r": 1.291306850892343,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos",
                "DOI": "10.1109/tvcg.2019.2934656",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934656",
                "FirstPage": 927,
                "LastPage": 937,
                "PaperType": "J",
                "Abstract": "Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.",
                "AuthorNamesDeduped": "Haipeng Zeng;Xingbo Wang 0001;Aoyu Wu;Yong Wang 0021;Quan Li;Alex Endert;Huamin Qu",
                "AuthorNames": "Haipeng Zeng;Xingbo Wang;Aoyu Wu;Yong Wang;Quan Li;Alex Endert;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;WeBank, AI Group, China;Georgia Institute of Technology;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/vast.2006.261431;10.1109/tvcg.2013.168;10.1109/vast.2009.5333919;10.1109/tvcg.2017.2745181;10.1109/tvcg.2010.183;10.1109/vast.2014.7042496",
                "AuthorKeywords": "Emotion,coherence,video analysis,visual analysis",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1627,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 615,
                "i": [
                    615
                ]
            }
        },
        {
            "name": "Quan Li",
            "value": 143,
            "numPapers": 84,
            "cluster": "4",
            "visible": 1,
            "index": 37,
            "x": 41.144395618376954,
            "y": 45.3556910342956,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos",
                "DOI": "10.1109/tvcg.2019.2934656",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934656",
                "FirstPage": 927,
                "LastPage": 937,
                "PaperType": "J",
                "Abstract": "Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.",
                "AuthorNamesDeduped": "Haipeng Zeng;Xingbo Wang 0001;Aoyu Wu;Yong Wang 0021;Quan Li;Alex Endert;Huamin Qu",
                "AuthorNames": "Haipeng Zeng;Xingbo Wang;Aoyu Wu;Yong Wang;Quan Li;Alex Endert;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;WeBank, AI Group, China;Georgia Institute of Technology;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/vast.2006.261431;10.1109/tvcg.2013.168;10.1109/vast.2009.5333919;10.1109/tvcg.2017.2745181;10.1109/tvcg.2010.183;10.1109/vast.2014.7042496",
                "AuthorKeywords": "Emotion,coherence,video analysis,visual analysis",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1627,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 615,
                "i": [
                    615
                ]
            }
        },
        {
            "name": "Alex Endert",
            "value": 1146,
            "numPapers": 196,
            "cluster": "5",
            "visible": 1,
            "index": 38,
            "x": -61.78358926815812,
            "y": -5.726089166572359,
            "vy": 0,
            "vx": 0,
            "r": 2.319516407599309,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos",
                "DOI": "10.1109/tvcg.2019.2934656",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934656",
                "FirstPage": 927,
                "LastPage": 937,
                "PaperType": "J",
                "Abstract": "Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.",
                "AuthorNamesDeduped": "Haipeng Zeng;Xingbo Wang 0001;Aoyu Wu;Yong Wang 0021;Quan Li;Alex Endert;Huamin Qu",
                "AuthorNames": "Haipeng Zeng;Xingbo Wang;Aoyu Wu;Yong Wang;Quan Li;Alex Endert;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;WeBank, AI Group, China;Georgia Institute of Technology;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/vast.2006.261431;10.1109/tvcg.2013.168;10.1109/vast.2009.5333919;10.1109/tvcg.2017.2745181;10.1109/tvcg.2010.183;10.1109/vast.2014.7042496",
                "AuthorKeywords": "Emotion,coherence,video analysis,visual analysis",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1627,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 615,
                "i": [
                    615
                ]
            }
        },
        {
            "name": "Jason K. Wong",
            "value": 38,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 39,
            "x": 50.06298462654409,
            "y": -37.99602045323181,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "CohortVA: A Visual Analytic System for Interactive Exploration of Cohorts based on Historical Data",
                "DOI": "10.1109/tvcg.2022.3209483",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209483",
                "FirstPage": 756,
                "LastPage": 766,
                "PaperType": "J",
                "Abstract": "In history research, cohort analysis seeks to identify social structures and figure mobilities by studying the group-based behavior of historical figures. Prior works mainly employ automatic data mining approaches, lacking effective visual explanation. In this paper, we present CohortVA, an interactive visual analytic approach that enables historians to incorporate expertise and insight into the iterative exploration process. The kernel of CohortVA is a novel identification model that generates candidate cohorts and constructs cohort features by means of pre-built knowledge graphs constructed from large-scale history databases. We propose a set of coordinated views to illustrate identified cohorts and features coupled with historical events and figure profiles. Two case studies and interviews with historians demonstrate that CohortVA can greatly enhance the capabilities of cohort identifications, figure authentications, and hypothesis generation.",
                "AuthorNamesDeduped": "Wei Zhang;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen 0001;Wei Chen 0001",
                "AuthorNames": "Wei Zhang;Wei Chen;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;TMCC, CS, Nankai University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Fudan University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2010.159;10.1109/tvcg.2018.2865049;10.1109/tvcg.2021.3114836;10.1109/tvcg.2015.2467971;10.1109/tvcg.2016.2598469;10.1109/tvcg.2015.2467620;10.1109/tvcg.2020.3030370;10.1109/tvcg.2020.3030347;10.1109/tvcg.2021.3114773;10.1109/tvcg.2021.3114790",
                "AuthorKeywords": "Historical cohort analysis,machine learning,interpretability,visual analytic",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 881,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 167,
                "i": [
                    167
                ]
            }
        },
        {
            "name": "Xumeng Wang",
            "value": 58,
            "numPapers": 39,
            "cluster": "3",
            "visible": 1,
            "index": 40,
            "x": -11.390445481056078,
            "y": 62.61196173051192,
            "vy": 0,
            "vx": 0,
            "r": 1.0667818077144502,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "CohortVA: A Visual Analytic System for Interactive Exploration of Cohorts based on Historical Data",
                "DOI": "10.1109/tvcg.2022.3209483",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209483",
                "FirstPage": 756,
                "LastPage": 766,
                "PaperType": "J",
                "Abstract": "In history research, cohort analysis seeks to identify social structures and figure mobilities by studying the group-based behavior of historical figures. Prior works mainly employ automatic data mining approaches, lacking effective visual explanation. In this paper, we present CohortVA, an interactive visual analytic approach that enables historians to incorporate expertise and insight into the iterative exploration process. The kernel of CohortVA is a novel identification model that generates candidate cohorts and constructs cohort features by means of pre-built knowledge graphs constructed from large-scale history databases. We propose a set of coordinated views to illustrate identified cohorts and features coupled with historical events and figure profiles. Two case studies and interviews with historians demonstrate that CohortVA can greatly enhance the capabilities of cohort identifications, figure authentications, and hypothesis generation.",
                "AuthorNamesDeduped": "Wei Zhang;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen 0001;Wei Chen 0001",
                "AuthorNames": "Wei Zhang;Wei Chen;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;TMCC, CS, Nankai University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Fudan University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2010.159;10.1109/tvcg.2018.2865049;10.1109/tvcg.2021.3114836;10.1109/tvcg.2015.2467971;10.1109/tvcg.2016.2598469;10.1109/tvcg.2015.2467620;10.1109/tvcg.2020.3030370;10.1109/tvcg.2020.3030347;10.1109/tvcg.2021.3114773;10.1109/tvcg.2021.3114790",
                "AuthorKeywords": "Historical cohort analysis,machine learning,interpretability,visual analytic",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 881,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 167,
                "i": [
                    167
                ]
            }
        },
        {
            "name": "Siming Chen 0001",
            "value": 138,
            "numPapers": 140,
            "cluster": "1",
            "visible": 1,
            "index": 41,
            "x": -34.31071527657914,
            "y": -54.523158540289266,
            "vy": 0,
            "vx": 0,
            "r": 1.1588946459412781,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "CohortVA: A Visual Analytic System for Interactive Exploration of Cohorts based on Historical Data",
                "DOI": "10.1109/tvcg.2022.3209483",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209483",
                "FirstPage": 756,
                "LastPage": 766,
                "PaperType": "J",
                "Abstract": "In history research, cohort analysis seeks to identify social structures and figure mobilities by studying the group-based behavior of historical figures. Prior works mainly employ automatic data mining approaches, lacking effective visual explanation. In this paper, we present CohortVA, an interactive visual analytic approach that enables historians to incorporate expertise and insight into the iterative exploration process. The kernel of CohortVA is a novel identification model that generates candidate cohorts and constructs cohort features by means of pre-built knowledge graphs constructed from large-scale history databases. We propose a set of coordinated views to illustrate identified cohorts and features coupled with historical events and figure profiles. Two case studies and interviews with historians demonstrate that CohortVA can greatly enhance the capabilities of cohort identifications, figure authentications, and hypothesis generation.",
                "AuthorNamesDeduped": "Wei Zhang;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen 0001;Wei Chen 0001",
                "AuthorNames": "Wei Zhang;Wei Chen;Jason K. Wong;Xumeng Wang;Youcheng Gong;Rongchen Zhu;Kai Liu;Zihan Yan;Siwei Tan;Huamin Qu;Siming Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;TMCC, CS, Nankai University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Hong Kong University of Science and Technology, China;Fudan University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2010.159;10.1109/tvcg.2018.2865049;10.1109/tvcg.2021.3114836;10.1109/tvcg.2015.2467971;10.1109/tvcg.2016.2598469;10.1109/tvcg.2015.2467620;10.1109/tvcg.2020.3030370;10.1109/tvcg.2020.3030347;10.1109/tvcg.2021.3114773;10.1109/tvcg.2021.3114790",
                "AuthorKeywords": "Historical cohort analysis,machine learning,interpretability,visual analytic",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 881,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 167,
                "i": [
                    167
                ]
            }
        },
        {
            "name": "Benjamin Bach",
            "value": 442,
            "numPapers": 189,
            "cluster": "5",
            "visible": 1,
            "index": 42,
            "x": 62.87361064289961,
            "y": 17.231050018065154,
            "vy": 0,
            "vx": 0,
            "r": 1.508923431203224,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Exploring Interactions with Printed Data Visualizations in Augmented Reality",
                "DOI": "10.1109/tvcg.2022.3209386",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209386",
                "FirstPage": 418,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.",
                "AuthorNamesDeduped": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorNames": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, USA;Harvard University, USA;Carnegie Mellon University, USA;Hong Kong University of Science and Technology, Hong Kong, USA;Hong Kong University of Science and Technology, Hong Kong, USA;University of Edinburgh, United Kingdom;Hong Kong University of Science and Technology, Hong Kong, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467201;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934283;10.1109/tvcg.2020.3030334;10.1109/tvcg.2013.121;10.1109/tvcg.2013.134;10.1109/tvcg.2017.2744319;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2020.3028948;10.1109/tvcg.2010.177;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.237;10.1109/tvcg.2020.3030392;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2599211",
                "AuthorKeywords": "Interaction design,augmented reality,paper interaction,tangible user interface,printed data visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 1055,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 147,
                "i": [
                    147
                ]
            }
        },
        {
            "name": "Eytan Adar",
            "value": 344,
            "numPapers": 82,
            "cluster": "5",
            "visible": 1,
            "index": 43,
            "x": -58.67884169898138,
            "y": 30.113012749738004,
            "vy": 0,
            "vx": 0,
            "r": 1.3960852043753598,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Roboviz: A Game-Centered Project for Information Visualization Education",
                "DOI": "10.1109/tvcg.2022.3209402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209402",
                "FirstPage": 268,
                "LastPage": 277,
                "PaperType": "J",
                "Abstract": "Due to their pedagogical advantages, large final projects in information visualization courses have become standard practice. Students take on a client–real or simulated–a dataset, and a vague set of goals to create a complete visualization or visual analytics product. Unfortunately, many projects suffer from ambiguous goals, over or under-constrained client expectations, and data constraints that have students spending their time on non-visualization problems (e.g., data cleaning). These are important skills, but are often secondary course objectives, and unforeseen problems can majorly hinder students. We created an alternative for our information visualization course: Roboviz, a real-time game for students to play by building a visualization-focused interface. By designing the game mechanics around four different data types, the project allows students to create a wide array of interactive visualizations. Student teams play against their classmates with the objective to collect the most (good) robots. The flexibility of the strategies encourages variability, a range of approaches, and solving wicked design constraints. We describe the construction of this game and report on student projects over two years. We further show how the game mechanics can be extended or adapted to other game-based projects.",
                "AuthorNamesDeduped": "Eytan Adar;Elsie Lee-Robbins",
                "AuthorNames": "Eytan Adar;Elsie Lee-Robbins",
                "AuthorAffiliation": "University of Michigan, School of Information, USA;University of Michigan, School of Information, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030375;10.1109/visual.1998.745348;10.1109/tvcg.2016.2599338;10.1109/tvcg.2020.3030464;10.1109/infvis.2004.27;10.1109/vast.2009.5333245;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "pedagogy,final project,game interfaces",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 438,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 217,
                "i": [
                    217
                ]
            }
        },
        {
            "name": "Jason Dykes",
            "value": 781,
            "numPapers": 165,
            "cluster": "5",
            "visible": 1,
            "index": 44,
            "x": 23.18893409286655,
            "y": -62.548168123748376,
            "vy": 0,
            "vx": 0,
            "r": 1.899251583189407,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Jonathan C. Roberts",
            "value": 80,
            "numPapers": 66,
            "cluster": "5",
            "visible": 1,
            "index": 45,
            "x": 25.432917514684455,
            "y": 62.475328784178934,
            "vy": 0,
            "vx": 0,
            "r": 1.092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Jean-Daniel Fekete",
            "value": 1520,
            "numPapers": 158,
            "cluster": "3",
            "visible": 1,
            "index": 46,
            "x": -61.62111385388559,
            "y": -29.203395819775295,
            "vy": 0,
            "vx": 0,
            "r": 2.7501439263097294,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Visual Sedimentation",
                "DOI": "10.1109/tvcg.2013.227",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.227",
                "FirstPage": 2446,
                "LastPage": 2455,
                "PaperType": "J",
                "Abstract": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.",
                "AuthorNamesDeduped": "Samuel Huron;Romain Vuillemot;Jean-Daniel Fekete",
                "AuthorNames": "Samuel Huron;Romain Vuillemot;Jean-Daniel Fekete",
                "AuthorAffiliation": "IRI, INRIA, France;INRIA, France;INRIA, France",
                "InternalReferences": "0.1109/vast.2012.6400552;10.1109/tvcg.2012.291;10.1109/tvcg.2011.179;10.1109/infvis.2003.1249014;10.1109/tvcg.2011.185;10.1109/tvcg.2008.166;10.1109/tvcg.2008.171;10.1109/infvis.2004.65;10.1109/tvcg.2007.70539;10.1109/tvcg.2013.227",
                "AuthorKeywords": "Design, Information Visualization, Dynamic visualization, Dynamic data, Data stream, Real time, Metaphor",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 1132,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1312,
                "i": [
                    1312
                ]
            }
        },
        {
            "name": "Uta Hinrichs",
            "value": 169,
            "numPapers": 62,
            "cluster": "5",
            "visible": 1,
            "index": 47,
            "x": 65.86106411715086,
            "y": -20.305669980489306,
            "vy": 0,
            "vx": 0,
            "r": 1.1945883707541738,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Mashael AlKadi",
            "value": 21,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 48,
            "x": -35.212522419896,
            "y": 60.08392684261177,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Samuel Huron",
            "value": 216,
            "numPapers": 88,
            "cluster": "5",
            "visible": 1,
            "index": 49,
            "x": -14.77145920919575,
            "y": -68.78810938549674,
            "vy": 0,
            "vx": 0,
            "r": 1.2487046632124352,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Visual Sedimentation",
                "DOI": "10.1109/tvcg.2013.227",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.227",
                "FirstPage": 2446,
                "LastPage": 2455,
                "PaperType": "J",
                "Abstract": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.",
                "AuthorNamesDeduped": "Samuel Huron;Romain Vuillemot;Jean-Daniel Fekete",
                "AuthorNames": "Samuel Huron;Romain Vuillemot;Jean-Daniel Fekete",
                "AuthorAffiliation": "IRI, INRIA, France;INRIA, France;INRIA, France",
                "InternalReferences": "0.1109/vast.2012.6400552;10.1109/tvcg.2012.291;10.1109/tvcg.2011.179;10.1109/infvis.2003.1249014;10.1109/tvcg.2011.185;10.1109/tvcg.2008.166;10.1109/tvcg.2008.171;10.1109/infvis.2004.65;10.1109/tvcg.2007.70539;10.1109/tvcg.2013.227",
                "AuthorKeywords": "Design, Information Visualization, Dynamic visualization, Dynamic data, Data stream, Real time, Metaphor",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 1132,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1312,
                "i": [
                    1312
                ]
            }
        },
        {
            "name": "Charles Perin",
            "value": 423,
            "numPapers": 178,
            "cluster": "3",
            "visible": 1,
            "index": 50,
            "x": 57.93418848256449,
            "y": 41.15373379010332,
            "vy": 0,
            "vx": 0,
            "r": 1.4870466321243523,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Wolfgang Aigner",
            "value": 124,
            "numPapers": 71,
            "cluster": "5",
            "visible": 1,
            "index": 51,
            "x": -71.21258889021408,
            "y": 8.87508780538854,
            "vy": 0,
            "vx": 0,
            "r": 1.1427748992515832,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Sheelagh Carpendale",
            "value": 1226,
            "numPapers": 246,
            "cluster": "5",
            "visible": 1,
            "index": 52,
            "x": 46.96434097121082,
            "y": -55.17563481410839,
            "vy": 0,
            "vx": 0,
            "r": 2.411629245826137,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Catherine Plaisant",
            "value": 615,
            "numPapers": 46,
            "cluster": "1",
            "visible": 1,
            "index": 53,
            "x": 2.665591971922959,
            "y": 73.09510667232944,
            "vy": 0,
            "vx": 0,
            "r": 1.7081174438687392,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Understanding Barriers to Network Exploration with Visualization: A Report from the Trenches",
                "DOI": "10.1109/tvcg.2022.3209487",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209487",
                "FirstPage": 907,
                "LastPage": 917,
                "PaperType": "J",
                "Abstract": "This article reports on an in-depth study that investigates barriers to network exploration with visualizations. Network visualization tools are becoming increasingly popular, but little is known about how analysts plan and engage in the visual exploration of network data—which exploration strategies they employ, and how they prepare their data, define questions, and decide on visual mappings. Our study involved a series of workshops, interaction logging, and observations from a 6-week network exploration course. Our findings shed light on the stages that define analysts' approaches to network visualization and barriers experienced by some analysts during their network visualization processes. These barriers mainly appear before using a specific tool and include defining exploration goals, identifying relevant network structures and abstractions, or creating appropriate visual mappings for their network data. Our findings inform future work in visualization education and analyst-centered network visualization tool design.",
                "AuthorNamesDeduped": "Mashael AlKadi;Vanessa Serrano;James Scott-Brown;Catherine Plaisant;Jean-Daniel Fekete;Uta Hinrichs;Benjamin Bach",
                "AuthorNames": "Mashael AlKadi;Vanessa Serrano;James Scott-Brown;Catherine Plaisant;Jean-Daniel Fekete;Uta Hinrichs;Benjamin Bach",
                "AuthorAffiliation": "University of Edinburgh, Scotland;Ramon Llull University, Spain;University of Edinburgh, Scotland;University of Maryland, USA;Université Paris-Saclay, CNRS, Inria, France;University of Edinburgh, Scotland;University of Edinburgh, Scotland",
                "InternalReferences": "0.1109/tvcg.2021.3114830;10.1109/vast47406.2019.8986909;10.1109/tvcg.2020.3030355;10.1109/tvcg.2014.2346984;10.1109/infvis.2004.1;10.1109/vast.2011.6102441;10.1109/tvcg.2020.3030462;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934538;10.1109/tvcg.2017.2743990",
                "AuthorKeywords": "Network Exploration,Network Visualization,Qualitative Study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 499,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 181,
                "i": [
                    181
                ]
            }
        },
        {
            "name": "Cagatay Turkay",
            "value": 329,
            "numPapers": 126,
            "cluster": "5",
            "visible": 1,
            "index": 54,
            "x": -51.81815656295746,
            "y": -52.58211340766771,
            "vy": 0,
            "vx": 0,
            "r": 1.3788140472078296,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Dashboard Design Patterns",
                "DOI": "10.1109/tvcg.2022.3209448",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209448",
                "FirstPage": 342,
                "LastPage": 352,
                "PaperType": "J",
                "Abstract": "This paper introduces design patterns for dashboards to inform dashboard design processes. Despite a growing number of public examples, case studies, and general guidelines there is surprisingly little design guidance for dashboards. Such guidance is necessary to inspire designs and discuss tradeoffs in, e.g., screenspace, interaction, or information shown. Based on a systematic review of 144 dashboards, we report on eight groups of design patterns that provide common solutions in dashboard design. We discuss combinations of these patterns in “dashboard genres” such as narrative, analytical, or embedded dashboard. We ran a 2-week dashboard design workshop with 23 participants of varying expertise working on their own data and dashboards. We discuss the application of patterns for the dashboard design processes, as well as general design tradeoffs and common challenges. Our work complements previous surveys and aims to support dashboard designers and researchers in co-creation, structured design decisions, as well as future user evaluations about dashboard design guidelines. Detailed pattern descriptions and workshop material can be found online: https://dashboarddesignpatterns.github.io",
                "AuthorNamesDeduped": "Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen 0001",
                "AuthorNames": "Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen",
                "AuthorAffiliation": "University of Edinburgh, Scotland;University of Glasgow, Scotland;King's College London, England;University of Warwick, England;University of Oxford, England;University of Oxford, England;University of Oxford, England",
                "InternalReferences": "0.1109/visual.1991.175794;10.1109/infvis.1997.636792;10.1109/tvcg.2020.3030424;10.1109/tvcg.2016.2599338;10.1109/tvcg.2021.3114828;10.1109/tvcg.2018.2864903;10.1109/tvcg.2013.120;10.1109/tvcg.2010.179;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Dashboards,Design Patterns,Data Visualization,Storytelling,Visual Analytics,Qualitative Evaluation,Education",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 4205,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 134,
                "i": [
                    134
                ]
            }
        },
        {
            "name": "Michael Bostock",
            "value": 851,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 55,
            "x": 74.40113047457001,
            "y": 3.8041798204093706,
            "vy": 0,
            "vx": 0,
            "r": 1.9798503166378814,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "D³ Data-Driven Documents",
                "DOI": "10.1109/tvcg.2011.185",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.185",
                "FirstPage": 2301,
                "LastPage": 2309,
                "PaperType": "J",
                "Abstract": "Data-Driven Documents (D3) is a novel representation-transparent approach to visualization for the web. Rather than hide the underlying scenegraph within a toolkit-specific abstraction, D3 enables direct inspection and manipulation of a native representation: the standard document object model (DOM). With D3, designers selectively bind input data to arbitrary document elements, applying dynamic transforms to both generate and modify content. We show how representational transparency improves expressiveness and better integrates with developer tools than prior approaches, while offering comparable notational efficiency and retaining powerful declarative components. Immediate evaluation of operators further simplifies debugging and allows iterative development. Additionally, we demonstrate how D3 transforms naturally enable animation and interaction with dramatic performance improvements over intermediate representations.",
                "AuthorNamesDeduped": "Michael Bostock;Vadim Ogievetsky;Jeffrey Heer",
                "AuthorNames": "Michael Bostock;Vadim Ogievetsky;Jeffrey Heer",
                "AuthorAffiliation": "Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA",
                "InternalReferences": "0.1109/infvis.2000.885091;10.1109/infvis.2000.885098;10.1109/tvcg.2010.144;10.1109/tvcg.2009.174;10.1109/infvis.2004.12;10.1109/tvcg.2006.178;10.1109/infvis.2005.1532122;10.1109/tvcg.2008.166;10.1109/infvis.2004.64;10.1109/tvcg.2007.70539",
                "AuthorKeywords": "Information visualization, user interfaces, toolkits, 2D graphics",
                "AminerCitationCount": 3795,
                "CitationCountCrossRef": 2061,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 10871,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1535,
                "i": [
                    1535
                ]
            }
        },
        {
            "name": "Vadim Ogievetsky",
            "value": 696,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 56,
            "x": -57.94584908633828,
            "y": 47.87774612137991,
            "vy": 0,
            "vx": 0,
            "r": 1.8013816925734025,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "D³ Data-Driven Documents",
                "DOI": "10.1109/tvcg.2011.185",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.185",
                "FirstPage": 2301,
                "LastPage": 2309,
                "PaperType": "J",
                "Abstract": "Data-Driven Documents (D3) is a novel representation-transparent approach to visualization for the web. Rather than hide the underlying scenegraph within a toolkit-specific abstraction, D3 enables direct inspection and manipulation of a native representation: the standard document object model (DOM). With D3, designers selectively bind input data to arbitrary document elements, applying dynamic transforms to both generate and modify content. We show how representational transparency improves expressiveness and better integrates with developer tools than prior approaches, while offering comparable notational efficiency and retaining powerful declarative components. Immediate evaluation of operators further simplifies debugging and allows iterative development. Additionally, we demonstrate how D3 transforms naturally enable animation and interaction with dramatic performance improvements over intermediate representations.",
                "AuthorNamesDeduped": "Michael Bostock;Vadim Ogievetsky;Jeffrey Heer",
                "AuthorNames": "Michael Bostock;Vadim Ogievetsky;Jeffrey Heer",
                "AuthorAffiliation": "Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA;Computer Science Department, Stanford University, Stanford, CA, USA",
                "InternalReferences": "0.1109/infvis.2000.885091;10.1109/infvis.2000.885098;10.1109/tvcg.2010.144;10.1109/tvcg.2009.174;10.1109/infvis.2004.12;10.1109/tvcg.2006.178;10.1109/infvis.2005.1532122;10.1109/tvcg.2008.166;10.1109/infvis.2004.64;10.1109/tvcg.2007.70539",
                "AuthorKeywords": "Information visualization, user interfaces, toolkits, 2D graphics",
                "AminerCitationCount": 3795,
                "CitationCountCrossRef": 2061,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 10871,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1535,
                "i": [
                    1535
                ]
            }
        },
        {
            "name": "Doris Kosminsky",
            "value": 42,
            "numPapers": 34,
            "cluster": "5",
            "visible": 1,
            "index": 57,
            "x": 10.478025809169152,
            "y": -75.10133803829586,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Søren Knudsen",
            "value": 48,
            "numPapers": 38,
            "cluster": "5",
            "visible": 1,
            "index": 58,
            "x": 43.37639122203199,
            "y": 62.995941810193024,
            "vy": 0,
            "vx": 0,
            "r": 1.0552677029360968,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Jeremy Boy",
            "value": 121,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 59,
            "x": -75.17192175207843,
            "y": -17.296883537198195,
            "vy": 0,
            "vx": 0,
            "r": 1.1393206678180772,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "A Principled Way of Assessing Visualization Literacy",
                "DOI": "10.1109/tvcg.2014.2346984",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346984",
                "FirstPage": 1963,
                "LastPage": 1972,
                "PaperType": "J",
                "Abstract": "We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.",
                "AuthorNamesDeduped": "Jeremy Boy;Ronald A. Rensink;Enrico Bertini;Jean-Daniel Fekete",
                "AuthorNames": "Jeremy Boy;Ronald A. Rensink;Enrico Bertini;Jean-Daniel Fekete",
                "AuthorAffiliation": "Inria, Telecom ParisTech, EnsadLab;University of British Columbia;NYU Polytechnic School of Engineering;Inria",
                "InternalReferences": "0.1109/tvcg.2011.160",
                "AuthorKeywords": "Literacy, Visualization literacy, Rasch Model, Item Response Theory",
                "AminerCitationCount": 237,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 2788,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1167,
                "i": [
                    1167
                ]
            }
        },
        {
            "name": "Enrico Bertini",
            "value": 788,
            "numPapers": 118,
            "cluster": "1",
            "visible": 1,
            "index": 60,
            "x": 67.67494002540472,
            "y": -38.34191560887216,
            "vy": 0,
            "vx": 0,
            "r": 1.9073114565342544,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "A Principled Way of Assessing Visualization Literacy",
                "DOI": "10.1109/tvcg.2014.2346984",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346984",
                "FirstPage": 1963,
                "LastPage": 1972,
                "PaperType": "J",
                "Abstract": "We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.",
                "AuthorNamesDeduped": "Jeremy Boy;Ronald A. Rensink;Enrico Bertini;Jean-Daniel Fekete",
                "AuthorNames": "Jeremy Boy;Ronald A. Rensink;Enrico Bertini;Jean-Daniel Fekete",
                "AuthorAffiliation": "Inria, Telecom ParisTech, EnsadLab;University of British Columbia;NYU Polytechnic School of Engineering;Inria",
                "InternalReferences": "0.1109/tvcg.2011.160",
                "AuthorKeywords": "Literacy, Visualization literacy, Rasch Model, Item Response Theory",
                "AminerCitationCount": 237,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 2788,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1167,
                "i": [
                    1167
                ]
            }
        },
        {
            "name": "Jo Wood",
            "value": 562,
            "numPapers": 112,
            "cluster": "5",
            "visible": 1,
            "index": 61,
            "x": -24.199351776809383,
            "y": 74.59484817051532,
            "vy": 0,
            "vx": 0,
            "r": 1.6470926885434658,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations",
                "DOI": "10.1109/tvcg.2019.2934790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934790",
                "FirstPage": 109,
                "LastPage": 118,
                "PaperType": "J",
                "Abstract": "While previous work exists on how to conduct and disseminate insights from problem-driven visualization projects and design studies, the literature does not address how to accomplish these goals in transdisciplinary teams in ways that advance all disciplines involved. In this paper we introduce and define a new methodological paradigm we call design by immersion, which provides an alternative perspective on problem-driven visualization work. Design by immersion embeds transdisciplinary experiences at the center of the visualization process by having visualization researchers participate in the work of the target domain (or domain experts participate in visualization research). Based on our own combined experiences of working on cross-disciplinary, problem-driven visualization projects, we present six case studies that expose the opportunities that design by immersion enables, including (1) exploring new domain-inspired visualization design spaces, (2) enriching domain understanding through personal experiences, and (3) building strong transdisciplinary relationships. Furthermore, we illustrate how the process of design by immersion opens up a diverse set of design activities that can be combined in different ways depending on the type of collaboration, project, and goals. Finally, we discuss the challenges and potential pitfalls of design by immersion.",
                "AuthorNamesDeduped": "Kyle Wm. Hall;Adam James Bradley;Uta Hinrichs;Samuel Huron;Jo Wood;Christopher Collins 0001;Sheelagh Carpendale",
                "AuthorNames": "Kyle Wm. Hall;Adam J. Bradley;Uta Hinrichs;Samuel Huron;Jo Wood;Christopher Collins;Sheelagh Carpendale",
                "AuthorAffiliation": "Temple University, Philadelphia, USA;Ontario Tech University, Oshawa, Canada;University of St Andrews, Fife, United Kingdom;Télécom Paristech, Université Paris-Saclay, Paris, France;University of London, London, United Kingdom;Ontario Tech University, Oshawa, Canada;University of Calgary, Calgary, Canada and Simon Fraser University, Burnaby, Canada",
                "InternalReferences": "0.1109/tvcg.2009.122;10.1109/tvcg.2006.160;10.1109/tvcg.2015.2467452;10.1109/tvcg.2018.2865241;10.1109/tvcg.2014.2346325;10.1109/tvcg.2011.209;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467271;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2346323",
                "AuthorKeywords": "Visualization,problem-driven,design studies,collaboration,methodology,framework",
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1323,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 534,
                "i": [
                    534
                ]
            }
        },
        {
            "name": "Christopher Collins 0001",
            "value": 857,
            "numPapers": 157,
            "cluster": "5",
            "visible": 1,
            "index": 62,
            "x": -32.80776754384704,
            "y": -71.92809179165603,
            "vy": 0,
            "vx": 0,
            "r": 1.9867587795048935,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations",
                "DOI": "10.1109/tvcg.2019.2934790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934790",
                "FirstPage": 109,
                "LastPage": 118,
                "PaperType": "J",
                "Abstract": "While previous work exists on how to conduct and disseminate insights from problem-driven visualization projects and design studies, the literature does not address how to accomplish these goals in transdisciplinary teams in ways that advance all disciplines involved. In this paper we introduce and define a new methodological paradigm we call design by immersion, which provides an alternative perspective on problem-driven visualization work. Design by immersion embeds transdisciplinary experiences at the center of the visualization process by having visualization researchers participate in the work of the target domain (or domain experts participate in visualization research). Based on our own combined experiences of working on cross-disciplinary, problem-driven visualization projects, we present six case studies that expose the opportunities that design by immersion enables, including (1) exploring new domain-inspired visualization design spaces, (2) enriching domain understanding through personal experiences, and (3) building strong transdisciplinary relationships. Furthermore, we illustrate how the process of design by immersion opens up a diverse set of design activities that can be combined in different ways depending on the type of collaboration, project, and goals. Finally, we discuss the challenges and potential pitfalls of design by immersion.",
                "AuthorNamesDeduped": "Kyle Wm. Hall;Adam James Bradley;Uta Hinrichs;Samuel Huron;Jo Wood;Christopher Collins 0001;Sheelagh Carpendale",
                "AuthorNames": "Kyle Wm. Hall;Adam J. Bradley;Uta Hinrichs;Samuel Huron;Jo Wood;Christopher Collins;Sheelagh Carpendale",
                "AuthorAffiliation": "Temple University, Philadelphia, USA;Ontario Tech University, Oshawa, Canada;University of St Andrews, Fife, United Kingdom;Télécom Paristech, Université Paris-Saclay, Paris, France;University of London, London, United Kingdom;Ontario Tech University, Oshawa, Canada;University of Calgary, Calgary, Canada and Simon Fraser University, Burnaby, Canada",
                "InternalReferences": "0.1109/tvcg.2009.122;10.1109/tvcg.2006.160;10.1109/tvcg.2015.2467452;10.1109/tvcg.2018.2865241;10.1109/tvcg.2014.2346325;10.1109/tvcg.2011.209;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467271;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2346323",
                "AuthorKeywords": "Visualization,problem-driven,design studies,collaboration,methodology,framework",
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1323,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 534,
                "i": [
                    534
                ]
            }
        },
        {
            "name": "Miriah D. Meyer",
            "value": 1049,
            "numPapers": 94,
            "cluster": "5",
            "visible": 1,
            "index": 63,
            "x": 73.35806941917411,
            "y": 31.122237244318303,
            "vy": 0,
            "vx": 0,
            "r": 2.2078295912492805,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Framework for Creative Visualization-Opportunities Workshops",
                "DOI": "10.1109/tvcg.2018.2865241",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865241",
                "FirstPage": 748,
                "LastPage": 758,
                "PaperType": "J",
                "Abstract": "Applied visualization researchers often work closely with domain collaborators to explore new and useful applications of visualization. The early stages of collaborations are typically time consuming for all stakeholders as researchers piece together an understanding of domain challenges from disparate discussions and meetings. A number of recent projects, however, report on the use of creative visualization-opportunities (CVO) workshops to accelerate the early stages of applied work, eliciting a wealth of requirements in a few days of focused work. Yet, there is no established guidance for how to use such workshops effectively. In this paper, we present the results of a 2-year collaboration in which we analyzed the use of 17 workshops in 10 visualization contexts. Its primary contribution is a framework for CVO workshops that: 1) identifies a process model for using workshops; 2) describes a structure of what happens within effective workshops; 3) recommends 25 actionable guidelines for future workshops; and 4) presents an example workshop and workshop methods. The creation of this framework exemplifies the use of critical reflection to learn about visualization in practice from diverse studies and experience.",
                "AuthorNamesDeduped": "Ethan Kerzner;Sarah Goodwin;Jason Dykes;Sara Jones 0001;Miriah D. Meyer",
                "AuthorNames": "Ethan Kerzner;Sarah Goodwin;Jason Dykes;Sara Jones;Miriah Meyer",
                "AuthorAffiliation": "University of Utah, Salt Lake City, UT, US;Monash University, Clayton, VIC, AU;University of London, London, London, GB;University of London, London, London, GB;University of Utah, Salt Lake City, UT, US",
                "InternalReferences": "0.1109/tvcg.2010.191;10.1109/tvcg.2013.145;10.1109/tvcg.2016.2598545;10.1109/tvcg.2016.2599338;10.1109/tvcg.2011.209;10.1109/tvcg.2017.2744459;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2012.213;10.1109/tvcg.2013.132;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "User-centered visualization design,design studies,creativity workshops,critically reflective practice",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 90,
                "DownloadsXplore": 1442,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 661,
                "i": [
                    661
                ]
            }
        },
        {
            "name": "Sarah Goodwin",
            "value": 247,
            "numPapers": 51,
            "cluster": "5",
            "visible": 1,
            "index": 64,
            "x": -75.7038706749832,
            "y": 26.81275750133545,
            "vy": 0,
            "vx": 0,
            "r": 1.284398388025331,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics",
                "DOI": "10.1109/tvcg.2020.3030334",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030334",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative’ scenarios will require engaging features to draw users' attention, with intuitive, 'walk-up and use’ interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.",
                "AuthorNamesDeduped": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorNames": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2599107;10.1109/vast.2007.4389011;10.1109/vast.2010.5652880;10.1109/tvcg.2018.2865241;10.1109/vast.2007.4389006;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934538;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Data visualisation,tangible and embedded interaction,augmented reality,immersive analytics",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1968,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 460,
                "i": [
                    460
                ]
            }
        },
        {
            "name": "Sara Jones 0001",
            "value": 109,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 65,
            "x": 38.00112135473939,
            "y": -71.45568399912193,
            "vy": 0,
            "vx": 0,
            "r": 1.125503742084053,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Framework for Creative Visualization-Opportunities Workshops",
                "DOI": "10.1109/tvcg.2018.2865241",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865241",
                "FirstPage": 748,
                "LastPage": 758,
                "PaperType": "J",
                "Abstract": "Applied visualization researchers often work closely with domain collaborators to explore new and useful applications of visualization. The early stages of collaborations are typically time consuming for all stakeholders as researchers piece together an understanding of domain challenges from disparate discussions and meetings. A number of recent projects, however, report on the use of creative visualization-opportunities (CVO) workshops to accelerate the early stages of applied work, eliciting a wealth of requirements in a few days of focused work. Yet, there is no established guidance for how to use such workshops effectively. In this paper, we present the results of a 2-year collaboration in which we analyzed the use of 17 workshops in 10 visualization contexts. Its primary contribution is a framework for CVO workshops that: 1) identifies a process model for using workshops; 2) describes a structure of what happens within effective workshops; 3) recommends 25 actionable guidelines for future workshops; and 4) presents an example workshop and workshop methods. The creation of this framework exemplifies the use of critical reflection to learn about visualization in practice from diverse studies and experience.",
                "AuthorNamesDeduped": "Ethan Kerzner;Sarah Goodwin;Jason Dykes;Sara Jones 0001;Miriah D. Meyer",
                "AuthorNames": "Ethan Kerzner;Sarah Goodwin;Jason Dykes;Sara Jones;Miriah Meyer",
                "AuthorAffiliation": "University of Utah, Salt Lake City, UT, US;Monash University, Clayton, VIC, AU;University of London, London, London, GB;University of London, London, London, GB;University of Utah, Salt Lake City, UT, US",
                "InternalReferences": "0.1109/tvcg.2010.191;10.1109/tvcg.2013.145;10.1109/tvcg.2016.2598545;10.1109/tvcg.2016.2599338;10.1109/tvcg.2011.209;10.1109/tvcg.2017.2744459;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2012.213;10.1109/tvcg.2013.132;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "User-centered visualization design,design studies,creativity workshops,critically reflective practice",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 90,
                "DownloadsXplore": 1442,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 661,
                "i": [
                    661
                ]
            }
        },
        {
            "name": "Sukwon Lee",
            "value": 147,
            "numPapers": 31,
            "cluster": "5",
            "visible": 1,
            "index": 66,
            "x": 20.400746894112622,
            "y": 78.95447755613581,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "VLAT: Development of a Visualization Literacy Assessment Test",
                "DOI": "10.1109/tvcg.2016.2598920",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598920",
                "FirstPage": 551,
                "LastPage": 560,
                "PaperType": "J",
                "Abstract": "The Information Visualization community has begun to pay attention to visualization literacy; however, researchers still lack instruments for measuring the visualization literacy of users. In order to address this gap, we systematically developed a visualization literacy assessment test (VLAT), especially for non-expert users in data visualization, by following the established procedure of test development in Psychological and Educational Measurement: (1) Test Blueprint Construction, (2) Test Item Generation, (3) Content Validity Evaluation, (4) Test Tryout and Item Analysis, (5) Test Item Selection, and (6) Reliability Evaluation. The VLAT consists of 12 data visualizations and 53 multiple-choice test items that cover eight data visualization tasks. The test items in the VLAT were evaluated with respect to their essentialness by five domain experts in Information Visualization and Visual Analytics (average content validity ratio = 0.66). The VLAT was also tried out on a sample of 191 test takers and showed high reliability (reliability coefficient omega = 0.76). In addition, we demonstrated the relationship between users' visualization literacy and aptitude for learning an unfamiliar visualization and showed that they had a fairly high positive relationship (correlation coefficient = 0.64). Finally, we discuss evidence for the validity of the VLAT and potential research areas that are related to the instrument.",
                "AuthorNamesDeduped": "Sukwon Lee;Sung-Hee Kim;Bum Chul Kwon",
                "AuthorNames": "Sukwon Lee;Sung-Hee Kim;Bum Chul Kwon",
                "AuthorAffiliation": "School of Industrial Engineering, Purdue University, West Lafayette, IN, USA;Samsung Electronics Co., Ltd., Seoul, South Korea;IBM T.J. Watson Research Center, Yorktown Heights, NY, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346419;10.1109/tvcg.2014.2346481;10.1109/tvcg.2014.2346984;10.1109/visual.1991.175815;10.1109/tvcg.2007.70515;10.1109/tvcg.2015.2467195;10.1109/vast.2011.6102435;10.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467201",
                "AuthorKeywords": "Visualization Literacy;Assessment Test;Instrument;Measurement;Aptitude;Education",
                "AminerCitationCount": 156,
                "CitationCountCrossRef": 107,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3595,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 891,
                "i": [
                    891
                ]
            }
        },
        {
            "name": "Sung-Hee Kim",
            "value": 161,
            "numPapers": 36,
            "cluster": "5",
            "visible": 1,
            "index": 67,
            "x": -68.88804500491484,
            "y": -44.77094208748379,
            "vy": 0,
            "vx": 0,
            "r": 1.185377086931491,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "VLAT: Development of a Visualization Literacy Assessment Test",
                "DOI": "10.1109/tvcg.2016.2598920",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598920",
                "FirstPage": 551,
                "LastPage": 560,
                "PaperType": "J",
                "Abstract": "The Information Visualization community has begun to pay attention to visualization literacy; however, researchers still lack instruments for measuring the visualization literacy of users. In order to address this gap, we systematically developed a visualization literacy assessment test (VLAT), especially for non-expert users in data visualization, by following the established procedure of test development in Psychological and Educational Measurement: (1) Test Blueprint Construction, (2) Test Item Generation, (3) Content Validity Evaluation, (4) Test Tryout and Item Analysis, (5) Test Item Selection, and (6) Reliability Evaluation. The VLAT consists of 12 data visualizations and 53 multiple-choice test items that cover eight data visualization tasks. The test items in the VLAT were evaluated with respect to their essentialness by five domain experts in Information Visualization and Visual Analytics (average content validity ratio = 0.66). The VLAT was also tried out on a sample of 191 test takers and showed high reliability (reliability coefficient omega = 0.76). In addition, we demonstrated the relationship between users' visualization literacy and aptitude for learning an unfamiliar visualization and showed that they had a fairly high positive relationship (correlation coefficient = 0.64). Finally, we discuss evidence for the validity of the VLAT and potential research areas that are related to the instrument.",
                "AuthorNamesDeduped": "Sukwon Lee;Sung-Hee Kim;Bum Chul Kwon",
                "AuthorNames": "Sukwon Lee;Sung-Hee Kim;Bum Chul Kwon",
                "AuthorAffiliation": "School of Industrial Engineering, Purdue University, West Lafayette, IN, USA;Samsung Electronics Co., Ltd., Seoul, South Korea;IBM T.J. Watson Research Center, Yorktown Heights, NY, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346419;10.1109/tvcg.2014.2346481;10.1109/tvcg.2014.2346984;10.1109/visual.1991.175815;10.1109/tvcg.2007.70515;10.1109/tvcg.2015.2467195;10.1109/vast.2011.6102435;10.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467201",
                "AuthorKeywords": "Visualization Literacy;Assessment Test;Instrument;Measurement;Aptitude;Education",
                "AminerCitationCount": 156,
                "CitationCountCrossRef": 107,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3595,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 891,
                "i": [
                    891
                ]
            }
        },
        {
            "name": "Jock D. Mackinlay",
            "value": 761,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 68,
            "x": 81.63631358910243,
            "y": -13.620290143081709,
            "vy": 0,
            "vx": 0,
            "r": 1.8762233736327,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Show Me: Automatic Presentation for Visual Analysis",
                "DOI": "10.1109/tvcg.2007.70594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70594",
                "FirstPage": 1137,
                "LastPage": 1144,
                "PaperType": "J",
                "Abstract": "This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.",
                "AuthorNamesDeduped": "Jock D. Mackinlay;Pat Hanrahan;Chris Stolte",
                "AuthorNames": "Jock Mackinlay;Pat Hanrahan;Chris Stolte",
                "AuthorAffiliation": "Tableau Software, USA;Stanford University and Tableau Software, USA;Tableau Software, USA",
                "InternalReferences": "0.1109/infvis.2000.885086",
                "AuthorKeywords": "Automatic presentation, visual analysis, graphic design, best practices, data visualization, small multiples",
                "AminerCitationCount": 570,
                "CitationCountCrossRef": 311,
                "PubsCitedCrossRef": 12,
                "DownloadsXplore": 4002,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2088,
                "i": [
                    2088
                ]
            }
        },
        {
            "name": "Pat Hanrahan",
            "value": 911,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 69,
            "x": -51.36658589269826,
            "y": 65.66181427380795,
            "vy": 0,
            "vx": 0,
            "r": 2.0489349453080026,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Show Me: Automatic Presentation for Visual Analysis",
                "DOI": "10.1109/tvcg.2007.70594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70594",
                "FirstPage": 1137,
                "LastPage": 1144,
                "PaperType": "J",
                "Abstract": "This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.",
                "AuthorNamesDeduped": "Jock D. Mackinlay;Pat Hanrahan;Chris Stolte",
                "AuthorNames": "Jock Mackinlay;Pat Hanrahan;Chris Stolte",
                "AuthorAffiliation": "Tableau Software, USA;Stanford University and Tableau Software, USA;Tableau Software, USA",
                "InternalReferences": "0.1109/infvis.2000.885086",
                "AuthorKeywords": "Automatic presentation, visual analysis, graphic design, best practices, data visualization, small multiples",
                "AminerCitationCount": 570,
                "CitationCountCrossRef": 311,
                "PubsCitedCrossRef": 12,
                "DownloadsXplore": 4002,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2088,
                "i": [
                    2088
                ]
            }
        },
        {
            "name": "Chris Stolte",
            "value": 721,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 70,
            "x": -6.524232846318406,
            "y": -83.71041981597642,
            "vy": 0,
            "vx": 0,
            "r": 1.8301669545192862,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Show Me: Automatic Presentation for Visual Analysis",
                "DOI": "10.1109/tvcg.2007.70594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70594",
                "FirstPage": 1137,
                "LastPage": 1144,
                "PaperType": "J",
                "Abstract": "This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.",
                "AuthorNamesDeduped": "Jock D. Mackinlay;Pat Hanrahan;Chris Stolte",
                "AuthorNames": "Jock Mackinlay;Pat Hanrahan;Chris Stolte",
                "AuthorAffiliation": "Tableau Software, USA;Stanford University and Tableau Software, USA;Tableau Software, USA",
                "InternalReferences": "0.1109/infvis.2000.885086",
                "AuthorKeywords": "Automatic presentation, visual analysis, graphic design, best practices, data visualization, small multiples",
                "AminerCitationCount": 570,
                "CitationCountCrossRef": 311,
                "PubsCitedCrossRef": 12,
                "DownloadsXplore": 4002,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2088,
                "i": [
                    2088
                ]
            }
        },
        {
            "name": "Dominik Moritz",
            "value": 832,
            "numPapers": 105,
            "cluster": "5",
            "visible": 1,
            "index": 71,
            "x": 61.78996149066672,
            "y": 57.72348446673958,
            "vy": 0,
            "vx": 0,
            "r": 1.9579735175590098,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Vega-Lite: A Grammar of Interactive Graphics",
                "DOI": "10.1109/tvcg.2016.2599030",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599030",
                "FirstPage": 341,
                "LastPage": 350,
                "PaperType": "J",
                "Abstract": "We present Vega-Lite, a high-level grammar that enables rapid specification of interactive data visualizations. Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction. Users specify interactive semantics by composing selections. In Vega-Lite, a selection is an abstraction that defines input event processing, points of interest, and a predicate function for inclusion testing. Selections parameterize visual encodings by serving as input data, defining scale extents, or by driving conditional logic. The Vega-Lite compiler automatically synthesizes requisite data flow and event handling logic, which users can override for further customization. In contrast to existing reactive specifications, Vega-Lite selections decompose an interaction design into concise, enumerable semantic units. We evaluate Vega-Lite through a range of examples, demonstrating succinct specification of both customized interaction methods and common techniques such as panning, zooming, and linked selection.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Dominik Moritz;Kanit Wongsuphasawat;Jeffrey Heer",
                "AuthorNames": "Arvind Satyanarayan;Dominik Moritz;Kanit Wongsuphasawat;Jeffrey Heer",
                "AuthorAffiliation": "Stanford University;University of Washington;University of Washington;University of Washington",
                "InternalReferences": "0.1109/tvcg.2015.2467091;10.1109/tvcg.2009.174;10.1109/tvcg.2015.2467191;10.1109/tvcg.2014.2346260;10.1109/infvis.2000.885086;10.1109/tvcg.2007.70515;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Information visualization;interaction;systems;toolkits;declarative specification",
                "AminerCitationCount": 641,
                "CitationCountCrossRef": 449,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 6043,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 888,
                "i": [
                    888
                ]
            }
        },
        {
            "name": "Tamara Munzner",
            "value": 2256,
            "numPapers": 217,
            "cluster": "5",
            "visible": 1,
            "index": 72,
            "x": -85.14287924641151,
            "y": -0.8307307812929198,
            "vy": 0,
            "vx": 0,
            "r": 3.597582037996546,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "A Nested Model for Visualization Design and Validation",
                "DOI": "10.1109/tvcg.2009.111",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.111",
                "FirstPage": 921,
                "LastPage": 928,
                "PaperType": "J",
                "Abstract": "We present a nested model for the visualization design and validation with four layers: characterize the task and data in the vocabulary of the problem domain, abstract into operations and data types, design visual encoding and interaction techniques, and create algorithms to execute techniques efficiently. The output from a level above is input to the level below, bringing attention to the design challenge that an upstream error inevitably cascades to all downstream levels. This model provides prescriptive guidance for determining appropriate evaluation approaches by identifying threats to validity unique to each level. We also provide three recommendations motivated by this model: authors should distinguish between these levels when claiming contributions at more than one of them, authors should explicitly state upstream assumptions at levels above the focus of a paper, and visualization venues should accept more papers on domain characterization.",
                "AuthorNamesDeduped": "Tamara Munzner",
                "AuthorNames": "Tamara Munzner",
                "AuthorAffiliation": "University of British Columbia, Canada",
                "InternalReferences": "0.1109/vast.2007.4389008;10.1109/infvis.2005.1532136;10.1109/tvcg.2008.117;10.1109/tvcg.2006.160;10.1109/visual.1998.745289;10.1109/tvcg.2007.70515;10.1109/tvcg.2008.109;10.1109/visual.1992.235203;10.1109/infvis.2004.59;10.1109/infvis.2005.1532124;10.1109/infvis.1998.729560;10.1109/infvis.2004.10;10.1109/tvcg.2008.125;10.1109/infvis.1997.636792;10.1109/infvis.2005.1532150;10.1109/visual.1990.146375",
                "AuthorKeywords": "Models, frameworks, design, evaluation",
                "AminerCitationCount": 1035,
                "CitationCountCrossRef": 577,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 9220,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1814,
                "i": [
                    1814
                ]
            }
        },
        {
            "name": "Panagiotis D. Ritsos",
            "value": 67,
            "numPapers": 35,
            "cluster": "5",
            "visible": 1,
            "index": 73,
            "x": 63.778211166050426,
            "y": -57.29170778095798,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Sketching Designs Using the Five Design-Sheet Methodology",
                "DOI": "10.1109/tvcg.2015.2467271",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467271",
                "FirstPage": 419,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time, money and converge to better solutions more quickly. However, this design process is often viewed to be too informal. Consequently users do not know how to manage their thoughts and ideas (to first think divergently, to then finally converge on a suitable solution). We present the Five Design Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities, think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principle designs (sheets 2,3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case-study of its use in industry and experience of its use in teaching.",
                "AuthorNamesDeduped": "Jonathan C. Roberts;Chris Headleand;Panagiotis D. Ritsos",
                "AuthorNames": "Jonathan C. Roberts;Chris Headleand;Panagiotis D. Ritsos",
                "AuthorAffiliation": "School of Computer Science, Bangor University;School of Computer Science, Bangor University;Department of Computer Science, University of Chester",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/infvis.2000.885092;10.1109/tvcg.2006.178;10.1109/visual.1994.346304;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/infvis.2004.59;10.1109/tvcg.2012.262;10.1109/tvcg.2007.70515;10.1109/tvcg.2008.171",
                "AuthorKeywords": "Lo-fidelity prototyping, User-centred design, Sketching for visualization, Ideation",
                "AminerCitationCount": 116,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 3448,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1013,
                "i": [
                    1013
                ]
            }
        },
        {
            "name": "Bahador Saket",
            "value": 133,
            "numPapers": 37,
            "cluster": "5",
            "visible": 1,
            "index": 74,
            "x": -8.384537640848643,
            "y": 85.90517754215513,
            "vy": 0,
            "vx": 0,
            "r": 1.1531375935521013,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Investigating Direct Manipulation of Graphical Encodings as a Method for User Interaction",
                "DOI": "10.1109/tvcg.2019.2934534",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934534",
                "FirstPage": 482,
                "LastPage": 491,
                "PaperType": "J",
                "Abstract": "We investigate direct manipulation of graphical encodings as a method for interacting with visualizations. There is an increasing interest in developing visualization tools that enable users to perform operations by directly manipulating graphical encodings rather than external widgets such as checkboxes and sliders. Designers of such tools must decide which direct manipulation operations should be supported, and identify how each operation can be invoked. However, we lack empirical guidelines for how people convey their intended operations using direct manipulation of graphical encodings. We address this issue by conducting a qualitative study that examines how participants perform 15 operations using direct manipulation of standard graphical encodings. From this study, we 1) identify a list of strategies people employ to perform each operation, 2) observe commonalities in strategies across operations, and 3) derive implications to help designers leverage direct manipulation of graphical encoding as a method for user interaction.",
                "AuthorNamesDeduped": "Bahador Saket;Samuel Huron;Charles Perin;Alex Endert",
                "AuthorNames": "Bahador Saket;Samuel Huron;Charles Perin;Alex Endert",
                "AuthorAffiliation": "Georgia Tech;University Paris Saclay;University of Victoria;Georgia Tech",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346292;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346250;10.1109/tvcg.2012.204;10.1109/tvcg.2014.2346279;10.1109/tvcg.2014.2346291;10.1109/tvcg.2016.2598839;10.1109/tvcg.2018.2865075;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2745258",
                "AuthorKeywords": "Direct Manipulation,Data Visualization",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 551,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 567,
                "i": [
                    567
                ]
            }
        },
        {
            "name": "Eli T. Brown",
            "value": 385,
            "numPapers": 38,
            "cluster": "4",
            "visible": 1,
            "index": 75,
            "x": -52.19241383102709,
            "y": -69.46907181250381,
            "vy": 0,
            "vx": 0,
            "r": 1.443293033966609,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Visualization by Demonstration: An Interaction Paradigm for Visual Data Exploration",
                "DOI": "10.1109/tvcg.2016.2598839",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598839",
                "FirstPage": 331,
                "LastPage": 340,
                "PaperType": "J",
                "Abstract": "Although data visualization tools continue to improve, during the data exploration process many of them require users to manually specify visualization techniques, mappings, and parameters. In response, we present the Visualization by Demonstration paradigm, a novel interaction method for visual data exploration. A system which adopts this paradigm allows users to provide visual demonstrations of incremental changes to the visual representation. The system then recommends potential transformations (Visual Representation, Data Mapping, Axes, and View Specification transformations) from the given demonstrations. The user and the system continue to collaborate, incrementally producing more demonstrations and refining the transformations, until the most effective possible visualization is created. As a proof of concept, we present VisExemplar, a mixed-initiative prototype that allows users to explore their data by recommending appropriate transformations in response to the given demonstrations.",
                "AuthorNamesDeduped": "Bahador Saket;Hannah Kim 0001;Eli T. Brown;Alex Endert",
                "AuthorNames": "Bahador Saket;Hannah Kim;Eli T. Brown;Alex Endert",
                "AuthorAffiliation": "Georgia Institute of Technology;Georgia Institute of Technology;DePaul University;Georgia Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/tvcg.2015.2467191;10.1109/tvcg.2007.70594;10.1109/vast.2011.6102449;10.1109/tvcg.2007.70515;10.1109/tvcg.2014.2346250;10.1109/tvcg.2012.275;10.1109/tvcg.2015.2467153;10.1109/tvcg.2013.191;10.1109/tvcg.2011.251;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346291;10.1109/vast.2012.6400486",
                "AuthorKeywords": "Visual Data Exploration;Visualization by Demonstration;Visualization Tools",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 900,
                "i": [
                    900
                ]
            }
        },
        {
            "name": "Hannah Kim 0001",
            "value": 194,
            "numPapers": 44,
            "cluster": "4",
            "visible": 1,
            "index": 76,
            "x": 85.97451880838744,
            "y": 16.074268744370073,
            "vy": 0,
            "vx": 0,
            "r": 1.2233736327000575,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Visualization by Demonstration: An Interaction Paradigm for Visual Data Exploration",
                "DOI": "10.1109/tvcg.2016.2598839",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598839",
                "FirstPage": 331,
                "LastPage": 340,
                "PaperType": "J",
                "Abstract": "Although data visualization tools continue to improve, during the data exploration process many of them require users to manually specify visualization techniques, mappings, and parameters. In response, we present the Visualization by Demonstration paradigm, a novel interaction method for visual data exploration. A system which adopts this paradigm allows users to provide visual demonstrations of incremental changes to the visual representation. The system then recommends potential transformations (Visual Representation, Data Mapping, Axes, and View Specification transformations) from the given demonstrations. The user and the system continue to collaborate, incrementally producing more demonstrations and refining the transformations, until the most effective possible visualization is created. As a proof of concept, we present VisExemplar, a mixed-initiative prototype that allows users to explore their data by recommending appropriate transformations in response to the given demonstrations.",
                "AuthorNamesDeduped": "Bahador Saket;Hannah Kim 0001;Eli T. Brown;Alex Endert",
                "AuthorNames": "Bahador Saket;Hannah Kim;Eli T. Brown;Alex Endert",
                "AuthorAffiliation": "Georgia Institute of Technology;Georgia Institute of Technology;DePaul University;Georgia Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/tvcg.2015.2467191;10.1109/tvcg.2007.70594;10.1109/vast.2011.6102449;10.1109/tvcg.2007.70515;10.1109/tvcg.2014.2346250;10.1109/tvcg.2012.275;10.1109/tvcg.2015.2467153;10.1109/tvcg.2013.191;10.1109/tvcg.2011.251;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346291;10.1109/vast.2012.6400486",
                "AuthorKeywords": "Visual Data Exploration;Visualization by Demonstration;Visualization Tools",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 900,
                "i": [
                    900
                ]
            }
        },
        {
            "name": "Arvind Satyanarayan",
            "value": 613,
            "numPapers": 106,
            "cluster": "5",
            "visible": 1,
            "index": 77,
            "x": -74.73668467093094,
            "y": 46.52341307769494,
            "vy": 0,
            "vx": 0,
            "r": 1.7058146229130684,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Critical Reflections on Visualization Authoring Systems",
                "DOI": "10.1109/tvcg.2019.2934281",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934281",
                "FirstPage": 461,
                "LastPage": 471,
                "PaperType": "J",
                "Abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed —Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John T. Stasko;John Thompson 0002;Matthew Brehmer;Zhicheng Liu 0001",
                "AuthorNames": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John Stasko;John Thompson;Matthew Brehmer;Zhicheng Liu",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Microsoft Research;University of California, Santa Barbara;University of Washington;Georgia Institute of Technology;Georgia Institute of Technology;Microsoft Research;Adobe Research",
                "InternalReferences": "0.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Critical reflection,visualization authoring,expressivity,learnability,reusability",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 529,
                "i": [
                    529
                ]
            }
        },
        {
            "name": "Xingyu Lan",
            "value": 58,
            "numPapers": 34,
            "cluster": "5",
            "visible": 1,
            "index": 78,
            "x": 23.83469124336915,
            "y": -85.33409338203144,
            "vy": 0,
            "vx": 0,
            "r": 1.0667818077144502,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Affective Visualization Design: Leveraging the Emotional Impact of Data",
                "DOI": "10.1109/tvcg.2023.3327385",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327385",
                "FirstPage": 1,
                "LastPage": 11,
                "PaperType": "J",
                "Abstract": "In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, so far, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, first, we conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., why emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects in three dimensions, including design fields (where), design tasks (what), and design methods (how), to explore the design space of affective visualization design.",
                "AuthorNamesDeduped": "Xingyu Lan;Yanqiu Wu 0001;Nan Cao 0001",
                "AuthorNames": "Xingyu Lan;Yanqiu Wu;Nan Cao",
                "AuthorAffiliation": "Fudan University, Research Group of Computational and AI Communication at Institute for Global Communications and Integrated Media, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "10.1109/tvcg.2021.3114775;10.1109/tvcg.2020.3030435;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209457;10.1109/tvcg.2020.3030472;10.1109/tvcg.2010.179;10.1109/tvcg.2022.3209409;10.1109/infvis.2004.8;10.1109/tvcg.2009.171;10.1109/tvcg.2021.3114774;10.1109/tvcg.2019.2934656",
                "AuthorKeywords": "Information Visualization,Affective Design,Visual Communication,User Experience,Storytelling",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 95,
                "DownloadsXplore": 848,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 4,
                "i": [
                    4
                ]
            }
        },
        {
            "name": "Yang Shi 0007",
            "value": 212,
            "numPapers": 109,
            "cluster": "5",
            "visible": 1,
            "index": 79,
            "x": 40.321791005107514,
            "y": 79.52454445100851,
            "vy": 0,
            "vx": 0,
            "r": 1.2440990213010938,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kineticharts: Augmenting Affective Expressiveness of Charts in Data Stories with Animation Design",
                "DOI": "10.1109/tvcg.2021.3114775",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114775",
                "FirstPage": 933,
                "LastPage": 943,
                "PaperType": "J",
                "Abstract": "Data stories often seek to elicit affective feelings from viewers. However, how to design affective data stories remains under-explored. In this work, we investigate one specific design factor, animation, and present Kineticharts, an animation design scheme for creating charts that express five positive affects: joy, amusement, surprise, tenderness, and excitement. These five affects were found to be frequently communicated through animation in data stories. Regarding each affect, we designed varied kinetic motions represented by bar charts, line charts, and pie charts, resulting in 60 animated charts for the five affects. We designed Kineticharts by first conducting a need-finding study with professional practitioners from data journalism and then analyzing a corpus of affective motion graphics to identify salient kinetic patterns. We evaluated Kineticharts through two user studies. The results suggest that Kineticharts can accurately convey affects, and improve the expressiveness of data stories, as well as enhance user engagement without hindering data comprehension compared to the animation design from DataClips, an authoring tool for data videos.",
                "AuthorNamesDeduped": "Xingyu Lan;Yang Shi 0007;Yanqiu Wu 0001;Xiaohan Jiao;Nan Cao 0001",
                "AuthorNames": "Xingyu Lan;Yang Shi;Yanqiu Wu;Xiaohan Jiao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934397;10.1109/tvcg.2019.2934288;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2008.125;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Animation,Storytelling,Affective Design",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 98,
                "DownloadsXplore": 1239,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 280,
                "i": [
                    280
                ]
            }
        },
        {
            "name": "Nan Cao 0001",
            "value": 916,
            "numPapers": 272,
            "cluster": "5",
            "visible": 1,
            "index": 80,
            "x": -83.97329492050187,
            "y": -31.5988249812302,
            "vy": 0,
            "vx": 0,
            "r": 2.054691997697179,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kineticharts: Augmenting Affective Expressiveness of Charts in Data Stories with Animation Design",
                "DOI": "10.1109/tvcg.2021.3114775",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114775",
                "FirstPage": 933,
                "LastPage": 943,
                "PaperType": "J",
                "Abstract": "Data stories often seek to elicit affective feelings from viewers. However, how to design affective data stories remains under-explored. In this work, we investigate one specific design factor, animation, and present Kineticharts, an animation design scheme for creating charts that express five positive affects: joy, amusement, surprise, tenderness, and excitement. These five affects were found to be frequently communicated through animation in data stories. Regarding each affect, we designed varied kinetic motions represented by bar charts, line charts, and pie charts, resulting in 60 animated charts for the five affects. We designed Kineticharts by first conducting a need-finding study with professional practitioners from data journalism and then analyzing a corpus of affective motion graphics to identify salient kinetic patterns. We evaluated Kineticharts through two user studies. The results suggest that Kineticharts can accurately convey affects, and improve the expressiveness of data stories, as well as enhance user engagement without hindering data comprehension compared to the animation design from DataClips, an authoring tool for data videos.",
                "AuthorNamesDeduped": "Xingyu Lan;Yang Shi 0007;Yanqiu Wu 0001;Xiaohan Jiao;Nan Cao 0001",
                "AuthorNames": "Xingyu Lan;Yang Shi;Yanqiu Wu;Xiaohan Jiao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934397;10.1109/tvcg.2019.2934288;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2008.125;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Animation,Storytelling,Affective Design",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 98,
                "DownloadsXplore": 1239,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 280,
                "i": [
                    280
                ]
            }
        },
        {
            "name": "Yanqiu Wu 0001",
            "value": 33,
            "numPapers": 43,
            "cluster": "5",
            "visible": 1,
            "index": 81,
            "x": 83.7795653009197,
            "y": -33.63011207220298,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kineticharts: Augmenting Affective Expressiveness of Charts in Data Stories with Animation Design",
                "DOI": "10.1109/tvcg.2021.3114775",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114775",
                "FirstPage": 933,
                "LastPage": 943,
                "PaperType": "J",
                "Abstract": "Data stories often seek to elicit affective feelings from viewers. However, how to design affective data stories remains under-explored. In this work, we investigate one specific design factor, animation, and present Kineticharts, an animation design scheme for creating charts that express five positive affects: joy, amusement, surprise, tenderness, and excitement. These five affects were found to be frequently communicated through animation in data stories. Regarding each affect, we designed varied kinetic motions represented by bar charts, line charts, and pie charts, resulting in 60 animated charts for the five affects. We designed Kineticharts by first conducting a need-finding study with professional practitioners from data journalism and then analyzing a corpus of affective motion graphics to identify salient kinetic patterns. We evaluated Kineticharts through two user studies. The results suggest that Kineticharts can accurately convey affects, and improve the expressiveness of data stories, as well as enhance user engagement without hindering data comprehension compared to the animation design from DataClips, an authoring tool for data videos.",
                "AuthorNamesDeduped": "Xingyu Lan;Yang Shi 0007;Yanqiu Wu 0001;Xiaohan Jiao;Nan Cao 0001",
                "AuthorNames": "Xingyu Lan;Yang Shi;Yanqiu Wu;Xiaohan Jiao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934397;10.1109/tvcg.2019.2934288;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2008.125;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Animation,Storytelling,Affective Design",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 98,
                "DownloadsXplore": 1239,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 280,
                "i": [
                    280
                ]
            }
        },
        {
            "name": "Xiaohan Jiao",
            "value": 46,
            "numPapers": 44,
            "cluster": "5",
            "visible": 1,
            "index": 82,
            "x": -39.29852881769943,
            "y": 81.88788452979138,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kineticharts: Augmenting Affective Expressiveness of Charts in Data Stories with Animation Design",
                "DOI": "10.1109/tvcg.2021.3114775",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114775",
                "FirstPage": 933,
                "LastPage": 943,
                "PaperType": "J",
                "Abstract": "Data stories often seek to elicit affective feelings from viewers. However, how to design affective data stories remains under-explored. In this work, we investigate one specific design factor, animation, and present Kineticharts, an animation design scheme for creating charts that express five positive affects: joy, amusement, surprise, tenderness, and excitement. These five affects were found to be frequently communicated through animation in data stories. Regarding each affect, we designed varied kinetic motions represented by bar charts, line charts, and pie charts, resulting in 60 animated charts for the five affects. We designed Kineticharts by first conducting a need-finding study with professional practitioners from data journalism and then analyzing a corpus of affective motion graphics to identify salient kinetic patterns. We evaluated Kineticharts through two user studies. The results suggest that Kineticharts can accurately convey affects, and improve the expressiveness of data stories, as well as enhance user engagement without hindering data comprehension compared to the animation design from DataClips, an authoring tool for data videos.",
                "AuthorNamesDeduped": "Xingyu Lan;Yang Shi 0007;Yanqiu Wu 0001;Xiaohan Jiao;Nan Cao 0001",
                "AuthorNames": "Xingyu Lan;Yang Shi;Yanqiu Wu;Xiaohan Jiao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934397;10.1109/tvcg.2019.2934288;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2008.125;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Animation,Storytelling,Affective Design",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 98,
                "DownloadsXplore": 1239,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 280,
                "i": [
                    280
                ]
            }
        },
        {
            "name": "Bongshin Lee",
            "value": 1146,
            "numPapers": 151,
            "cluster": "5",
            "visible": 1,
            "index": 83,
            "x": -26.496096265793614,
            "y": -87.45259791837975,
            "vy": 0,
            "vx": 0,
            "r": 2.319516407599309,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Critical Reflections on Visualization Authoring Systems",
                "DOI": "10.1109/tvcg.2019.2934281",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934281",
                "FirstPage": 461,
                "LastPage": 471,
                "PaperType": "J",
                "Abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed —Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John T. Stasko;John Thompson 0002;Matthew Brehmer;Zhicheng Liu 0001",
                "AuthorNames": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John Stasko;John Thompson;Matthew Brehmer;Zhicheng Liu",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Microsoft Research;University of California, Santa Barbara;University of Washington;Georgia Institute of Technology;Georgia Institute of Technology;Microsoft Research;Adobe Research",
                "InternalReferences": "0.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Critical reflection,visualization authoring,expressivity,learnability,reusability",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 529,
                "i": [
                    529
                ]
            }
        },
        {
            "name": "Steven Mark Drucker",
            "value": 335,
            "numPapers": 62,
            "cluster": "5",
            "visible": 1,
            "index": 84,
            "x": 79.08010012524043,
            "y": 46.86510177287518,
            "vy": 0,
            "vx": 0,
            "r": 1.3857225100748418,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality",
                "DOI": "10.1109/tvcg.2020.3030435",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030435",
                "FirstPage": 1095,
                "LastPage": 1105,
                "PaperType": "J",
                "Abstract": "A fundamental part of data visualization is transforming data to map abstract information onto visual attributes. While this abstraction is a powerful basis for data visualization, the connection between the representation and the original underlying data (i.e., what the quantities and measurements actually correspond with in reality) can be lost. On the other hand, virtual reality (VR) is being increasingly used to represent real and abstract models as natural experiences to users. In this work, we explore the potential of using VR to help restore the basic understanding of units and measures that are often abstracted away in data visualization in an approach we call data visceralization. By building VR prototypes as design probes, we identify key themes and factors for data visceralization. We do this first through a critical reflection by the authors, then by involving external participants. We find that data visceralization is an engaging way of understanding the qualitative aspects of physical measures and their real-life form, which complements analytical and quantitative understanding commonly gained from data visualization. However, data visceralization is most effective when there is a one-to-one mapping between data and representation, with transformations such as scaling affecting this understanding. We conclude with a discussion of future directions for data visceralization.",
                "AuthorNamesDeduped": "Benjamin Lee;Dave Brown;Bongshin Lee;Christophe Hurter;Steven Mark Drucker;Tim Dwyer",
                "AuthorNames": "Benjamin Lee;Dave Brown;Bongshin Lee;Christophe Hurter;Steven Drucker;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Microsoft Research;Microsoft Research;ENAC, French Civil Aviation University;Microsoft Research;Monash University",
                "InternalReferences": "0.1109/tvcg.2013.210;10.1109/infvis.1998.729560;10.1109/tvcg.2018.2865237;10.1109/tvcg.2010.179;10.1109/tvcg.2016.2598498;10.1109/visual.2001.964545",
                "AuthorKeywords": "Data visceralization,virtual reality,exploratory study",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1815,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 368,
                "i": [
                    368
                ]
            }
        },
        {
            "name": "Edward Segel",
            "value": 263,
            "numPapers": 3,
            "cluster": "5",
            "visible": 1,
            "index": 85,
            "x": -90.49891610470821,
            "y": 18.97224772853719,
            "vy": 0,
            "vx": 0,
            "r": 1.3028209556706967,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Narrative Visualization: Telling Stories with Data",
                "DOI": "10.1109/tvcg.2010.179",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.179",
                "FirstPage": 1139,
                "LastPage": 1148,
                "PaperType": "J",
                "Abstract": "Data visualization is regularly promoted for its ability to reveal stories within data, yet these “data stories” differ in important ways from traditional forms of storytelling. Storytellers, especially online journalists, have increasingly been integrating visualizations into their narratives, in some cases allowing the visualization to function in place of a written story. In this paper, we systematically review the design space of this emerging class of visualizations. Drawing on case studies from news media to visualization research, we identify distinct genres of narrative visualization. We characterize these design differences, together with interactivity and messaging, in terms of the balance between the narrative flow intended by the author (imposed by graphical elements and the interface) and story discovery on the part of the reader (often through interactive exploration). Our framework suggests design strategies for narrative visualization, including promising under-explored approaches to journalistic storytelling and educational media.",
                "AuthorNamesDeduped": "Edward Segel;Jeffrey Heer",
                "AuthorNames": "Edward Segel;Jeffrey Heer",
                "AuthorAffiliation": "University of Stanford, Stanford, CA, USA;University of Stanford, Stanford, CA, USA",
                "InternalReferences": "0.1109/tvcg.2007.70577;10.1109/tvcg.2007.70539;10.1109/tvcg.2008.137;10.1109/vast.2007.4388992",
                "AuthorKeywords": "Narrative visualization, storytelling, design methods, case study, journalism, social data analysis",
                "AminerCitationCount": 1408,
                "CitationCountCrossRef": 690,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 27481,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1684,
                "i": [
                    1684
                ]
            }
        },
        {
            "name": "Fernanda B. Viégas",
            "value": 863,
            "numPapers": 41,
            "cluster": "1",
            "visible": 1,
            "index": 86,
            "x": 54.22989387987045,
            "y": -75.55870968841374,
            "vy": 0,
            "vx": 0,
            "r": 1.9936672423719055,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Artifacts of the Presence Era: Using Information Visualization to Create an Evocative Souvenir",
                "DOI": "10.1109/infvis.2004.8",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.8",
                "FirstPage": 105,
                "LastPage": 111,
                "PaperType": "C",
                "Abstract": "We present Artifacts of the Presence Era, a digital installation that uses a geological metaphor to visualize the events in a physical space over time. The piece captures video and audio from a museum and constructs an impressionistic visualization of the evolving history in the space. Instead of creating a visualization tool for data analysis, we chose to produce a piece that functions as a souvenir of a particular time and place. We describe the design choices we made in creating this installation, the visualization techniques we developed, and the reactions we observed from users and the media. We suggest that the same approach can be applied to a more general set of visualization contexts, ranging from email archives to newsgroups conversations",
                "AuthorNamesDeduped": "Fernanda B. Viégas;Ethan Perry;Ethan Howe;Judith S. Donath",
                "AuthorNames": "F.B. Viegas;E. Perry;E. Howe;J. Donath",
                "AuthorAffiliation": "MIT Media Laboratory, USA;MIT Media Laboratory, USA;MIT Media Laboratory, USA;MIT Media Laboratory, USA",
                "InternalReferences": null,
                "AuthorKeywords": "visualization, history, public space",
                "AminerCitationCount": 60,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 341,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2470,
                "i": [
                    2470
                ]
            }
        },
        {
            "name": "Martin Wattenberg",
            "value": 1221,
            "numPapers": 49,
            "cluster": "1",
            "visible": 1,
            "index": 87,
            "x": 11.115438344501243,
            "y": 92.87866832814515,
            "vy": 0,
            "vx": 0,
            "r": 2.4058721934369602,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "Participatory Visualization with Wordle",
                "DOI": "10.1109/tvcg.2009.171",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.171",
                "FirstPage": 1137,
                "LastPage": 1144,
                "PaperType": "J",
                "Abstract": "We discuss the design and usage of ldquoWordle,rdquo a Web-based tool for visualizing text. Wordle creates tag-cloud-like displays that give careful attention to typography, color, and composition. We describe the algorithms used to balance various aesthetic criteria and create the distinctive Wordle layouts. We then present the results of a study of Wordle usage, based both on spontaneous behaviour observed in the wild, and on a large-scale survey of Wordle users. The results suggest that Wordles have become a kind of medium of expression, and that a ldquoparticipatory culturerdquo has arisen around them.",
                "AuthorNamesDeduped": "Fernanda B. Viégas;Martin Wattenberg;Jonathan Feinberg",
                "AuthorNames": "Fernanda B. Viegas;Martin Wattenberg;Jonathan Feinberg",
                "AuthorAffiliation": "IBM Research, USA;IBM Research, USA;IBM Research, USA",
                "InternalReferences": "0.1109/infvis.2005.1532122;10.1109/tvcg.2007.70577",
                "AuthorKeywords": "Visualization, text, tag cloud, participatory culture, memory, educational visualization, social data analysis",
                "AminerCitationCount": 534,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 3482,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1815,
                "i": [
                    1815
                ]
            }
        },
        {
            "name": "Shunan Guo",
            "value": 254,
            "numPapers": 62,
            "cluster": "3",
            "visible": 1,
            "index": 88,
            "x": -71.33900756564361,
            "y": -61.32492152093669,
            "vy": 0,
            "vx": 0,
            "r": 1.2924582613701785,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "A Design Space for Applying the Freytag's Pyramid Structure to Data Stories",
                "DOI": "10.1109/tvcg.2021.3114774",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114774",
                "FirstPage": 922,
                "LastPage": 932,
                "PaperType": "J",
                "Abstract": "Data stories integrate compelling visual content to communicate data insights in the form of narratives. The narrative structure of a data story serves as the backbone that determines its expressiveness, and it can largely influence how audiences perceive the insights. Freytag's Pyramid is a classic narrative structure that has been widely used in film and literature. While there are continuous recommendations and discussions about applying Freytag's Pyramid to data stories, little systematic and practical guidance is available on how to use Freytag's Pyramid for creating structured data stories. To bridge this gap, we examined how existing practices apply Freytag's Pyramid by analyzing stories extracted from 103 data videos. Based on our findings, we proposed a design space of narrative patterns, data flows, and visual communications to provide practical guidance on achieving narrative intents, organizing data facts, and selecting visual design techniques through story creation. We evaluated the proposed design space through a workshop with 25 participants. Results show that our design space provides a clear framework for rapid storyboarding of data stories with Freytag's Pyramid.",
                "AuthorNamesDeduped": "Leni Yang;Xian Xu;Xingyu Lan;Ziyan Liu;Shunan Guo;Yang Shi 0007;Huamin Qu;Nan Cao 0001",
                "AuthorNames": "Leni Yang;Xian Xu;XingYu Lan;Ziyan Liu;Shunan Guo;Yang Shi;Huamin Qu;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China and Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Adobe Research, USA;Intelligent Big Data Visualization Lab at Tongji University, China;Hong Kong University of Science and Technology, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2013.124;10.1109/tvcg.2011.175;10.1109/tvcg.2013.119;10.1109/tvcg.2015.2467195;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2020.3030396;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Freytag's Pyramid,Narrative Structure,Narrative Visualization,Data Storytelling,Data Video",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 2247,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 264,
                "i": [
                    264
                ]
            }
        },
        {
            "name": "Michael Krone",
            "value": 33,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 89,
            "x": 94.55729014169245,
            "y": -2.986449574292769,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ARGUS: Visualization of AI-Assisted Task Guidance in AR",
                "DOI": "10.1109/tvcg.2023.3327396",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327396",
                "FirstPage": 1313,
                "LastPage": 1323,
                "PaperType": "J",
                "Abstract": "The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant.",
                "AuthorNamesDeduped": "Sonia Castelo;João Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Irán R. Román;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He 0001;Qi Sun 0003;Huy T. Vo;Juan Pablo Bello;Michael Krone;Cláudio T. Silva",
                "AuthorNames": "Sonia Castelo;Joao Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Iran Roman;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He;Qi Sun;Huy Vo;Juan Bello;Michael Krone;Claudio Silva",
                "AuthorAffiliation": "New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York",
                "InternalReferences": "10.1109/tvcg.2017.2746018;10.1109/tvcg.2018.2865152;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Data Models,Image and Video Data,Temporal Data,Application Motivated Visualization,AR/VR/Immersive",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 467,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 5,
                "i": [
                    5
                ]
            }
        },
        {
            "name": "Maxime Cordeil",
            "value": 314,
            "numPapers": 66,
            "cluster": "3",
            "visible": 1,
            "index": 90,
            "x": -68.08348195242905,
            "y": 66.44275344409853,
            "vy": 0,
            "vx": 0,
            "r": 1.3615428900402993,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Shared Surfaces and Spaces: Collaborative Data Visualisation in a Co-located Immersive Environment",
                "DOI": "10.1109/tvcg.2020.3030450",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030450",
                "FirstPage": 1171,
                "LastPage": 1181,
                "PaperType": "J",
                "Abstract": "Immersive technologies offer new opportunities to support collaborative visual data analysis by providing each collaborator a personal, high-resolution view of a flexible shared visualisation space through a head mounted display. However, most prior studies of collaborative immersive analytics have focused on how groups interact with surface interfaces such as tabletops and wall displays. This paper reports on a study in which teams of three co-located participants are given flexible visualisation authoring tools to allow a great deal of control in how they structure their shared workspace. They do so using a prototype system we call FIESTA: the Free-roaming Immersive Environment to Support Team-based Analysis. Unlike traditional visualisation tools, FIESTA allows users to freely position authoring interfaces and visualisation artefacts anywhere in the virtual environment, either on virtual surfaces or suspended within the interaction space. Our participants solved visual analytics tasks on a multivariate data set, doing so individually and collaboratively by creating a large number of 2D and 3D visualisations. Their behaviours suggest that the usage of surfaces is coupled with the type of visualisation used, often using walls to organise 2D visualisations, but positioning 3D visualisations in the space around them. Outside of tightly-coupled collaboration, participants followed social protocols and did not interact with visualisations that did not belong to them even if outside of its owner's personal workspace.",
                "AuthorNamesDeduped": "Benjamin Lee;Xiaoyun Hu;Maxime Cordeil;Arnaud Prouzeau;Bernhard Jenny;Tim Dwyer",
                "AuthorNames": "Benjamin Lee;Xiaoyun Hu;Maxime Cordeil;Arnaud Prouzeau;Bernhard Jenny;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2019.2934803;10.1109/tvcg.2016.2599107;10.1109/tvcg.2008.153;10.1109/vast.2007.4389011;10.1109/tvcg.2019.2934395",
                "AuthorKeywords": "Immersive analytics,collaboration,virtual reality,qualitative study,multivariate data",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 2379,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 362,
                "i": [
                    362
                ]
            }
        },
        {
            "name": "Cláudio T. Silva",
            "value": 1024,
            "numPapers": 250,
            "cluster": "3",
            "visible": 1,
            "index": 91,
            "x": 5.350523786656618,
            "y": -95.50587361627777,
            "vy": 0,
            "vx": 0,
            "r": 2.1790443293033968,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ARGUS: Visualization of AI-Assisted Task Guidance in AR",
                "DOI": "10.1109/tvcg.2023.3327396",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327396",
                "FirstPage": 1313,
                "LastPage": 1323,
                "PaperType": "J",
                "Abstract": "The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant.",
                "AuthorNamesDeduped": "Sonia Castelo;João Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Irán R. Román;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He 0001;Qi Sun 0003;Huy T. Vo;Juan Pablo Bello;Michael Krone;Cláudio T. Silva",
                "AuthorNames": "Sonia Castelo;Joao Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Iran Roman;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He;Qi Sun;Huy Vo;Juan Bello;Michael Krone;Claudio Silva",
                "AuthorAffiliation": "New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York",
                "InternalReferences": "10.1109/tvcg.2017.2746018;10.1109/tvcg.2018.2865152;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Data Models,Image and Video Data,Temporal Data,Application Motivated Visualization,AR/VR/Immersive",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 467,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 5,
                "i": [
                    5
                ]
            }
        },
        {
            "name": "David S. Ebert",
            "value": 941,
            "numPapers": 248,
            "cluster": "6",
            "visible": 1,
            "index": 92,
            "x": 60.89805413593339,
            "y": 74.44076169987062,
            "vy": 0,
            "vx": 0,
            "r": 2.083477259643063,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Panel 3: The Future Visualization Platform",
                "DOI": "10.1109/visual.2004.78",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.78",
                "FirstPage": 569,
                "LastPage": 571,
                "PaperType": "M",
                "Abstract": "Advances in graphics hardware and rendering methods are shaping the future of visualization. For example, programmable graphics processors are redefining the traditional visualization cycle. In some cases it is now possible to run the computational simulation and associated visualization side-by-side on the same chip. Moreover, global illumination and non-photorealistic effects promise to deliver imagery which enables greater insight into high resolution, multivariate, and higher-dimensional data. The panelists will offer distinct viewpoints on the direction of future graphics hardware and its potential impact on visualization, and on the nature of advanced visualizationrelated tools and techniques. Presentation of these viewpoints will be followed by audience participation in the form of a question and answer period moderated by the panel organizer.",
                "AuthorNamesDeduped": "Greg Johnson;David S. Ebert;Chuck Hansen;David Blair Kirk;Bill Mark;Hanspeter Pfister",
                "AuthorNames": "G. Johnson;D. Ebert;C. Hansen;D. Kirk;B. Mark;H. Pfister",
                "AuthorAffiliation": "University of Texas, Austin, USA;Purdue University, USA;University of Utah, USA;NVIDIA Corporation, USA;University of Texas, Austin, USA;Mitsubishi Electric Research Laboratories, Inc., USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 67,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2597,
                "i": [
                    2597
                ]
            }
        },
        {
            "name": "Yilin Ye",
            "value": 8,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 93,
            "x": -95.70149029940134,
            "y": -13.828403901882153,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model",
                "DOI": "10.1109/tvcg.2023.3326913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326913",
                "FirstPage": 284,
                "LastPage": 294,
                "PaperType": "J",
                "Abstract": "Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromise data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into chart based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both semantic context conveyed in textual inputs and data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design.",
                "AuthorNamesDeduped": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng 0004",
                "AuthorNames": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng",
                "AuthorAffiliation": "Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China",
                "InternalReferences": "10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598620;10.1109/tvcg.2012.221;10.1109/tvcg.2020.3030448;10.1109/tvcg.2022.3209486;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2022.3209447",
                "AuthorKeywords": "pictorial visualization,generative model,authoring tool",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 6,
                "i": [
                    6
                ]
            }
        },
        {
            "name": "Wei Zeng 0004",
            "value": 168,
            "numPapers": 121,
            "cluster": "3",
            "visible": 1,
            "index": 94,
            "x": 80.33443352607085,
            "y": -54.73918879601071,
            "vy": 0,
            "vx": 0,
            "r": 1.1934369602763386,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model",
                "DOI": "10.1109/tvcg.2023.3326913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326913",
                "FirstPage": 284,
                "LastPage": 294,
                "PaperType": "J",
                "Abstract": "Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromise data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into chart based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both semantic context conveyed in textual inputs and data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design.",
                "AuthorNamesDeduped": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng 0004",
                "AuthorNames": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng",
                "AuthorAffiliation": "Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China",
                "InternalReferences": "10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598620;10.1109/tvcg.2012.221;10.1109/tvcg.2020.3030448;10.1109/tvcg.2022.3209486;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2022.3209447",
                "AuthorKeywords": "pictorial visualization,generative model,authoring tool",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 6,
                "i": [
                    6
                ]
            }
        },
        {
            "name": "Shishi Xiao",
            "value": 8,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 95,
            "x": -22.377789897465878,
            "y": 95.12746458991155,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model",
                "DOI": "10.1109/tvcg.2023.3326913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326913",
                "FirstPage": 284,
                "LastPage": 294,
                "PaperType": "J",
                "Abstract": "Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromise data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into chart based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both semantic context conveyed in textual inputs and data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design.",
                "AuthorNamesDeduped": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng 0004",
                "AuthorNames": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng",
                "AuthorAffiliation": "Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China",
                "InternalReferences": "10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598620;10.1109/tvcg.2012.221;10.1109/tvcg.2020.3030448;10.1109/tvcg.2022.3209486;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2022.3209447",
                "AuthorKeywords": "pictorial visualization,generative model,authoring tool",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 6,
                "i": [
                    6
                ]
            }
        },
        {
            "name": "Yun Wang 0012",
            "value": 411,
            "numPapers": 130,
            "cluster": "5",
            "visible": 1,
            "index": 96,
            "x": -48.00637868990163,
            "y": -85.70523674246375,
            "vy": 0,
            "vx": 0,
            "r": 1.4732297063903281,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Towards Automated Infographic Design: Deep Learning-based Auto-Extraction of Extensible Timeline",
                "DOI": "10.1109/tvcg.2019.2934810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934810",
                "FirstPage": 917,
                "LastPage": 926,
                "PaperType": "J",
                "Abstract": "Designers need to consider not only perceptual effectiveness but also visual styles when creating an infographic. This process can be difficult and time consuming for professional designers, not to mention non-expert users, leading to the demand for automated infographics design. As a first step, we focus on timeline infographics, which have been widely used for centuries. We contribute an end-to-end approach that automatically extracts an extensible timeline template from a bitmap image. Our approach adopts a deconstruction and reconstruction paradigm. At the deconstruction stage, we propose a multi-task deep neural network that simultaneously parses two kinds of information from a bitmap timeline: 1) the global information, i.e., the representation, scale, layout, and orientation of the timeline, and 2) the local information, i.e., the location, category, and pixels of each visual element on the timeline. At the reconstruction stage, we propose a pipeline with three techniques, i.e., Non-Maximum Merging, Redundancy Recover, and DL GrabCut, to extract an extensible template from the infographic, by utilizing the deconstruction results. To evaluate the effectiveness of our approach, we synthesize a timeline dataset (4296 images) and collect a real-world timeline dataset (393 images) from the Internet. We first report quantitative evaluation results of our approach over the two datasets. Then, we present examples of automatically extracted templates and timelines automatically generated based on these templates to qualitatively demonstrate the performance. The results confirm that our approach can effectively extract extensible templates from real-world timeline infographics.",
                "AuthorNamesDeduped": "Zhutian Chen;Yun Wang 0012;Qianwen Wang;Yong Wang 0021;Huamin Qu",
                "AuthorNames": "Zhutian Chen;Yun Wang;Qianwen Wang;Yong Wang;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Microsoft Research;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2015.2467732;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744320;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Automated Infographic Design,Deep Learning-based Approach,Timeline Infographics,Multi-task Model",
                "AminerCitationCount": 58,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1657,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 530,
                "i": [
                    530
                ]
            }
        },
        {
            "name": "Suizi Huang",
            "value": 8,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 97,
            "x": 93.77359482802532,
            "y": 30.92754295170151,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model",
                "DOI": "10.1109/tvcg.2023.3326913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326913",
                "FirstPage": 284,
                "LastPage": 294,
                "PaperType": "J",
                "Abstract": "Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromise data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into chart based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both semantic context conveyed in textual inputs and data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design.",
                "AuthorNamesDeduped": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng 0004",
                "AuthorNames": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng",
                "AuthorAffiliation": "Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China",
                "InternalReferences": "10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598620;10.1109/tvcg.2012.221;10.1109/tvcg.2020.3030448;10.1109/tvcg.2022.3209486;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2022.3209447",
                "AuthorKeywords": "pictorial visualization,generative model,authoring tool",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 6,
                "i": [
                    6
                ]
            }
        },
        {
            "name": "Yue Lin",
            "value": 8,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 98,
            "x": -90.49753574616686,
            "y": 40.74550311226072,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model",
                "DOI": "10.1109/tvcg.2023.3326913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326913",
                "FirstPage": 284,
                "LastPage": 294,
                "PaperType": "J",
                "Abstract": "Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromise data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into chart based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both semantic context conveyed in textual inputs and data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design.",
                "AuthorNamesDeduped": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng 0004",
                "AuthorNames": "Shishi Xiao;Suizi Huang;Yue Lin;Yilin Ye;Wei Zeng",
                "AuthorAffiliation": "Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China;Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China",
                "InternalReferences": "10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598620;10.1109/tvcg.2012.221;10.1109/tvcg.2020.3030448;10.1109/tvcg.2022.3209486;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2022.3209447",
                "AuthorKeywords": "pictorial visualization,generative model,authoring tool",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 6,
                "i": [
                    6
                ]
            }
        },
        {
            "name": "Zhutian Chen",
            "value": 379,
            "numPapers": 136,
            "cluster": "3",
            "visible": 1,
            "index": 99,
            "x": 39.405391604334184,
            "y": -91.63631983285377,
            "vy": 0,
            "vx": 0,
            "r": 1.436384571099597,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Exploring Interactions with Printed Data Visualizations in Augmented Reality",
                "DOI": "10.1109/tvcg.2022.3209386",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209386",
                "FirstPage": 418,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.",
                "AuthorNamesDeduped": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorNames": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, USA;Harvard University, USA;Carnegie Mellon University, USA;Hong Kong University of Science and Technology, Hong Kong, USA;Hong Kong University of Science and Technology, Hong Kong, USA;University of Edinburgh, United Kingdom;Hong Kong University of Science and Technology, Hong Kong, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467201;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934283;10.1109/tvcg.2020.3030334;10.1109/tvcg.2013.121;10.1109/tvcg.2013.134;10.1109/tvcg.2017.2744319;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2020.3028948;10.1109/tvcg.2010.177;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.237;10.1109/tvcg.2020.3030392;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2599211",
                "AuthorKeywords": "Interaction design,augmented reality,paper interaction,tangible user interface,printed data visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 1055,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 147,
                "i": [
                    147
                ]
            }
        },
        {
            "name": "Haidong Zhang",
            "value": 306,
            "numPapers": 122,
            "cluster": "5",
            "visible": 1,
            "index": 100,
            "x": 33.007763527292475,
            "y": 94.65985182180637,
            "vy": 0,
            "vx": 0,
            "r": 1.3523316062176165,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Dongmei Zhang 0001",
            "value": 291,
            "numPapers": 85,
            "cluster": "5",
            "visible": 1,
            "index": 101,
            "x": -88.71882907398403,
            "y": -47.738552216643605,
            "vy": 0,
            "vx": 0,
            "r": 1.3350604490500864,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Weiwei Cui",
            "value": 1088,
            "numPapers": 194,
            "cluster": "1",
            "visible": 1,
            "index": 102,
            "x": 98.14536344682021,
            "y": -24.84929845874063,
            "vy": 0,
            "vx": 0,
            "r": 2.252734599884859,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Yingcai Wu",
            "value": 2054,
            "numPapers": 737,
            "cluster": "3",
            "visible": 1,
            "index": 103,
            "x": -55.85435935400874,
            "y": 85.03111513530358,
            "vy": 0,
            "vx": 0,
            "r": 3.3649971214738055,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "David Saffo",
            "value": 2,
            "numPapers": 22,
            "cluster": "2",
            "visible": 1,
            "index": 104,
            "x": -16.330752093088236,
            "y": -100.91237057999427,
            "vy": 0,
            "vx": 0,
            "r": 1.0023028209556706,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Sara Di Bartolomeo",
            "value": 42,
            "numPapers": 31,
            "cluster": "2",
            "visible": 1,
            "index": 105,
            "x": 80.58996761442954,
            "y": 63.68090074665402,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Tarik Crnovrsanin",
            "value": 124,
            "numPapers": 36,
            "cluster": "3",
            "visible": 1,
            "index": 106,
            "x": -102.92471869368397,
            "y": 7.516799972463076,
            "vy": 0,
            "vx": 0,
            "r": 1.1427748992515832,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Laura South",
            "value": 0,
            "numPapers": 25,
            "cluster": "2",
            "visible": 1,
            "index": 107,
            "x": 71.14765141346048,
            "y": -75.41890809570711,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Justin Raynor",
            "value": 0,
            "numPapers": 16,
            "cluster": "2",
            "visible": 1,
            "index": 108,
            "x": -1.5243644602292337,
            "y": 104.15217862816117,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Caglar Yildirim",
            "value": 0,
            "numPapers": 13,
            "cluster": "2",
            "visible": 1,
            "index": 109,
            "x": -69.54806633471385,
            "y": -78.18610150853053,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Cody Dunne",
            "value": 126,
            "numPapers": 55,
            "cluster": "2",
            "visible": 1,
            "index": 110,
            "x": 104.57077869930853,
            "y": 10.721578345572096,
            "vy": 0,
            "vx": 0,
            "r": 1.145077720207254,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Unraveling the Design Space of Immersive Analytics: A Systematic Review",
                "DOI": "10.1109/tvcg.2023.3327368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327368",
                "FirstPage": 495,
                "LastPage": 506,
                "PaperType": "J",
                "Abstract": "Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013-2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions:  Academic Theory and Contribution,  Immersive Technology,  Data,  Spatial Presentation, and  Visual Presentation. Academic Theory and Contribution assess the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in IA, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.",
                "AuthorNamesDeduped": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorNames": "David Saffo;Sara Di Bartolomeo;Tarik Crnovrsanin;Laura South;Justin Raynor;Caglar Yildirim;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/tvcg.2021.3114835;10.1109/tvcg.2016.2599107;10.1109/tvcg.2019.2934415;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2020.3030450;10.1109/tvcg.2018.2865237;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209475;10.1109/tvcg.2021.3114844;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Immersive Analytics,Systematic Review,Survey,Augmented Reality,Virtual Reality,Design Space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 7,
                "i": [
                    7
                ]
            }
        },
        {
            "name": "Kim Marriott",
            "value": 562,
            "numPapers": 86,
            "cluster": "2",
            "visible": 1,
            "index": 111,
            "x": -84.73037247106939,
            "y": 63.01399829334625,
            "vy": 0,
            "vx": 0,
            "r": 1.6470926885434658,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Immersive Collaborative Analysis of Network Connectivity: CAVE-style or Head-Mounted Display?",
                "DOI": "10.1109/tvcg.2016.2599107",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599107",
                "FirstPage": 441,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "High-quality immersive display technologies are becoming mainstream with the release of head-mounted displays (HMDs) such as the Oculus Rift. These devices potentially represent an affordable alternative to the more traditional, centralised CAVE-style immersive environments. One driver for the development of CAVE-style immersive environments has been collaborative sense-making. Despite this, there has been little research on the effectiveness of collaborative visualisation in CAVE-style facilities, especially with respect to abstract data visualisation tasks. Indeed, very few studies have focused on the use of these displays to explore and analyse abstract data such as networks and there have been no formal user studies investigating collaborative visualisation of abstract data in immersive environments. In this paper we present the results of the first such study. It explores the relative merits of HMD and CAVE-style immersive environments for collaborative analysis of network connectivity, a common and important task involving abstract data. We find significant differences between the two conditions in task completion time and the physical movements of the participants within the space: participants using the HMD were faster while the CAVE2 condition introduced an asymmetry in movement between collaborators. Otherwise, affordances for collaborative data analysis offered by the low-cost HMD condition were not found to be different for accuracy and communication with the CAVE2. These results are notable, given that the latest HMDs will soon be accessible (in terms of cost and potentially ubiquity) to a massive audience.",
                "AuthorNamesDeduped": "Maxime Cordeil;Tim Dwyer;Karsten Klein 0001;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorNames": "Maxime Cordeil;Tim Dwyer;Karsten Klein;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Stanford University, USA;Monash University;University of South Australia",
                "InternalReferences": "0.1109/visual.2001.964545;10.1109/tvcg.2014.2346573;10.1109/vast.2007.4389011;10.1109/tvcg.2006.156;10.1109/tvcg.2011.234;10.1109/tvcg.2016.2598446",
                "AuthorKeywords": "3D Network;Oculus Rift;CAVE;Immersive Analytics;Collaboration",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 132,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 3680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 890,
                "i": [
                    890
                ]
            }
        },
        {
            "name": "Nathalie Henry Riche",
            "value": 641,
            "numPapers": 119,
            "cluster": "5",
            "visible": 1,
            "index": 112,
            "x": 20.001288480320913,
            "y": -104.16308587559693,
            "vy": 0,
            "vx": 0,
            "r": 1.7380541162924583,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Authoring Data-Driven Videos with DataClips",
                "DOI": "10.1109/tvcg.2016.2598647",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598647",
                "FirstPage": 501,
                "LastPage": 510,
                "PaperType": "J",
                "Abstract": "Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven “clips” together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.",
                "AuthorNamesDeduped": "Fereshteh Amini;Nathalie Henry Riche;Bongshin Lee;Andrés Monroy-Hernández;Pourang Irani",
                "AuthorNames": "Fereshteh Amini;Nathalie Henry Riche;Bongshin Lee;Andres Monroy-Hernandez;Pourang Irani",
                "AuthorAffiliation": "University of Manitoba, Canada;Microsoft;Microsoft;Microsoft;University of Manitoba, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70539;10.1109/tvcg.2008.137;10.1109/vast.2007.4388992;10.1109/tvcg.2013.234;10.1109/tvcg.2013.119;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/vast.2012.6400487;10.1109/tvcg.2011.185",
                "AuthorKeywords": "data video;narrative visualization;data storytelling;authoring tools;visualization systems",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 77,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 2269,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 894,
                "i": [
                    894
                ]
            }
        },
        {
            "name": "Zhicheng Liu 0001",
            "value": 857,
            "numPapers": 117,
            "cluster": "5",
            "visible": 1,
            "index": 113,
            "x": 55.85944716268368,
            "y": 90.71781612604742,
            "vy": 0,
            "vx": 0,
            "r": 1.9867587795048935,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Critical Reflections on Visualization Authoring Systems",
                "DOI": "10.1109/tvcg.2019.2934281",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934281",
                "FirstPage": 461,
                "LastPage": 471,
                "PaperType": "J",
                "Abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed —Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John T. Stasko;John Thompson 0002;Matthew Brehmer;Zhicheng Liu 0001",
                "AuthorNames": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John Stasko;John Thompson;Matthew Brehmer;Zhicheng Liu",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Microsoft Research;University of California, Santa Barbara;University of Washington;Georgia Institute of Technology;Georgia Institute of Technology;Microsoft Research;Adobe Research",
                "InternalReferences": "0.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Critical reflection,visualization authoring,expressivity,learnability,reusability",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 529,
                "i": [
                    529
                ]
            }
        },
        {
            "name": "Leixian Shen",
            "value": 21,
            "numPapers": 27,
            "cluster": "5",
            "visible": 1,
            "index": 114,
            "x": -102.91843325381073,
            "y": -29.288156250964384,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Towards Natural Language-Based Visualization Authoring",
                "DOI": "10.1109/tvcg.2022.3209357",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209357",
                "FirstPage": 1222,
                "LastPage": 1232,
                "PaperType": "J",
                "Abstract": "A key challenge to visualization authoring is the process of getting familiar with the complex user interfaces of authoring tools. Natural Language Interface (NLI) presents promising benefits due to its learnability and usability. However, supporting NLIs for authoring tools requires expertise in natural language processing, while existing NLIs are mostly designed for visual analytic workflow. In this paper, we propose an authoring-oriented NLI pipeline by introducing a structured representation of users' visualization editing intents, called editing actions, based on a formative study and an extensive survey on visualization construction tools. The editing actions are executable, and thus decouple natural language interpretation and visualization applications as an intermediate layer. We implement a deep learning-based NL interpreter to translate NL utterances into editing actions. The interpreter is reusable and extensible across authoring tools. The authoring tools only need to map the editing actions into tool-specific operations. To illustrate the usages of the NL interpreter, we implement an Excel chart editor and a proof-of-concept authoring tool, VisTalk. We conduct a user study with VisTalk to understand the usage patterns of NL-based authoring systems. Finally, we discuss observations on how users author charts with natural language, as well as implications for future research.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Tsinghua University, China;Carnegie Mellon University, USA;Oxford University, USA;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2744684;10.1109/tvcg.2016.2598620;10.1109/tvcg.2021.3114848;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030378;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/infvis.2005.1532146;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Visualization authoring,Natural language interface,Natural language understanding",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 75,
                "DownloadsXplore": 1174,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 140,
                "i": [
                    140
                ]
            }
        },
        {
            "name": "Yizhi Zhang",
            "value": 0,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 115,
            "x": 96.08959149562926,
            "y": -48.13304900173157,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Data Player: Automatic Generation of Data Videos with Narration-Animation Interplay",
                "DOI": "10.1109/tvcg.2023.3327197",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327197",
                "FirstPage": 109,
                "LastPage": 119,
                "PaperType": "J",
                "Abstract": "Data visualizations and narratives are often integrated to convey data stories effectively. Among various data storytelling formats, data videos have been garnering increasing attention. These videos provide an intuitive interpretation of data charts while vividly articulating the underlying data insights. However, the production of data videos demands a diverse set of professional skills and considerable manual labor, including understanding narratives, linking visual elements with narration segments, designing and crafting animations, recording audio narrations, and synchronizing audio with visual animations. To simplify this process, our paper introduces a novel method, referred to as Data Player, capable of automatically generating dynamic data videos with narration-animation interplay. This approach lowers the technical barriers associated with creating data videos rich in narration. To enable narration-animation interplay, Data Player constructs references between visualizations and text input. Specifically, it first extracts data into tables from the visualizations. Subsequently, it utilizes large language models to form semantic connections between text and visuals. Finally, Data Player encodes animation design knowledge as computational low-level constraints, allowing for the recommendation of suitable animation presets that align with the audio narration produced by text-to-speech technologies. We assessed Data Player's efficacy through an example gallery, a user study, and expert interviews. The evaluation results demonstrated that Data Player can generate high-quality data videos that are comparable to human-composed ones.",
                "AuthorNamesDeduped": "Leixian Shen;Yizhi Zhang;Haidong Zhang;Yun Wang 0012",
                "AuthorNames": "Leixian Shen;Yizhi Zhang;Haidong Zhang;Yun Wang",
                "AuthorAffiliation": "The Hong Kong University of Science and Technology, China;Cornell University, USA;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China",
                "InternalReferences": "10.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2007.70539;10.1109/tvcg.2020.3030360;10.1109/tvcg.2021.3114775;10.1109/tvcg.2021.3114802;10.1109/tvcg.2018.2865240;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2022.3209357;10.1109/tvcg.2022.3209447;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Visualization,Narration-animation interplay,Data video,Human-AI collaboration",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 435,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 8,
                "i": [
                    8
                ]
            }
        },
        {
            "name": "Nam Wook Kim",
            "value": 341,
            "numPapers": 60,
            "cluster": "5",
            "visible": 1,
            "index": 116,
            "x": -38.505683452653834,
            "y": 100.83309150196685,
            "vy": 0,
            "vx": 0,
            "r": 1.3926309729418538,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Beyond Memorability: Visualization Recognition and Recall",
                "DOI": "10.1109/tvcg.2015.2467732",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467732",
                "FirstPage": 519,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.",
                "AuthorNamesDeduped": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorNames": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorAffiliation": "University of British Columbia, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;University of Michigan;School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)",
                "InternalReferences": "0.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2011.193;10.1109/tvcg.2012.233;10.1109/tvcg.2011.175;10.1109/tvcg.2013.234;10.1109/tvcg.2012.215;10.1109/vast.2010.5653598;10.1109/tvcg.2012.245;10.1109/tvcg.2012.221",
                "AuthorKeywords": "Information visualization, memorability, recognition, recall, eye-tracking study",
                "AminerCitationCount": 295,
                "CitationCountCrossRef": 188,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 5067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1004,
                "i": [
                    1004
                ]
            }
        },
        {
            "name": "Chenglong Wang",
            "value": 217,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 117,
            "x": -39.88898509970408,
            "y": -100.79121423872017,
            "vy": 0,
            "vx": 0,
            "r": 1.2498560736902706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco",
                "DOI": "10.1109/tvcg.2018.2865240",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865240",
                "FirstPage": 438,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.",
                "AuthorNamesDeduped": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith 0001;Bill Howe;Jeffrey Heer",
                "AuthorNames": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith;Bill Howe;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Washington;University of Washington;University of California Santa Cruz;University of Washington;University of Washington",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346984;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2744320;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Automated Visualization Design,Perceptual Effectiveness,Constraints,Knowledge Bases,Answer Set Programming",
                "AminerCitationCount": 225,
                "CitationCountCrossRef": 151,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 2809,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 648,
                "i": [
                    648
                ]
            }
        },
        {
            "name": "Greg L. Nelson",
            "value": 217,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 118,
            "x": 97.91038239869727,
            "y": 47.576853810869764,
            "vy": 0,
            "vx": 0,
            "r": 1.2498560736902706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco",
                "DOI": "10.1109/tvcg.2018.2865240",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865240",
                "FirstPage": 438,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.",
                "AuthorNamesDeduped": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith 0001;Bill Howe;Jeffrey Heer",
                "AuthorNames": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith;Bill Howe;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Washington;University of Washington;University of California Santa Cruz;University of Washington;University of Washington",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346984;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2744320;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Automated Visualization Design,Perceptual Effectiveness,Constraints,Knowledge Bases,Answer Set Programming",
                "AminerCitationCount": 225,
                "CitationCountCrossRef": 151,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 2809,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 648,
                "i": [
                    648
                ]
            }
        },
        {
            "name": "Halden Lin",
            "value": 217,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 119,
            "x": -104.77307383705094,
            "y": 31.18658363360554,
            "vy": 0,
            "vx": 0,
            "r": 1.2498560736902706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco",
                "DOI": "10.1109/tvcg.2018.2865240",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865240",
                "FirstPage": 438,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.",
                "AuthorNamesDeduped": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith 0001;Bill Howe;Jeffrey Heer",
                "AuthorNames": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith;Bill Howe;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Washington;University of Washington;University of California Santa Cruz;University of Washington;University of Washington",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346984;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2744320;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Automated Visualization Design,Perceptual Effectiveness,Constraints,Knowledge Bases,Answer Set Programming",
                "AminerCitationCount": 225,
                "CitationCountCrossRef": 151,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 2809,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 648,
                "i": [
                    648
                ]
            }
        },
        {
            "name": "Adam M. Smith 0001",
            "value": 217,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 120,
            "x": 56.424784858608,
            "y": -94.16073307732795,
            "vy": 0,
            "vx": 0,
            "r": 1.2498560736902706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco",
                "DOI": "10.1109/tvcg.2018.2865240",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865240",
                "FirstPage": 438,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.",
                "AuthorNamesDeduped": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith 0001;Bill Howe;Jeffrey Heer",
                "AuthorNames": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith;Bill Howe;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Washington;University of Washington;University of California Santa Cruz;University of Washington;University of Washington",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346984;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2744320;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Automated Visualization Design,Perceptual Effectiveness,Constraints,Knowledge Bases,Answer Set Programming",
                "AminerCitationCount": 225,
                "CitationCountCrossRef": 151,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 2809,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 648,
                "i": [
                    648
                ]
            }
        },
        {
            "name": "Bill Howe",
            "value": 486,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 121,
            "x": 22.08987363776554,
            "y": 107.99091388939881,
            "vy": 0,
            "vx": 0,
            "r": 1.5595854922279793,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco",
                "DOI": "10.1109/tvcg.2018.2865240",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865240",
                "FirstPage": 438,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.",
                "AuthorNamesDeduped": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith 0001;Bill Howe;Jeffrey Heer",
                "AuthorNames": "Dominik Moritz;Chenglong Wang;Greg L. Nelson;Halden Lin;Adam M. Smith;Bill Howe;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;University of Washington;University of Washington;University of Washington;University of California Santa Cruz;University of Washington;University of Washington",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346984;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2744320;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Automated Visualization Design,Perceptual Effectiveness,Constraints,Knowledge Bases,Answer Set Programming",
                "AminerCitationCount": 225,
                "CitationCountCrossRef": 151,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 2809,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 648,
                "i": [
                    648
                ]
            }
        },
        {
            "name": "Arjun Srinivasan",
            "value": 290,
            "numPapers": 65,
            "cluster": "5",
            "visible": 1,
            "index": 122,
            "x": -89.6016700758777,
            "y": -64.97338470184204,
            "vy": 0,
            "vx": 0,
            "r": 1.333909038572251,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Augmenting Visualizations with Interactive Data Facts to Facilitate Interpretation and Communication",
                "DOI": "10.1109/tvcg.2018.2865145",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865145",
                "FirstPage": 672,
                "LastPage": 681,
                "PaperType": "J",
                "Abstract": "Recently, an increasing number of visualization systems have begun to incorporate natural language generation (NLG) capabilities into their interfaces. NLG-based visualization systems typically leverage a suite of statistical functions to automatically extract key facts about the underlying data and surface them as natural language sentences alongside visualizations. With current systems, users are typically required to read the system-generated sentences and mentally map them back to the accompanying visualization. However, depending on the features of the visualization (e.g., visualization type, data density) and the complexity of the data fact, mentally mapping facts to visualizations can be a challenging task. Furthermore, more than one visualization could be used to illustrate a single data fact. Unfortunately, current tools provide little or no support for users to explore such alternatives. In this paper, we explore how system-generated data facts can be treated as interactive widgets to help users interpret visualizations and communicate their findings. We present Voder, a system that lets users interact with automatically-generated data facts to explore both alternative visualizations to convey a data fact as well as a set of embellishments to highlight a fact within a visualization. Leveraging data facts as interactive widgets, Voder also facilitates data fact-based visualization search. To assess Voder's design and features, we conducted a preliminary user study with 12 participants having varying levels of experience with visualization tools. Participant feedback suggested that interactive data facts aided them in interpreting visualizations. Participants also stated that the suggestions surfaced through the facts helped them explore alternative visualizations and embellishments to communicate individual data facts.",
                "AuthorNamesDeduped": "Arjun Srinivasan;Steven Mark Drucker;Alex Endert;John T. Stasko",
                "AuthorNames": "Arjun Srinivasan;Steven M. Drucker;Alex Endert;John Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, US;Microsoft Research, Redmond, WA, US;Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2010.164;10.1109/tvcg.2013.119;10.1109/tvcg.2012.229;10.1109/tvcg.2007.70594;10.1109/visual.1992.235203;10.1109/tvcg.2017.2744843;10.1109/tvcg.2017.2745219;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Natural Language Generation,Mixed-initiative Interaction,Visualization Recommendation,Data-driven Communication",
                "AminerCitationCount": 120,
                "CitationCountCrossRef": 99,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 2524,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 650,
                "i": [
                    650
                ]
            }
        },
        {
            "name": "He Huang",
            "value": 119,
            "numPapers": 45,
            "cluster": "5",
            "visible": 1,
            "index": 123,
            "x": 110.406271159153,
            "y": -12.667094723399856,
            "vy": 0,
            "vx": 0,
            "r": 1.1370178468624064,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Lingyun Yu 0001",
            "value": 370,
            "numPapers": 161,
            "cluster": "3",
            "visible": 1,
            "index": 124,
            "x": -73.1480081310037,
            "y": 84.25775279739317,
            "vy": 0,
            "vx": 0,
            "r": 1.4260218767990789,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "Daniel A. Keim",
            "value": 1796,
            "numPapers": 308,
            "cluster": "3",
            "visible": 1,
            "index": 125,
            "x": -2.990166381297133,
            "y": -111.98686934195527,
            "vy": 0,
            "vx": 0,
            "r": 3.0679332181922856,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Impact of Immersion on Cluster Identification Tasks",
                "DOI": "10.1109/tvcg.2019.2934395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934395",
                "FirstPage": 525,
                "LastPage": 535,
                "PaperType": "J",
                "Abstract": "Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual realities (VRs). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia treatment effectiveness. Our work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations, (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse compared to the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.",
                "AuthorNamesDeduped": "Matthias Kraus 0002;Niklas Weiler;Daniela Oelke;Johannes Kehrer;Daniel A. Keim;Johannes Fuchs 0001",
                "AuthorNames": "M. Kraus;N. Weiler;D. Oelke;J. Kehrer;D. A. Keim;J. Fuchs",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Siemens Corporate Technology, Munich, Germany;Siemens Corporate Technology, Munich, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/tvcg.2018.2864477;10.1109/infvis.1998.729555;10.1109/tvcg.2008.153;10.1109/vast.2008.4677350;10.1109/tvcg.2013.153;10.1109/visual.2002.1183816;10.1109/infvis.1999.801851;10.1109/vast.2007.4389000;10.1109/tvcg.2015.2467202;10.1109/tvcg.2017.2745941",
                "AuthorKeywords": "Virtual reality,evaluation,visual analytics,clustering",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1309,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 523,
                "i": [
                    523
                ]
            }
        },
        {
            "name": "Tobias Isenberg 0001",
            "value": 420,
            "numPapers": 126,
            "cluster": "6",
            "visible": 1,
            "index": 126,
            "x": 78.16044649402322,
            "y": 80.87610650776244,
            "vy": 0,
            "vx": 0,
            "r": 1.4835924006908463,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "MeTACAST: Target- and Context-Aware Spatial Selection in VR",
                "DOI": "10.1109/tvcg.2023.3326517",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326517",
                "FirstPage": 480,
                "LastPage": 494,
                "PaperType": "J",
                "Abstract": "We propose three novel spatial data selection techniques for particle data in VR visualization environments. They are designed to be target- and context-aware and be suitable for a wide range of data features and complex scenarios. Each technique is designed to be adjusted to particular selection intents: the selection of consecutive dense regions, the selection of filament-like structures, and the selection of clusters—with all of them facilitating post-selection threshold adjustment. These techniques allow users to precisely select those regions of space for further exploration—with simple and approximate 3D pointing, brushing, or drawing input—using flexible point- or path-based input and without being limited by 3D occlusions, non-homogeneous feature density, or complex data shapes. These new techniques are evaluated in a controlled experiment and compared with the Baseline method, a region-based 3D painting selection. Our results indicate that our techniques are effective in handling a wide range of scenarios and allow users to select data based on their comprehension of crucial features. Furthermore, we analyze the attributes, requirements, and strategies of our spatial selection methods and compare them with existing state-of-the-art selection methods to handle diverse data features and situations. Based on this analysis we provide guidelines for choosing the most suitable 3D spatial selection techniques based on the interaction environment, the given data characteristics, or the need for interactive post-selection threshold adjustment.",
                "AuthorNamesDeduped": "Lixiang Zhao;Tobias Isenberg 0001;Fuqi Xie;Hai-Ning Liang;Lingyun Yu 0001",
                "AuthorNames": "Lixiang Zhao;Tobias Isenberg;Fuqi Xie;Hai-Ning Liang;Lingyun Yu",
                "AuthorAffiliation": "Xi'an Jiaotong-Liverpool University, China;Université Paris-Saclay, CNRS, Inria, LISN, France;Xi'an Jiaotong-Liverpool University, China;Xi'an Jiaotong-Liverpool University, China;Xi'an Jiaotong-Liverpool University, China",
                "InternalReferences": "10.1109/tvcg.2009.112;10.1109/tvcg.2019.2934332;10.1109/tvcg.2018.2865191;10.1109/tvcg.2013.121;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030363;10.1109/tvcg.2012.292;10.1109/infvis.1996.559216;10.1109/tvcg.2012.217;10.1109/tvcg.2015.2467202",
                "AuthorKeywords": "Spatial selection,immersive analytics,virtual reality (VR),target-aware and context-aware interaction for visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 363,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 9,
                "i": [
                    9
                ]
            }
        },
        {
            "name": "Alexander Wiebel",
            "value": 149,
            "numPapers": 47,
            "cluster": "11",
            "visible": 1,
            "index": 127,
            "x": -112.70696387736386,
            "y": -6.8658789347465445,
            "vy": 0,
            "vx": 0,
            "r": 1.1715601611974669,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "WYSIWYP: What You See Is What You Pick",
                "DOI": "10.1109/tvcg.2012.292",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.292",
                "FirstPage": 2236,
                "LastPage": 2244,
                "PaperType": "J",
                "Abstract": "Scientists, engineers and physicians are used to analyze 3D data with slice-based visualizations. Radiologists for example are trained to read slices of medical imaging data. Despite the numerous examples of sophisticated 3D rendering techniques, domain experts, who still prefer slice-based visualization do not consider these to be very useful. Since 3D renderings have the advantage of providing an overview at a glance, while 2D depictions better serve detailed analyses, it is of general interest to better combine these methods. Recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking in medical imaging data that result in repositioning slices. In this paper, we present a new volume picking technique called WYSIWYP (“what you see is what you pick”) that, in contrast to previous work, does not require pre-segmented data or metadata and thus is more generally applicable. The positions picked by our method are solely based on the data itself, the transfer function, and the way the volumetric rendering is perceived by the user. To demonstrate the utility of the proposed method, we apply it to automated positioning of slices in volumetric scalar fields from various application areas. Finally, we present results of a user study in which 3D locations selected by users are compared to those resulting from WYSIWYP. The user study confirms our claim that the resulting positions correlate well with those perceived by the user.",
                "AuthorNamesDeduped": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorNames": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;TU Delft and AMC Amsterdam, Netherlands;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/tvcg.2012.217;10.1109/visual.1998.745337;10.1109/visual.2003.1250384;10.1109/tvcg.2007.70576;10.1109/visual.2005.1532833;10.1109/tvcg.2009.121",
                "AuthorKeywords": "Picking, volume rendering, WYSIWYG",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1452,
                "i": [
                    1452
                ]
            }
        },
        {
            "name": "Frans M. Vos",
            "value": 31,
            "numPapers": 5,
            "cluster": "6",
            "visible": 1,
            "index": 128,
            "x": 88.0878658744132,
            "y": -71.34793539894054,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "WYSIWYP: What You See Is What You Pick",
                "DOI": "10.1109/tvcg.2012.292",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.292",
                "FirstPage": 2236,
                "LastPage": 2244,
                "PaperType": "J",
                "Abstract": "Scientists, engineers and physicians are used to analyze 3D data with slice-based visualizations. Radiologists for example are trained to read slices of medical imaging data. Despite the numerous examples of sophisticated 3D rendering techniques, domain experts, who still prefer slice-based visualization do not consider these to be very useful. Since 3D renderings have the advantage of providing an overview at a glance, while 2D depictions better serve detailed analyses, it is of general interest to better combine these methods. Recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking in medical imaging data that result in repositioning slices. In this paper, we present a new volume picking technique called WYSIWYP (“what you see is what you pick”) that, in contrast to previous work, does not require pre-segmented data or metadata and thus is more generally applicable. The positions picked by our method are solely based on the data itself, the transfer function, and the way the volumetric rendering is perceived by the user. To demonstrate the utility of the proposed method, we apply it to automated positioning of slices in volumetric scalar fields from various application areas. Finally, we present results of a user study in which 3D locations selected by users are compared to those resulting from WYSIWYP. The user study confirms our claim that the resulting positions correlate well with those perceived by the user.",
                "AuthorNamesDeduped": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorNames": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;TU Delft and AMC Amsterdam, Netherlands;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/tvcg.2012.217;10.1109/visual.1998.745337;10.1109/visual.2003.1250384;10.1109/tvcg.2007.70576;10.1109/visual.2005.1532833;10.1109/tvcg.2009.121",
                "AuthorKeywords": "Picking, volume rendering, WYSIWYG",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1452,
                "i": [
                    1452
                ]
            }
        },
        {
            "name": "David Foerster",
            "value": 31,
            "numPapers": 5,
            "cluster": "6",
            "visible": 1,
            "index": 129,
            "x": -16.823494414812195,
            "y": 112.5476345183442,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "WYSIWYP: What You See Is What You Pick",
                "DOI": "10.1109/tvcg.2012.292",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.292",
                "FirstPage": 2236,
                "LastPage": 2244,
                "PaperType": "J",
                "Abstract": "Scientists, engineers and physicians are used to analyze 3D data with slice-based visualizations. Radiologists for example are trained to read slices of medical imaging data. Despite the numerous examples of sophisticated 3D rendering techniques, domain experts, who still prefer slice-based visualization do not consider these to be very useful. Since 3D renderings have the advantage of providing an overview at a glance, while 2D depictions better serve detailed analyses, it is of general interest to better combine these methods. Recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking in medical imaging data that result in repositioning slices. In this paper, we present a new volume picking technique called WYSIWYP (“what you see is what you pick”) that, in contrast to previous work, does not require pre-segmented data or metadata and thus is more generally applicable. The positions picked by our method are solely based on the data itself, the transfer function, and the way the volumetric rendering is perceived by the user. To demonstrate the utility of the proposed method, we apply it to automated positioning of slices in volumetric scalar fields from various application areas. Finally, we present results of a user study in which 3D locations selected by users are compared to those resulting from WYSIWYP. The user study confirms our claim that the resulting positions correlate well with those perceived by the user.",
                "AuthorNamesDeduped": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorNames": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;TU Delft and AMC Amsterdam, Netherlands;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/tvcg.2012.217;10.1109/visual.1998.745337;10.1109/visual.2003.1250384;10.1109/tvcg.2007.70576;10.1109/visual.2005.1532833;10.1109/tvcg.2009.121",
                "AuthorKeywords": "Picking, volume rendering, WYSIWYG",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1452,
                "i": [
                    1452
                ]
            }
        },
        {
            "name": "Hans-Christian Hege",
            "value": 575,
            "numPapers": 82,
            "cluster": "11",
            "visible": 1,
            "index": 130,
            "x": -63.864877138315514,
            "y": -94.71682779795712,
            "vy": 0,
            "vx": 0,
            "r": 1.6620610247553254,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "WYSIWYP: What You See Is What You Pick",
                "DOI": "10.1109/tvcg.2012.292",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.292",
                "FirstPage": 2236,
                "LastPage": 2244,
                "PaperType": "J",
                "Abstract": "Scientists, engineers and physicians are used to analyze 3D data with slice-based visualizations. Radiologists for example are trained to read slices of medical imaging data. Despite the numerous examples of sophisticated 3D rendering techniques, domain experts, who still prefer slice-based visualization do not consider these to be very useful. Since 3D renderings have the advantage of providing an overview at a glance, while 2D depictions better serve detailed analyses, it is of general interest to better combine these methods. Recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking in medical imaging data that result in repositioning slices. In this paper, we present a new volume picking technique called WYSIWYP (“what you see is what you pick”) that, in contrast to previous work, does not require pre-segmented data or metadata and thus is more generally applicable. The positions picked by our method are solely based on the data itself, the transfer function, and the way the volumetric rendering is perceived by the user. To demonstrate the utility of the proposed method, we apply it to automated positioning of slices in volumetric scalar fields from various application areas. Finally, we present results of a user study in which 3D locations selected by users are compared to those resulting from WYSIWYP. The user study confirms our claim that the resulting positions correlate well with those perceived by the user.",
                "AuthorNamesDeduped": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorNames": "Alexander Wiebel;Frans M. Vos;David Foerster;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;TU Delft and AMC Amsterdam, Netherlands;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/tvcg.2012.217;10.1109/visual.1998.745337;10.1109/visual.2003.1250384;10.1109/tvcg.2007.70576;10.1109/visual.2005.1532833;10.1109/tvcg.2009.121",
                "AuthorKeywords": "Picking, volume rendering, WYSIWYG",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1452,
                "i": [
                    1452
                ]
            }
        },
        {
            "name": "Konstantinos Efstathiou 0001",
            "value": 59,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 131,
            "x": 111.49702279186344,
            "y": 26.803244366133846,
            "vy": 0,
            "vx": 0,
            "r": 1.0679332181922856,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input",
                "DOI": "10.1109/tvcg.2012.217",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.217",
                "FirstPage": 2245,
                "LastPage": 2254,
                "PaperType": "J",
                "Abstract": "Data selection is a fundamental task in visualization because it serves as a pre-requisite to many follow-up interactions. Efficient spatial selection in 3D point cloud datasets consisting of thousands or millions of particles can be particularly challenging. We present two new techniques, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner. Specifically, we describe how to spatially select a subset of a 3D particle cloud by simply encircling the target particles on screen using either the mouse or direct-touch input. Based on the drawn lasso, our techniques automatically determine a bounding selection surface around the encircled particles based on their density. This kind of selection technique can be applied to particle datasets in several application domains. TeddySelection and CloudLasso reduce, and in some cases even eliminate, the need for complex multi-step selection processes involving Boolean operations. This was confirmed in a formal, controlled user study in which we compared the more flexible CloudLasso technique to the standard cylinder-based selection technique. This study showed that the former is consistently more efficient than the latter - in several cases the CloudLasso selection time was half that of the corresponding cylinder-based selection.",
                "AuthorNamesDeduped": "Lingyun Yu 0001;Konstantinos Efstathiou 0001;Petra Isenberg;Tobias Isenberg 0001",
                "AuthorNames": "Lingyun Yu;Konstantinos Efstathiou;Petra Isenberg;Tobias Isenberg",
                "AuthorAffiliation": "University of Groningen, Netherlands;University of Groningen, Netherlands;INRIA, France;University of Groningen, Netherlands",
                "InternalReferences": "0.1109/tvcg.2010.157;10.1109/tvcg.2012.292;10.1109/tvcg.2008.153",
                "AuthorKeywords": "3D interaction, spatial selection, direct-touch interaction",
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1209,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1443,
                "i": [
                    1443
                ]
            }
        },
        {
            "name": "Petra Isenberg",
            "value": 798,
            "numPapers": 169,
            "cluster": "5",
            "visible": 1,
            "index": 132,
            "x": -100.70048757208318,
            "y": 55.76210005680131,
            "vy": 0,
            "vx": 0,
            "r": 1.918825561312608,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Perception! Immersion! Empowerment! Superpowers as Inspiration for Visualization",
                "DOI": "10.1109/tvcg.2021.3114844",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114844",
                "FirstPage": 22,
                "LastPage": 32,
                "PaperType": "J",
                "Abstract": "We explore how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems. Researchers and practitioners often tout visualizations' ability to “make the invisible visible” and to “enhance cognitive abilities.” Meanwhile superhero comics and other modern fiction often depict characters with similarly fantastic abilities that allow them to see and interpret the world in ways that transcend traditional human perception. We investigate the intersection of these domains, and show how the language of superpowers can be used to characterize existing visualization systems and suggest opportunities for new and empowering ones. We introduce two frameworks: The first characterizes seven underlying mechanisms that form the basis for a variety of visual superpowers portrayed in fiction. The second identifies seven ways in which visualization tools and interfaces can instill a sense of empowerment in the people who use them. Building on these observations, we illustrate a diverse set of “visualization superpowers” and highlight opportunities for the visualization community to create new systems and interactions that empower new experiences with data Material and illustrations are available under CC-BY 4.0 at osf.io/8yhfz.",
                "AuthorNamesDeduped": "Wesley Willett;Bon Adriel Aseniero;Sheelagh Carpendale;Pierre Dragicevic;Yvonne Jansen;Lora Oehlberg;Petra Isenberg",
                "AuthorNames": "Wesley Willett;Bon Adriel Aseniero;Sheelagh Carpendale;Pierre Dragicevic;Yvonne Jansen;Lora Oehlberg;Petra Isenberg",
                "AuthorAffiliation": "University of Calgary, United States;Autodesk, United States;Simon Fraser Univ., Canada;Universite Paris-Saclay, CNRS. Inria. LISN, France;Sorbonne Universite, CNRS, ISIR, France;University of Calgary, United States;Universite Paris-Saclay, CNRS. Inria. LISN, France",
                "InternalReferences": "0.1109/tvcg.2019.2934283;10.1109/tvcg.2008.137;10.1109/tvcg.2013.134;10.1109/tvcg.2006.146;10.1109/tvcg.2015.2467551;10.1109/tvcg.2018.2865152;10.1109/visual.2005.1532781;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Visualization,superpowers,empowerment,vision,perception,cognition,fiction,situated visualization",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 103,
                "DownloadsXplore": 2251,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 283,
                "i": [
                    283
                ]
            }
        },
        {
            "name": "Cindy Xiong Bearfield",
            "value": 0,
            "numPapers": 38,
            "cluster": "5",
            "visible": 1,
            "index": 133,
            "x": 36.72445147272723,
            "y": -109.55051192955378,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Vistrust: a Multidimensional Framework and Empirical Study of Trust in Data Visualizations",
                "DOI": "10.1109/tvcg.2023.3326579",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326579",
                "FirstPage": 348,
                "LastPage": 358,
                "PaperType": "J",
                "Abstract": "Trust is an essential aspect of data visualization, as it plays a crucial role in the interpretation and decision-making processes of users. While research in social sciences outlines the multi-dimensional factors that can play a role in trust formation, most data visualization trust researchers employ a single-item scale to measure trust. We address this gap by proposing a comprehensive, multidimensional conceptualization and operationalization of trust in visualization. We do this by applying general theories of trust from social sciences, as well as synthesizing and extending earlier work and factors identified by studies in the visualization field. We apply a two-dimensional approach to trust in visualization, to distinguish between cognitive and affective elements, as well as between visualization and data-specific trust antecedents. We use our framework to design and run a large crowd-sourced study to quantify the role of visual complexity in establishing trust in science visualizations. Our study provides empirical evidence for several aspects of our proposed theoretical framework, most notably the impact of cognition, affective responses, and individual differences when establishing trust in visualizations.",
                "AuthorNamesDeduped": "Hamza Elhamdadi;Adam Stefkovics;Johanna Beyer;Eric Mörth;Hanspeter Pfister;Cindy Xiong Bearfield;Carolina Nobre",
                "AuthorNames": "Hamza Elhamdadi;Adam Stefkovics;Johanna Beyer;Eric Moerth;Hanspeter Pfister;Cindy Xiong Bearfield;Carolina Nobre",
                "AuthorAffiliation": "UMass Amherst, USA;HUN-REN Centre for Social Sciences, USA;Harvard University, USA;Harvard Medical School, USA;Harvard University, USA;UMass Amherst, USA;University of Toronto, Canada",
                "InternalReferences": "10.1109/tvcg.2016.2598544;10.1109/tvcg.2020.3028984;10.1109/tvcg.2017.2745240;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Trust,visualization,science,framework",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 307,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 10,
                "i": [
                    10
                ]
            }
        },
        {
            "name": "Yea-Seul Kim",
            "value": 123,
            "numPapers": 41,
            "cluster": "5",
            "visible": 1,
            "index": 134,
            "x": 47.096245580184764,
            "y": 105.98086455700823,
            "vy": 0,
            "vx": 0,
            "r": 1.1416234887737478,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Bayesian-Assisted Inference from Visualized Data",
                "DOI": "10.1109/tvcg.2020.3028984",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028984",
                "FirstPage": 989,
                "LastPage": 999,
                "PaperType": "J",
                "Abstract": "A Bayesian view of data interpretation suggests that a visualization user should update their existing beliefs about a parameter's value in accordance with the amount of information about the parameter value captured by the new observations. Extending recent work applying Bayesian models to understand and evaluate belief updating from visualizations, we show how the predictions of Bayesian inference can be used to guide more rational belief updating. We design a Bayesian inference-assisted uncertainty analogy that numerically relates uncertainty in observed data to the user's subjective uncertainty, and a posterior visualization that prescribes how a user should update their beliefs given their prior beliefs and the observed data. In a pre-registered experiment on 4,800 people, we find that when a newly observed data sample is relatively small (N=158), both techniques reliably improve people's Bayesian updating on average compared to the current best practice of visualizing uncertainty in the observed data. For large data samples (N=5208), where people's updated beliefs tend to deviate more strongly from the prescriptions of a Bayesian model, we find evidence that the effectiveness of the two forms of Bayesian assistance may depend on people's proclivity toward trusting the source of the data. We discuss how our results provide insight into individual processes of belief updating and subjective uncertainty, and how understanding these aspects of interpretation paves the way for more sophisticated interactive visualizations for analysis and communication.",
                "AuthorNamesDeduped": "Yea-Seul Kim;Paula Kayongo;Madeleine Grunde-McLaughlin;Jessica Hullman",
                "AuthorNames": "Yea-Seul Kim;Paula Kayongo;Madeleine Grunde-McLaughlin;Jessica Hullman",
                "AuthorAffiliation": "University of Washington;Northwestern University;University of Pennsylvania;University of Washington",
                "InternalReferences": "0.1109/tvcg.2014.2346298;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2743898;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2864913;10.1109/tvcg.2012.199;10.1109/tvcg.2015.2467758",
                "AuthorKeywords": "Bayesian cognition,Belief updating,Uncertainty visualization,Adaptive visualization",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 586,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 394,
                "i": [
                    394
                ]
            }
        },
        {
            "name": "Bum Chul Kwon",
            "value": 667,
            "numPapers": 114,
            "cluster": "4",
            "visible": 1,
            "index": 135,
            "x": -106.71084731571325,
            "y": -46.50586054641428,
            "vy": 0,
            "vx": 0,
            "r": 1.7679907887161774,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "VLAT: Development of a Visualization Literacy Assessment Test",
                "DOI": "10.1109/tvcg.2016.2598920",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598920",
                "FirstPage": 551,
                "LastPage": 560,
                "PaperType": "J",
                "Abstract": "The Information Visualization community has begun to pay attention to visualization literacy; however, researchers still lack instruments for measuring the visualization literacy of users. In order to address this gap, we systematically developed a visualization literacy assessment test (VLAT), especially for non-expert users in data visualization, by following the established procedure of test development in Psychological and Educational Measurement: (1) Test Blueprint Construction, (2) Test Item Generation, (3) Content Validity Evaluation, (4) Test Tryout and Item Analysis, (5) Test Item Selection, and (6) Reliability Evaluation. The VLAT consists of 12 data visualizations and 53 multiple-choice test items that cover eight data visualization tasks. The test items in the VLAT were evaluated with respect to their essentialness by five domain experts in Information Visualization and Visual Analytics (average content validity ratio = 0.66). The VLAT was also tried out on a sample of 191 test takers and showed high reliability (reliability coefficient omega = 0.76). In addition, we demonstrated the relationship between users' visualization literacy and aptitude for learning an unfamiliar visualization and showed that they had a fairly high positive relationship (correlation coefficient = 0.64). Finally, we discuss evidence for the validity of the VLAT and potential research areas that are related to the instrument.",
                "AuthorNamesDeduped": "Sukwon Lee;Sung-Hee Kim;Bum Chul Kwon",
                "AuthorNames": "Sukwon Lee;Sung-Hee Kim;Bum Chul Kwon",
                "AuthorAffiliation": "School of Industrial Engineering, Purdue University, West Lafayette, IN, USA;Samsung Electronics Co., Ltd., Seoul, South Korea;IBM T.J. Watson Research Center, Yorktown Heights, NY, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346419;10.1109/tvcg.2014.2346481;10.1109/tvcg.2014.2346984;10.1109/visual.1991.175815;10.1109/tvcg.2007.70515;10.1109/tvcg.2015.2467195;10.1109/vast.2011.6102435;10.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467201",
                "AuthorKeywords": "Visualization Literacy;Assessment Test;Instrument;Measurement;Aptitude;Education",
                "AminerCitationCount": 156,
                "CitationCountCrossRef": 107,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3595,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 891,
                "i": [
                    891
                ]
            }
        },
        {
            "name": "Johanna Beyer",
            "value": 413,
            "numPapers": 165,
            "cluster": "6",
            "visible": 1,
            "index": 136,
            "x": 110.50503963840067,
            "y": -37.92935821386251,
            "vy": 0,
            "vx": 0,
            "r": 1.4755325273459987,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models",
                "DOI": "10.1109/tvcg.2022.3209479",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209479",
                "FirstPage": 1146,
                "LastPage": 1156,
                "PaperType": "J",
                "Abstract": "State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Research, China;Brown University, USA;Huggingface, USA;IBM Research, China;Harvard SEAS, USA;Harvard SEAS, USA;Huggingface, USA",
                "InternalReferences": "0.1109/tvcg.2020.3028976;10.1109/tvcg.2021.3114683;10.1109/tvcg.2018.2865230;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Natural language processing,language modeling,zero-shot models",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 3637,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 133,
                "i": [
                    133
                ]
            }
        },
        {
            "name": "Dominik Sacha",
            "value": 472,
            "numPapers": 84,
            "cluster": "4",
            "visible": 1,
            "index": 137,
            "x": -56.06631330253133,
            "y": 102.98819598702755,
            "vy": 0,
            "vx": 0,
            "r": 1.5434657455382843,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "The Role of Uncertainty, Awareness, and Trust in Visual Analytics",
                "DOI": "10.1109/tvcg.2015.2467591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467591",
                "FirstPage": 240,
                "LastPage": 249,
                "PaperType": "J",
                "Abstract": "Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how human's perceptual and cognitive biases influence the user's awareness of such uncertainties, and how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework to discuss the consequences of these aspects in knowledge construction and though examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making.",
                "AuthorNamesDeduped": "Dominik Sacha;Hansi Senaratne;Bum Chul Kwon;Geoffrey P. Ellis;Daniel A. Keim",
                "AuthorNames": "Dominik Sacha;Hansi Senaratne;Bum Chul Kwon;Geoffrey Ellis;Daniel A. Keim",
                "AuthorAffiliation": "Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/visual.2000.885679;10.1109/vast.2008.4677385;10.1109/vast.2009.5332611;10.1109/tvcg.2012.260;10.1109/vast.2011.6102473;10.1109/vast.2009.5333020;10.1109/vast.2011.6102435;10.1109/tvcg.2012.279;10.1109/tvcg.2014.2346481;10.1109/vast.2006.261416",
                "AuthorKeywords": "Visual Analytics, Knowledge Generation, Uncertainty Measures and Propagation, Trust Building, Human Factors",
                "AminerCitationCount": 261,
                "CitationCountCrossRef": 167,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 3805,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1097,
                "i": [
                    1097
                ]
            }
        },
        {
            "name": "Yanna Lin",
            "value": 86,
            "numPapers": 41,
            "cluster": "5",
            "visible": 1,
            "index": 138,
            "x": -28.328426231590125,
            "y": -114.22565503091396,
            "vy": 0,
            "vx": 0,
            "r": 1.0990213010938399,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InkSight: Leveraging Sketch Interaction for Documenting Chart Findings in Computational Notebooks",
                "DOI": "10.1109/tvcg.2023.3327170",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327170",
                "FirstPage": 944,
                "LastPage": 954,
                "PaperType": "J",
                "Abstract": "Computational notebooks have become increasingly popular for exploratory data analysis due to their ability to support data exploration and explanation within a single document. Effective documentation for explaining chart findings during the exploration process is essential as it helps recall and share data analysis. However, documenting chart findings remains a challenge due to its time-consuming and tedious nature. While existing automatic methods alleviate some of the burden on users, they often fail to cater to users' specific interests. In response to these limitations, we present InkSight, a mixed-initiative computational notebook plugin that generates finding documentation based on the user's intent. InkSight allows users to express their intent in specific data subsets through sketching atop visualizations intuitively. To facilitate this, we designed two types of sketches, i.e., open-path and closed-path sketch. Upon receiving a user's sketch, InkSight identifies the sketch type and corresponding selected data items. Subsequently, it filters data fact types based on the sketch and selected data items before employing existing automatic data fact recommendation algorithms to infer data facts. Using large language models (GPT-3.5), InkSight converts data facts into effective natural language documentation. Users can conveniently fine-tune the generated documentation within InkSight. A user study with 12 participants demonstrated the usability and effectiveness of InkSight in expressing user intent and facilitating chart finding documentation.",
                "AuthorNamesDeduped": "Yanna Lin;Haotian Li 0001;Leni Yang;Aoyu Wu;Huamin Qu",
                "AuthorNames": "Yanna Lin;Haotian Li;Leni Yang;Aoyu Wu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Harvard University, USA;Hong Kong University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2019.2934785;10.1109/tvcg.2021.3114802;10.1109/tvcg.2013.191;10.1109/tvcg.2020.3030378;10.1109/tvcg.2022.3209421;10.1109/tvcg.2020.3030403;10.1109/tvcg.2018.2865145;10.1109/tvcg.2012.275;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114826;10.1109/tvcg.2021.3114774;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Computational Notebook,Sketch-based Interaction,Documentation,Visualization,Exploratory Data Analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 249,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 11,
                "i": [
                    11
                ]
            }
        },
        {
            "name": "Haotian Li 0001",
            "value": 37,
            "numPapers": 30,
            "cluster": "5",
            "visible": 1,
            "index": 139,
            "x": 98.40014422034744,
            "y": 65.3254285666372,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InkSight: Leveraging Sketch Interaction for Documenting Chart Findings in Computational Notebooks",
                "DOI": "10.1109/tvcg.2023.3327170",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327170",
                "FirstPage": 944,
                "LastPage": 954,
                "PaperType": "J",
                "Abstract": "Computational notebooks have become increasingly popular for exploratory data analysis due to their ability to support data exploration and explanation within a single document. Effective documentation for explaining chart findings during the exploration process is essential as it helps recall and share data analysis. However, documenting chart findings remains a challenge due to its time-consuming and tedious nature. While existing automatic methods alleviate some of the burden on users, they often fail to cater to users' specific interests. In response to these limitations, we present InkSight, a mixed-initiative computational notebook plugin that generates finding documentation based on the user's intent. InkSight allows users to express their intent in specific data subsets through sketching atop visualizations intuitively. To facilitate this, we designed two types of sketches, i.e., open-path and closed-path sketch. Upon receiving a user's sketch, InkSight identifies the sketch type and corresponding selected data items. Subsequently, it filters data fact types based on the sketch and selected data items before employing existing automatic data fact recommendation algorithms to infer data facts. Using large language models (GPT-3.5), InkSight converts data facts into effective natural language documentation. Users can conveniently fine-tune the generated documentation within InkSight. A user study with 12 participants demonstrated the usability and effectiveness of InkSight in expressing user intent and facilitating chart finding documentation.",
                "AuthorNamesDeduped": "Yanna Lin;Haotian Li 0001;Leni Yang;Aoyu Wu;Huamin Qu",
                "AuthorNames": "Yanna Lin;Haotian Li;Leni Yang;Aoyu Wu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Harvard University, USA;Hong Kong University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2019.2934785;10.1109/tvcg.2021.3114802;10.1109/tvcg.2013.191;10.1109/tvcg.2020.3030378;10.1109/tvcg.2022.3209421;10.1109/tvcg.2020.3030403;10.1109/tvcg.2018.2865145;10.1109/tvcg.2012.275;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114826;10.1109/tvcg.2021.3114774;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Computational Notebook,Sketch-based Interaction,Documentation,Visualization,Exploratory Data Analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 249,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 11,
                "i": [
                    11
                ]
            }
        },
        {
            "name": "Leni Yang",
            "value": 46,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 140,
            "x": -117.10137193735217,
            "y": 18.36487654164615,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "A Design Space for Applying the Freytag's Pyramid Structure to Data Stories",
                "DOI": "10.1109/tvcg.2021.3114774",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114774",
                "FirstPage": 922,
                "LastPage": 932,
                "PaperType": "J",
                "Abstract": "Data stories integrate compelling visual content to communicate data insights in the form of narratives. The narrative structure of a data story serves as the backbone that determines its expressiveness, and it can largely influence how audiences perceive the insights. Freytag's Pyramid is a classic narrative structure that has been widely used in film and literature. While there are continuous recommendations and discussions about applying Freytag's Pyramid to data stories, little systematic and practical guidance is available on how to use Freytag's Pyramid for creating structured data stories. To bridge this gap, we examined how existing practices apply Freytag's Pyramid by analyzing stories extracted from 103 data videos. Based on our findings, we proposed a design space of narrative patterns, data flows, and visual communications to provide practical guidance on achieving narrative intents, organizing data facts, and selecting visual design techniques through story creation. We evaluated the proposed design space through a workshop with 25 participants. Results show that our design space provides a clear framework for rapid storyboarding of data stories with Freytag's Pyramid.",
                "AuthorNamesDeduped": "Leni Yang;Xian Xu;Xingyu Lan;Ziyan Liu;Shunan Guo;Yang Shi 0007;Huamin Qu;Nan Cao 0001",
                "AuthorNames": "Leni Yang;Xian Xu;XingYu Lan;Ziyan Liu;Shunan Guo;Yang Shi;Huamin Qu;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China and Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Adobe Research, USA;Intelligent Big Data Visualization Lab at Tongji University, China;Hong Kong University of Science and Technology, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2013.124;10.1109/tvcg.2011.175;10.1109/tvcg.2013.119;10.1109/tvcg.2015.2467195;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2020.3030396;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Freytag's Pyramid,Narrative Structure,Narrative Visualization,Data Storytelling,Data Video",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 2247,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 264,
                "i": [
                    264
                ]
            }
        },
        {
            "name": "Fabian Beck 0001",
            "value": 237,
            "numPapers": 50,
            "cluster": "5",
            "visible": 1,
            "index": 141,
            "x": 74.20428223759981,
            "y": -92.97163275753863,
            "vy": 0,
            "vx": 0,
            "r": 1.2728842832469776,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kori: Interactive Synthesis of Text and Charts in Data Documents",
                "DOI": "10.1109/tvcg.2021.3114802",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114802",
                "FirstPage": 184,
                "LastPage": 194,
                "PaperType": "J",
                "Abstract": "Charts go hand in hand with text to communicate complex data and are widely adopted in news articles, online blogs, and academic papers. They provide graphical summaries of the data, while text explains the message and context. However, synthesizing information across text and charts is difficult; it requires readers to frequently shift their attention. We investigated ways to support the tight coupling of text and charts in data documents. To understand their interplay, we analyzed the design space of chart-text references through news articles and scientific papers. Informed by the analysis, we developed a mixed-initiative interface enabling users to construct interactive references between text and charts. It leverages natural language processing to automatically suggest references as well as allows users to manually construct other references effortlessly. A user study complemented with algorithmic evaluation of the system suggests that the interface provides an effective way to compose interactive data documents.",
                "AuthorNamesDeduped": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck 0001;Nam Wook Kim",
                "AuthorNames": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck;Nam Wook Kim",
                "AuthorAffiliation": "University of Duisburg-Essen, Germany;Boston College, USA;Harvard University, USA;University of Duisburg-Essen, Germany;Boston College, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2015.2467732;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2018.2865022;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2011.183;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Data-driven storytelling,interaction design,authoring,visualization-text linking,mixed-initiative interface,interactive documents",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 992,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 268,
                "i": [
                    268
                ]
            }
        },
        {
            "name": "Arpit Narechania",
            "value": 127,
            "numPapers": 60,
            "cluster": "5",
            "visible": 1,
            "index": 142,
            "x": 8.114027623455186,
            "y": 119.09728189898293,
            "vy": 0,
            "vx": 0,
            "r": 1.1462291306850891,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries",
                "DOI": "10.1109/tvcg.2020.3030378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030378",
                "FirstPage": 369,
                "LastPage": 379,
                "PaperType": "J",
                "Abstract": "Natural language interfaces (NLls) have shown great promise for visual data analysis, allowing people to flexibly specify and interact with visualizations. However, developing visualization NLIs remains a challenging task, requiring low-level implementation of natural language processing (NLP) techniques as well as knowledge of visual analytic tasks and visualization design. We present NL4DV, a toolkit for natural language-driven data visualization. NL4DV is a Python package that takes as input a tabular dataset and a natural language query about that dataset. In response, the toolkit returns an analytic specification modeled as a JSON object containing data attributes, analytic tasks, and a list of Vega-Lite specifications relevant to the input query. In doing so, NL4DV aids visualization developers who may not have a background in NLP, enabling them to create new visualization NLIs or incorporate natural language input within their existing systems. We demonstrate NL4DV's usage and capabilities through four examples: 1) rendering visualizations using natural language in a Jupyter notebook, 2) developing a NLI to specify and edit Vega-Lite charts, 3) recreating data ambiguity widgets from the DataTone system, and 4) incorporating speech input to create a multimodal visualization system.",
                "AuthorNamesDeduped": "Arpit Narechania;Arjun Srinivasan;John T. Stasko",
                "AuthorNames": "Arpit Narechania;Arjun Srinivasan;John Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2010.144;10.1109/tvcg.2017.2744684;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/vast47406.2019.8986918;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Natural Language Interfaces,Visualization Toolkits",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 1823,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 363,
                "i": [
                    363
                ]
            }
        },
        {
            "name": "Xinyue Xu",
            "value": 139,
            "numPapers": 27,
            "cluster": "5",
            "visible": 1,
            "index": 143,
            "x": -86.73482973557856,
            "y": -82.62608129846286,
            "vy": 0,
            "vx": 0,
            "r": 1.1600460564191135,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Calliope: Automatic Visual Data Story Generation from a Spreadsheet",
                "DOI": "10.1109/tvcg.2020.3030403",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030403",
                "FirstPage": 453,
                "LastPage": 463,
                "PaperType": "J",
                "Abstract": "Visual data stories shown in the form of narrative visualizations such as a poster or a data video, are frequently used in data-oriented storytelling to facilitate the understanding and memorization of the story content. Although useful, technique barriers, such as data analysis, visualization, and scripting, make the generation of a visual data story difficult. Existing authoring tools rely on users' skills and experiences, which are usually inefficient and still difficult. In this paper, we introduce a novel visual data story generating system, Calliope, which creates visual data stories from an input spreadsheet through an automatic process and facilities the easy revision of the generated story based on an online story editor. Particularly, Calliope incorporates a new logic-oriented Monte Carlo tree search algorithm that explores the data space given by the input spreadsheet to progressively generate story pieces (i.e., data facts) and organize them in a logical order. The importance of data facts is measured based on information theory, and each data fact is visualized in a chart and captioned by an automatically generated description. We evaluate the proposed technique through three example stories, two controlled experiments, and a series of interviews with 10 domain experts. Our evaluation shows that Calliope is beneficial to efficient visual data story generation.",
                "AuthorNamesDeduped": "Danqing Shi;Xinyue Xu;Fuling Sun;Yang Shi 0007;Nan Cao 0001",
                "AuthorNames": "Danqing Shi;Xinyue Xu;Fuling Sun;Yang Shi;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2015.2467732;10.1109/tvcg.2019.2934785;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2019.2934281;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2018.2865232;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Information Visualization,Visual Storytelling,Data Story",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 3040,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 364,
                "i": [
                    364
                ]
            }
        },
        {
            "name": "Fuling Sun",
            "value": 139,
            "numPapers": 27,
            "cluster": "5",
            "visible": 1,
            "index": 144,
            "x": 120.1852680334912,
            "y": 2.345495196728664,
            "vy": 0,
            "vx": 0,
            "r": 1.1600460564191135,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Calliope: Automatic Visual Data Story Generation from a Spreadsheet",
                "DOI": "10.1109/tvcg.2020.3030403",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030403",
                "FirstPage": 453,
                "LastPage": 463,
                "PaperType": "J",
                "Abstract": "Visual data stories shown in the form of narrative visualizations such as a poster or a data video, are frequently used in data-oriented storytelling to facilitate the understanding and memorization of the story content. Although useful, technique barriers, such as data analysis, visualization, and scripting, make the generation of a visual data story difficult. Existing authoring tools rely on users' skills and experiences, which are usually inefficient and still difficult. In this paper, we introduce a novel visual data story generating system, Calliope, which creates visual data stories from an input spreadsheet through an automatic process and facilities the easy revision of the generated story based on an online story editor. Particularly, Calliope incorporates a new logic-oriented Monte Carlo tree search algorithm that explores the data space given by the input spreadsheet to progressively generate story pieces (i.e., data facts) and organize them in a logical order. The importance of data facts is measured based on information theory, and each data fact is visualized in a chart and captioned by an automatically generated description. We evaluate the proposed technique through three example stories, two controlled experiments, and a series of interviews with 10 domain experts. Our evaluation shows that Calliope is beneficial to efficient visual data story generation.",
                "AuthorNamesDeduped": "Danqing Shi;Xinyue Xu;Fuling Sun;Yang Shi 0007;Nan Cao 0001",
                "AuthorNames": "Danqing Shi;Xinyue Xu;Fuling Sun;Yang Shi;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2015.2467732;10.1109/tvcg.2019.2934785;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2019.2934281;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2018.2865232;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Information Visualization,Visual Storytelling,Data Story",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 3040,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 364,
                "i": [
                    364
                ]
            }
        },
        {
            "name": "Danqing Shi",
            "value": 106,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 145,
            "x": -90.51682617978139,
            "y": 79.72894191157462,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Calliope: Automatic Visual Data Story Generation from a Spreadsheet",
                "DOI": "10.1109/tvcg.2020.3030403",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030403",
                "FirstPage": 453,
                "LastPage": 463,
                "PaperType": "J",
                "Abstract": "Visual data stories shown in the form of narrative visualizations such as a poster or a data video, are frequently used in data-oriented storytelling to facilitate the understanding and memorization of the story content. Although useful, technique barriers, such as data analysis, visualization, and scripting, make the generation of a visual data story difficult. Existing authoring tools rely on users' skills and experiences, which are usually inefficient and still difficult. In this paper, we introduce a novel visual data story generating system, Calliope, which creates visual data stories from an input spreadsheet through an automatic process and facilities the easy revision of the generated story based on an online story editor. Particularly, Calliope incorporates a new logic-oriented Monte Carlo tree search algorithm that explores the data space given by the input spreadsheet to progressively generate story pieces (i.e., data facts) and organize them in a logical order. The importance of data facts is measured based on information theory, and each data fact is visualized in a chart and captioned by an automatically generated description. We evaluate the proposed technique through three example stories, two controlled experiments, and a series of interviews with 10 domain experts. Our evaluation shows that Calliope is beneficial to efficient visual data story generation.",
                "AuthorNamesDeduped": "Danqing Shi;Xinyue Xu;Fuling Sun;Yang Shi 0007;Nan Cao 0001",
                "AuthorNames": "Danqing Shi;Xinyue Xu;Fuling Sun;Yang Shi;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University;Intelligent Big Data Visualization Lab, Tongji University",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2015.2467732;10.1109/tvcg.2019.2934785;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2019.2934281;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2018.2865232;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Information Visualization,Visual Storytelling,Data Story",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 3040,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 364,
                "i": [
                    364
                ]
            }
        },
        {
            "name": "Xiaojuan Ma",
            "value": 180,
            "numPapers": 72,
            "cluster": "4",
            "visible": 1,
            "index": 146,
            "x": 12.932377520281568,
            "y": -120.34431275167479,
            "vy": 0,
            "vx": 0,
            "r": 1.2072538860103628,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "DataShot: Automatic Generation of Fact Sheets from Tabular Data",
                "DOI": "10.1109/tvcg.2019.2934398",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934398",
                "FirstPage": 895,
                "LastPage": 905,
                "PaperType": "J",
                "Abstract": "Fact sheets with vivid graphical design and intriguing statistical insights are prevalent for presenting raw data. They help audiences understand data-related facts effectively and make a deep impression. However, designing a fact sheet requires both data and design expertise and is a laborious and time-consuming process. One needs to not only understand the data in depth but also produce intricate graphical representations. To assist in the design process, we present DataShot which, to the best of our knowledge, is the first automated system that creates fact sheets automatically from tabular data. First, we conduct a qualitative analysis of 245 infographic examples to explore general infographic design space at both the sheet and element levels. We identify common infographic structures, sheet layouts, fact types, and visualization styles during the study. Based on these findings, we propose a fact sheet generation pipeline, consisting of fact extraction, fact composition, and presentation synthesis, for the auto-generation workflow. To validate our system, we present use cases with three real-world datasets. We conduct an in-lab user study to understand the usage of our system. Our evaluation results show that DataShot can efficiently generate satisfactory fact sheets to support further customization and data presentation.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhida Sun;Haidong Zhang;Weiwei Cui;Ke Xu;Xiaojuan Ma;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhida Sun;Haidong Zhang;Weiwei Cui;Ke Xu;Xiaojuan Ma;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Microsoft Research Asia;Microsoft Research Asia;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Microsoft Research Asia",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2016.2598647;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2013.119;10.1109/tvcg.2016.2598620;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Fact sheet,infographic,visualization,and automated design",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 76,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2498,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 519,
                "i": [
                    519
                ]
            }
        },
        {
            "name": "Zhida Sun",
            "value": 112,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 147,
            "x": 71.99996529155578,
            "y": 97.80595584122044,
            "vy": 0,
            "vx": 0,
            "r": 1.128957973517559,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "DataShot: Automatic Generation of Fact Sheets from Tabular Data",
                "DOI": "10.1109/tvcg.2019.2934398",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934398",
                "FirstPage": 895,
                "LastPage": 905,
                "PaperType": "J",
                "Abstract": "Fact sheets with vivid graphical design and intriguing statistical insights are prevalent for presenting raw data. They help audiences understand data-related facts effectively and make a deep impression. However, designing a fact sheet requires both data and design expertise and is a laborious and time-consuming process. One needs to not only understand the data in depth but also produce intricate graphical representations. To assist in the design process, we present DataShot which, to the best of our knowledge, is the first automated system that creates fact sheets automatically from tabular data. First, we conduct a qualitative analysis of 245 infographic examples to explore general infographic design space at both the sheet and element levels. We identify common infographic structures, sheet layouts, fact types, and visualization styles during the study. Based on these findings, we propose a fact sheet generation pipeline, consisting of fact extraction, fact composition, and presentation synthesis, for the auto-generation workflow. To validate our system, we present use cases with three real-world datasets. We conduct an in-lab user study to understand the usage of our system. Our evaluation results show that DataShot can efficiently generate satisfactory fact sheets to support further customization and data presentation.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhida Sun;Haidong Zhang;Weiwei Cui;Ke Xu;Xiaojuan Ma;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhida Sun;Haidong Zhang;Weiwei Cui;Ke Xu;Xiaojuan Ma;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Microsoft Research Asia;Microsoft Research Asia;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Microsoft Research Asia",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2016.2598647;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2013.119;10.1109/tvcg.2016.2598620;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Fact sheet,infographic,visualization,and automated design",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 76,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2498,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 519,
                "i": [
                    519
                ]
            }
        },
        {
            "name": "Ke Xu",
            "value": 254,
            "numPapers": 63,
            "cluster": "3",
            "visible": 1,
            "index": 148,
            "x": -119.56074896914863,
            "y": -23.56326178474077,
            "vy": 0,
            "vx": 0,
            "r": 1.2924582613701785,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "DataShot: Automatic Generation of Fact Sheets from Tabular Data",
                "DOI": "10.1109/tvcg.2019.2934398",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934398",
                "FirstPage": 895,
                "LastPage": 905,
                "PaperType": "J",
                "Abstract": "Fact sheets with vivid graphical design and intriguing statistical insights are prevalent for presenting raw data. They help audiences understand data-related facts effectively and make a deep impression. However, designing a fact sheet requires both data and design expertise and is a laborious and time-consuming process. One needs to not only understand the data in depth but also produce intricate graphical representations. To assist in the design process, we present DataShot which, to the best of our knowledge, is the first automated system that creates fact sheets automatically from tabular data. First, we conduct a qualitative analysis of 245 infographic examples to explore general infographic design space at both the sheet and element levels. We identify common infographic structures, sheet layouts, fact types, and visualization styles during the study. Based on these findings, we propose a fact sheet generation pipeline, consisting of fact extraction, fact composition, and presentation synthesis, for the auto-generation workflow. To validate our system, we present use cases with three real-world datasets. We conduct an in-lab user study to understand the usage of our system. Our evaluation results show that DataShot can efficiently generate satisfactory fact sheets to support further customization and data presentation.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhida Sun;Haidong Zhang;Weiwei Cui;Ke Xu;Xiaojuan Ma;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhida Sun;Haidong Zhang;Weiwei Cui;Ke Xu;Xiaojuan Ma;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Microsoft Research Asia;Microsoft Research Asia;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Department of Computer Science and Engineering, Hong Kong University of Science and Technology;Microsoft Research Asia",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2016.2598647;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2013.119;10.1109/tvcg.2016.2598620;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Fact sheet,infographic,visualization,and automated design",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 76,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2498,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 519,
                "i": [
                    519
                ]
            }
        },
        {
            "name": "Chenhui Li",
            "value": 12,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 149,
            "x": 104.42697004516829,
            "y": -63.60037678493361,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InvVis: Large-Scale Data Embedding for Invertible Visualization",
                "DOI": "10.1109/tvcg.2023.3326597",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326597",
                "FirstPage": 1139,
                "LastPage": 1149,
                "PaperType": "J",
                "Abstract": "We present InvVis, a new approach for invertible visualization, which is reconstructing or further modifying a visualization from an image. InvVis allows the embedding of a significant amount of data, such as chart data, chart information, source code, etc., into visualization images. The encoded image is perceptually indistinguishable from the original one. We propose a new method to efficiently express chart data in the form of images, enabling large-capacity data embedding. We also outline a model based on the invertible neural network to achieve high-quality data concealing and revealing. We explore and implement a variety of application scenarios of InvVis. Additionally, we conduct a series of evaluation experiments to assess our method from multiple perspectives, including data embedding quality, data restoration accuracy, data encoding capacity, etc. The result of our experiments demonstrates the great potential of InvVis in invertible visualization.",
                "AuthorNamesDeduped": "Huayuan Ye;Chenhui Li;Yang Li;Changbo Wang",
                "AuthorNames": "Huayuan Ye;Chenhui Li;Yang Li;Changbo Wang",
                "AuthorAffiliation": "School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China",
                "InternalReferences": "10.1109/tvcg.2019.2934810;10.1109/tvcg.2020.3030351;10.1109/tvcg.2017.2744320;10.1109/tvcg.2020.3030343",
                "AuthorKeywords": "Information visualization,information steganography,invertible visualization,invertible neural network",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 218,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 12,
                "i": [
                    12
                ]
            }
        },
        {
            "name": "Changbo Wang",
            "value": 12,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 150,
            "x": -34.15341612383254,
            "y": 117.82845228157899,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InvVis: Large-Scale Data Embedding for Invertible Visualization",
                "DOI": "10.1109/tvcg.2023.3326597",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326597",
                "FirstPage": 1139,
                "LastPage": 1149,
                "PaperType": "J",
                "Abstract": "We present InvVis, a new approach for invertible visualization, which is reconstructing or further modifying a visualization from an image. InvVis allows the embedding of a significant amount of data, such as chart data, chart information, source code, etc., into visualization images. The encoded image is perceptually indistinguishable from the original one. We propose a new method to efficiently express chart data in the form of images, enabling large-capacity data embedding. We also outline a model based on the invertible neural network to achieve high-quality data concealing and revealing. We explore and implement a variety of application scenarios of InvVis. Additionally, we conduct a series of evaluation experiments to assess our method from multiple perspectives, including data embedding quality, data restoration accuracy, data encoding capacity, etc. The result of our experiments demonstrates the great potential of InvVis in invertible visualization.",
                "AuthorNamesDeduped": "Huayuan Ye;Chenhui Li;Yang Li;Changbo Wang",
                "AuthorNames": "Huayuan Ye;Chenhui Li;Yang Li;Changbo Wang",
                "AuthorAffiliation": "School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China;School of Computer Science and Technology, East China Normal University, China",
                "InternalReferences": "10.1109/tvcg.2019.2934810;10.1109/tvcg.2020.3030351;10.1109/tvcg.2017.2744320;10.1109/tvcg.2020.3030343",
                "AuthorKeywords": "Information visualization,information steganography,invertible visualization,invertible neural network",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 218,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 12,
                "i": [
                    12
                ]
            }
        },
        {
            "name": "Steven Franconeri",
            "value": 608,
            "numPapers": 131,
            "cluster": "5",
            "visible": 1,
            "index": 151,
            "x": -54.588769008657515,
            "y": -110.3180234509277,
            "vy": 0,
            "vx": 0,
            "r": 1.7000575705238918,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Taking Word Clouds Apart: An Empirical Investigation of the Design Space for Keyword Summaries",
                "DOI": "10.1109/tvcg.2017.2746018",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2746018",
                "FirstPage": 657,
                "LastPage": 666,
                "PaperType": "J",
                "Abstract": "In this paper we present a set of four user studies aimed at exploring the visual design space of what we call keyword summaries: lists of words with associated quantitative values used to help people derive an intuition of what information a given document collection (or part of it) may contain. We seek to systematically study how different visual representations may affect people's performance in extracting information out of keyword summaries. To this purpose, we first create a design space of possible visual representations and compare the possible solutions in this design space through a variety of representative tasks and performance metrics. Other researchers have, in the past, studied some aspects of effectiveness with word clouds, however, the existing literature is somewhat scattered and do not seem to address the problem in a sufficiently systematic and holistic manner. The results of our studies showed a strong dependency on the tasks users are performing. In this paper we present details of our methodology, the results, as well as, guidelines on how to design effective keyword summaries based in our discoveries.",
                "AuthorNamesDeduped": "Cristian Felix;Steven Franconeri;Enrico Bertini",
                "AuthorNames": "Cristian Felix;Steven Franconeri;Enrico Bertini",
                "AuthorAffiliation": "New York University;Northwestern University;New York University",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/tvcg.2016.2598447;10.1109/tvcg.2011.176;10.1109/tvcg.2010.194;10.1109/tvcg.2009.165;10.1109/tvcg.2009.171",
                "AuthorKeywords": "Word Clouds,Tag Clouds,Text Visualization,Keyword Summaries",
                "AminerCitationCount": 78,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1166,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 788,
                "i": [
                    788
                ]
            }
        },
        {
            "name": "Fumeng Yang",
            "value": 143,
            "numPapers": 39,
            "cluster": "5",
            "visible": 1,
            "index": 152,
            "x": 115.14897204778009,
            "y": 44.617420771931215,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms",
                "DOI": "10.1109/tvcg.2023.3327356",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327356",
                "FirstPage": 23,
                "LastPage": 33,
                "PaperType": "J",
                "Abstract": "We conducted a longitudinal study during the 2022 U.S. midterm elections, investigating the real-world impacts of uncertainty visualizations. Using our forecast model of the governor elections in 33 states, we created a website and deployed four uncertainty visualizations for the election forecasts: single quantile dotplot (1-Dotplot), dual quantile dotplots (2-Dotplot), dual histogram intervals (2-Interval), and Plinko quantile dotplot (Plinko), an animated design with a physical and probabilistic analogy. Our online experiment ran from Oct. 18, 2022, to Nov. 23, 2022, involving 1,327 participants from 15 states. We use Bayesian multilevel modeling and post-stratification to produce demographically-representative estimates of people's emotions, trust in forecasts, and political participation intention. We find that election forecast visualizations can heighten emotions, increase trust, and slightly affect people's intentions to participate in elections. 2-Interval shows the strongest effects across all measures; 1-Dotplot increases trust the most after elections. Both visualizations create emotional and trust gaps between different partisan identities, especially when a Republican candidate is predicted to win. Our qualitative analysis uncovers the complex political and social contexts of election forecast visualizations, showcasing that visualizations may provoke polarization. This intriguing interplay between visualization types, partisanship, and trust exemplifies the fundamental challenge of disentangling visualization from its context, underscoring a need for deeper investigation into the real-world impacts of visualizations. Our preprint and supplements are available at https://doi.org/osf.io/ajq8f.",
                "AuthorNamesDeduped": "Fumeng Yang;Mandi Cai;Chloe Mortenson;Hoda Fakhari;Ayse D. Lokmanoglu;Jessica Hullman;Steven Franconeri;Nicholas Diakopoulos;Erik C. Nisbet;Matthew Kay 0001",
                "AuthorNames": "Fumeng Yang;Mandi Cai;Chloe Mortenson;Hoda Fakhari;Ayse D. Lokmanoglu;Jessica Hullman;Steven Franconeri;Nicholas Diakopoulos;Erik C. Nisbet;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2020.3030335;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209457;10.1109/tvcg.2022.3209348;10.1109/tvcg.2022.3209383;10.1109/tvcg.2021.3114679",
                "AuthorKeywords": "Uncertainty visualization,Probabilistic forecasts,Elections,Emotions,Trust,Political participation,Longitudinal study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 481,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 14,
                "i": [
                    14
                ]
            }
        },
        {
            "name": "Vidya Setlur",
            "value": 270,
            "numPapers": 82,
            "cluster": "5",
            "visible": 1,
            "index": 153,
            "x": -115.42248635819374,
            "y": 45.02943085241671,
            "vy": 0,
            "vx": 0,
            "r": 1.310880829015544,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MEDLEY: Intent-based Recommendations to Support Dashboard Composition",
                "DOI": "10.1109/tvcg.2022.3209421",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209421",
                "FirstPage": 1135,
                "LastPage": 1145,
                "PaperType": "J",
                "Abstract": "Despite the ever-growing popularity of dashboards across a wide range of domains, their authoring still remains a tedious and complex process. Current tools offer considerable support for creating individual visualizations but provide limited support for discovering groups of visualizations that can be collectively useful for composing analytic dashboards. To address this problem, we present Medley, a mixed-initiative interface that assists in dashboard composition by recommending dashboard collections (i.e., a logically grouped set of views and filtering widgets) that map to specific analytical intents. Users can specify dashboard intents (namely, measure analysis, change analysis, category analysis, or distribution analysis) explicitly through an input panel in the interface or implicitly by selecting data attributes and views of interest. The system recommends collections based on these analytic intents, and views and widgets can be selected to compose a variety of dashboards. Medley also provides a lightweight direct manipulation interface to configure interactions between views in a dashboard. Based on a study with 13 participants performing both targeted and open-ended tasks, we discuss how Medley's recommendations guide dashboard composition and facilitate different user workflows. Observations from the study identify potential directions for future work, including combining manual view specification with dashboard recommendations and designing natural language interfaces for dashboard authoring.",
                "AuthorNamesDeduped": "Aditeya Pandey;Arjun Srinivasan;Vidya Setlur",
                "AuthorNames": "Aditeya Pandey;Arjun Srinivasan;Vidya Setlur",
                "AuthorAffiliation": "Northeastern University, USA;Tableau Research, Germany;Tableau Research, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2020.3030338;10.1109/tvcg.2020.3030424;10.1109/tvcg.2021.3114860;10.1109/tvcg.2021.3114848;10.1109/tvcg.2007.70594;10.1109/tvcg.2020.3030378;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2017.2744184;10.1109/tvcg.2016.2599030;10.1109/tvcg.2013.120;10.1109/tvcg.2018.2865145;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2021.3114826",
                "AuthorKeywords": "Dashboards,intent,recommendations,direct manipulation,multi-view coordination",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 1038,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 155,
                "i": [
                    155
                ]
            }
        },
        {
            "name": "Ryan A. Rossi",
            "value": 19,
            "numPapers": 62,
            "cluster": "5",
            "visible": 1,
            "index": 154,
            "x": 54.86986642489152,
            "y": -111.53159982047492,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots",
                "DOI": "10.1109/tvcg.2022.3209348",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209348",
                "FirstPage": 602,
                "LastPage": 612,
                "PaperType": "J",
                "Abstract": "Most real-world datasets contain missing values yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduces additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, which is a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/",
                "AuthorNamesDeduped": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan A. Rossi;Fan Du;Eunyee Koh;Matthew Kay 0001",
                "AuthorNames": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan Rossi;Fan Du;Eunyee Koh;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3029413;10.1109/tvcg.2011.175;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2012.279;10.1109/tvcg.2021.3114684;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864914",
                "AuthorKeywords": "Uncertainty visualisations,missing values,data imputation,multivariate data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 533,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 172,
                "i": [
                    172
                ]
            }
        },
        {
            "name": "Jane Hoffswell",
            "value": 177,
            "numPapers": 72,
            "cluster": "5",
            "visible": 1,
            "index": 155,
            "x": 34.99187688452936,
            "y": 119.68946717275476,
            "vy": 0,
            "vx": 0,
            "r": 1.2037996545768566,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots",
                "DOI": "10.1109/tvcg.2022.3209348",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209348",
                "FirstPage": 602,
                "LastPage": 612,
                "PaperType": "J",
                "Abstract": "Most real-world datasets contain missing values yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduces additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, which is a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/",
                "AuthorNamesDeduped": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan A. Rossi;Fan Du;Eunyee Koh;Matthew Kay 0001",
                "AuthorNames": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan Rossi;Fan Du;Eunyee Koh;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3029413;10.1109/tvcg.2011.175;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2012.279;10.1109/tvcg.2021.3114684;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864914",
                "AuthorKeywords": "Uncertainty visualisations,missing values,data imputation,multivariate data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 533,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 172,
                "i": [
                    172
                ]
            }
        },
        {
            "name": "Eunyee Koh",
            "value": 56,
            "numPapers": 49,
            "cluster": "5",
            "visible": 1,
            "index": 156,
            "x": -106.99337407567009,
            "y": -64.82605883364907,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots",
                "DOI": "10.1109/tvcg.2022.3209348",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209348",
                "FirstPage": 602,
                "LastPage": 612,
                "PaperType": "J",
                "Abstract": "Most real-world datasets contain missing values yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduces additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, which is a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/",
                "AuthorNamesDeduped": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan A. Rossi;Fan Du;Eunyee Koh;Matthew Kay 0001",
                "AuthorNames": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan Rossi;Fan Du;Eunyee Koh;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3029413;10.1109/tvcg.2011.175;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2012.279;10.1109/tvcg.2021.3114684;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864914",
                "AuthorKeywords": "Uncertainty visualisations,missing values,data imputation,multivariate data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 533,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 172,
                "i": [
                    172
                ]
            }
        },
        {
            "name": "Gromit Yeuk-Yin Chan",
            "value": 32,
            "numPapers": 45,
            "cluster": "5",
            "visible": 1,
            "index": 157,
            "x": 123.07429195380011,
            "y": -24.550329123471442,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Socrates: Data Story Generation via Adaptive Machine-Guided Elicitation of User Feedback",
                "DOI": "10.1109/tvcg.2023.3327363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327363",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Visual data stories can effectively convey insights from data, yet their creation often necessitates intricate data exploration, insight discovery, narrative organization, and customization to meet the communication objectives of the storyteller. Existing automated data storytelling techniques, however, tend to overlook the importance of user customization during the data story authoring process, limiting the system's ability to create tailored narratives that reflect the user's intentions. We present a novel data story generation workflow that leverages adaptive machine-guided elicitation of user feedback to customize the story. Our approach employs an adaptive plug-in module for existing story generation systems, which incorporates user feedback through interactive questioning based on the conversation history and dataset. This adaptability refines the system's understanding of the user's intentions, ensuring the final narrative aligns with their goals. We demonstrate the feasibility of our approach through the implementation of an interactive prototype: Socrates. Through a quantitative user study with 18 participants that compares our method to a state-of-the-art data story generation algorithm, we show that Socrates produces more relevant stories with a larger overlap of insights compared to human-generated stories. We also demonstrate the usability of Socrates via interviews with three data analysts and highlight areas of future work.",
                "AuthorNamesDeduped": "Guande Wu;Shunan Guo;Jane Hoffswell;Gromit Yeuk-Yin Chan;Ryan A. Rossi;Eunyee Koh",
                "AuthorNames": "Guande Wu;Shunan Guo;Jane Hoffswell;Gromit Yeuk-Yin Chan;Ryan A. Rossi;Eunyee Koh",
                "AuthorAffiliation": "New York University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA",
                "InternalReferences": "10.1109/tvcg.2016.2598647;10.1109/tvcg.2015.2467732;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2016.2598468;10.1109/tvcg.2021.3114804;10.1109/tvcg.2021.3114806;10.1109/vast.2015.7347625;10.1109/tvcg.2019.2934785;10.1109/tvcg.2012.260;10.1109/tvcg.2013.119;10.1109/tvcg.2021.3114802;10.1109/tvcg.2022.3209421;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2022.3209428;10.1109/tvcg.2020.3030467;10.1109/tvcg.2017.2745078;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114826;10.1109/tvcg.2021.3114774",
                "AuthorKeywords": "Narrative visualization,visual storytelling,conversational agent",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 348,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 15,
                "i": [
                    15
                ]
            }
        },
        {
            "name": "Matthew Brehmer",
            "value": 720,
            "numPapers": 155,
            "cluster": "5",
            "visible": 1,
            "index": 158,
            "x": -74.40272381815424,
            "y": 101.55902071425987,
            "vy": 0,
            "vx": 0,
            "r": 1.8290155440414506,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Critical Reflections on Visualization Authoring Systems",
                "DOI": "10.1109/tvcg.2019.2934281",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934281",
                "FirstPage": 461,
                "LastPage": 471,
                "PaperType": "J",
                "Abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed —Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John T. Stasko;John Thompson 0002;Matthew Brehmer;Zhicheng Liu 0001",
                "AuthorNames": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John Stasko;John Thompson;Matthew Brehmer;Zhicheng Liu",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Microsoft Research;University of California, Santa Barbara;University of Washington;Georgia Institute of Technology;Georgia Institute of Technology;Microsoft Research;Adobe Research",
                "InternalReferences": "0.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Critical reflection,visualization authoring,expressivity,learnability,reusability",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 529,
                "i": [
                    529
                ]
            }
        },
        {
            "name": "Guande Wu",
            "value": 0,
            "numPapers": 35,
            "cluster": "5",
            "visible": 1,
            "index": 159,
            "x": -13.78315511854506,
            "y": -125.53893672872223,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ARGUS: Visualization of AI-Assisted Task Guidance in AR",
                "DOI": "10.1109/tvcg.2023.3327396",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327396",
                "FirstPage": 1313,
                "LastPage": 1323,
                "PaperType": "J",
                "Abstract": "The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant.",
                "AuthorNamesDeduped": "Sonia Castelo;João Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Irán R. Román;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He 0001;Qi Sun 0003;Huy T. Vo;Juan Pablo Bello;Michael Krone;Cláudio T. Silva",
                "AuthorNames": "Sonia Castelo;Joao Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Iran Roman;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He;Qi Sun;Huy Vo;Juan Bello;Michael Krone;Claudio Silva",
                "AuthorAffiliation": "New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York",
                "InternalReferences": "10.1109/tvcg.2017.2746018;10.1109/tvcg.2018.2865152;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Data Models,Image and Video Data,Temporal Data,Application Motivated Visualization,AR/VR/Immersive",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 467,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 5,
                "i": [
                    5
                ]
            }
        },
        {
            "name": "Furui Cheng",
            "value": 154,
            "numPapers": 93,
            "cluster": "1",
            "visible": 1,
            "index": 160,
            "x": 95.2608293393903,
            "y": 83.51870684805623,
            "vy": 0,
            "vx": 0,
            "r": 1.1773172135866437,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Leveraging Historical Medical Records as a Proxy via Multimodal Modeling and Visualization to Enrich Medical Diagnostic Learning",
                "DOI": "10.1109/tvcg.2023.3326929",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326929",
                "FirstPage": 1238,
                "LastPage": 1248,
                "PaperType": "J",
                "Abstract": "Simulation-based Medical Education (SBME) has been developed as a cost-effective means of enhancing the diagnostic skills of novice physicians and interns, thereby mitigating the need for resource-intensive mentor-apprentice training. However, feedback provided in most SBME is often directed towards improving the operational proficiency of learners, rather than providing summative medical diagnoses that result from experience and time. Additionally, the multimodal nature of medical data during diagnosis poses significant challenges for interns and novice physicians, including the tendency to overlook or over-rely on data from certain modalities, and difficulties in comprehending potential associations between modalities. To address these challenges, we present DiagnosisAssistant, a visual analytics system that leverages historical medical records as a proxy for multimodal modeling and visualization to enhance the learning experience of interns and novice physicians. The system employs elaborately designed visualizations to explore different modality data, offer diagnostic interpretive hints based on the constructed model, and enable comparative analyses of specific patients. Our approach is validated through two case studies and expert interviews, demonstrating its effectiveness in enhancing medical training.",
                "AuthorNamesDeduped": "Yang Ouyang;Yuchen Wu;He Wang;Chenyang Zhang;Furui Cheng;Chang Jiang;Lixia Jin;Yuanwu Cao;Quan Li",
                "AuthorNames": "Yang Ouyang;Yuchen Wu;He Wang;Chenyang Zhang;Furui Cheng;Chang Jiang;Lixia Jin;Yuanwu Cao;Quan Li",
                "AuthorAffiliation": "School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;Department of Computer Science, University of Illinois at Urbana-Champaign, USA;Department of Computer Science, ETH Zürich, Switzerland;Zhongshan Hospital Fudan University, China;Zhongshan Hospital Fudan University, China;Zhongshan Hospital Fudan University, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China",
                "InternalReferences": "10.1109/tvcg.2020.3030437;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802454;10.1109/tvcg.2021.3114840",
                "AuthorKeywords": "Multimodal Medical Dataset,Visual Analytics,Explainable Machine Learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 310,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 16,
                "i": [
                    16
                ]
            }
        },
        {
            "name": "Min-Je Choi",
            "value": 117,
            "numPapers": 14,
            "cluster": "4",
            "visible": 1,
            "index": 161,
            "x": -127.05240862534943,
            "y": 2.7722666713780066,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Joanne Taery Kim",
            "value": 117,
            "numPapers": 14,
            "cluster": "4",
            "visible": 1,
            "index": 162,
            "x": 92.09566134365191,
            "y": -88.13846584593686,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Edward Choi",
            "value": 117,
            "numPapers": 14,
            "cluster": "4",
            "visible": 1,
            "index": 163,
            "x": -8.397516115564207,
            "y": 127.59107227031537,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Young Bin Kim",
            "value": 117,
            "numPapers": 14,
            "cluster": "4",
            "visible": 1,
            "index": 164,
            "x": -80.23872238979536,
            "y": -100.05871990612988,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Soonwook Kwon",
            "value": 117,
            "numPapers": 14,
            "cluster": "4",
            "visible": 1,
            "index": 165,
            "x": 127.13891833263267,
            "y": 19.639130459573824,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Jimeng Sun 0001",
            "value": 232,
            "numPapers": 42,
            "cluster": "1",
            "visible": 1,
            "index": 166,
            "x": -107.33714278984816,
            "y": 71.61520633016248,
            "vy": 0,
            "vx": 0,
            "r": 1.2671272308578008,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Jaegul Choo",
            "value": 652,
            "numPapers": 101,
            "cluster": "1",
            "visible": 1,
            "index": 167,
            "x": 30.863960905404966,
            "y": -125.68777155009803,
            "vy": 0,
            "vx": 0,
            "r": 1.750719631548647,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records",
                "DOI": "10.1109/tvcg.2018.2865027",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865027",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun 0001;Jaegul Choo",
                "AuthorNames": "Bum Chul Kwon;Min-Je Choi;Joanne Taery Kim;Edward Choi;Young Bin Kim;Soonwook Kwon;Jimeng Sun;Jaegul Choo",
                "AuthorAffiliation": "IBM T.J. Watson Research Center, Korea University;Korea University, Seongbuk-gu, Seoul, KR;Korea University, Seongbuk-gu, Seoul, KR;Georgia Institute of Technology, Atlanta, GA, US;Chung-Ang University, Seoul, Seoul, KR;Catholic University of Daegu, Gyeongsan, Gyeongsangbuk-do, KR;Georgia Institute of Technology, Atlanta, GA, US;Korea University, Seongbuk-gu, Seoul, KR",
                "InternalReferences": "0.1109/tvcg.2013.212;10.1109/tvcg.2017.2745080;10.1109/tvcg.2012.277;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467555;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158;10.1109/tvcg.2017.2744878;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Interactive Artificial Intelligence,XAI (Explainable Artificial Intelligence),Interpretable Deep Learning,Healthcare",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 168,
                "PubsCitedCrossRef": 85,
                "DownloadsXplore": 4161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 726,
                "i": [
                    726
                ]
            }
        },
        {
            "name": "Yuchen Wu",
            "value": 0,
            "numPapers": 12,
            "cluster": "4",
            "visible": 1,
            "index": 168,
            "x": 62.32797025323594,
            "y": 113.8649380806565,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Leveraging Historical Medical Records as a Proxy via Multimodal Modeling and Visualization to Enrich Medical Diagnostic Learning",
                "DOI": "10.1109/tvcg.2023.3326929",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326929",
                "FirstPage": 1238,
                "LastPage": 1248,
                "PaperType": "J",
                "Abstract": "Simulation-based Medical Education (SBME) has been developed as a cost-effective means of enhancing the diagnostic skills of novice physicians and interns, thereby mitigating the need for resource-intensive mentor-apprentice training. However, feedback provided in most SBME is often directed towards improving the operational proficiency of learners, rather than providing summative medical diagnoses that result from experience and time. Additionally, the multimodal nature of medical data during diagnosis poses significant challenges for interns and novice physicians, including the tendency to overlook or over-rely on data from certain modalities, and difficulties in comprehending potential associations between modalities. To address these challenges, we present DiagnosisAssistant, a visual analytics system that leverages historical medical records as a proxy for multimodal modeling and visualization to enhance the learning experience of interns and novice physicians. The system employs elaborately designed visualizations to explore different modality data, offer diagnostic interpretive hints based on the constructed model, and enable comparative analyses of specific patients. Our approach is validated through two case studies and expert interviews, demonstrating its effectiveness in enhancing medical training.",
                "AuthorNamesDeduped": "Yang Ouyang;Yuchen Wu;He Wang;Chenyang Zhang;Furui Cheng;Chang Jiang;Lixia Jin;Yuanwu Cao;Quan Li",
                "AuthorNames": "Yang Ouyang;Yuchen Wu;He Wang;Chenyang Zhang;Furui Cheng;Chang Jiang;Lixia Jin;Yuanwu Cao;Quan Li",
                "AuthorAffiliation": "School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;Department of Computer Science, University of Illinois at Urbana-Champaign, USA;Department of Computer Science, ETH Zürich, Switzerland;Zhongshan Hospital Fudan University, China;Zhongshan Hospital Fudan University, China;Zhongshan Hospital Fudan University, China;School of Information Science and Technology, ShanghaiTech University, and Shanghai Engineering Research Center of Intelligent Vision and Imaging, China",
                "InternalReferences": "10.1109/tvcg.2020.3030437;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802454;10.1109/tvcg.2021.3114840",
                "AuthorKeywords": "Multimodal Medical Dataset,Visual Analytics,Explainable Machine Learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 310,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 16,
                "i": [
                    16
                ]
            }
        },
        {
            "name": "Nivan Ferreira",
            "value": 324,
            "numPapers": 82,
            "cluster": "3",
            "visible": 1,
            "index": 169,
            "x": -123.23743605925385,
            "y": -41.98254820209602,
            "vy": 0,
            "vx": 0,
            "r": 1.3730569948186528,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics",
                "DOI": "10.1109/tvcg.2023.3326598",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326598",
                "FirstPage": 1402,
                "LastPage": 1412,
                "PaperType": "J",
                "Abstract": "While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.",
                "AuthorNamesDeduped": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001",
                "AuthorNames": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda",
                "AuthorAffiliation": "University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA",
                "InternalReferences": "10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Urban visual analytics,Urban analytics,Urban data,Visualization toolkit",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 17,
                "i": [
                    17
                ]
            }
        },
        {
            "name": "Gennady L. Andrienko",
            "value": 714,
            "numPapers": 90,
            "cluster": "3",
            "visible": 1,
            "index": 170,
            "x": 119.58144824212677,
            "y": -52.44308568644255,
            "vy": 0,
            "vx": 0,
            "r": 1.8221070811744386,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "Interactive visual clustering of large collections of trajectories",
                "DOI": "10.1109/vast.2009.5332584",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5332584",
                "FirstPage": 3,
                "LastPage": 10,
                "PaperType": "C",
                "Abstract": "One of the most common operations in exploration and analysis of various kinds of data is clustering, i.e. discovery and interpretation of groups of objects having similar properties and/or behaviors. In clustering, objects are often treated as points in multi-dimensional space of properties. However, structurally complex objects, such as trajectories of moving entities and other kinds of spatio-temporal data, cannot be adequately represented in this manner. Such data require sophisticated and computationally intensive clustering algorithms, which are very hard to scale effectively to large datasets not fitting in the computer main memory. We propose an approach to extracting meaningful clusters from large databases by combining clustering and classification, which are driven by a human analyst through an interactive visual interface.",
                "AuthorNamesDeduped": "Gennady L. Andrienko;Natalia V. Andrienko;Salvatore Rinzivillo;Mirco Nanni;Dino Pedreschi;Fosca Giannotti",
                "AuthorNames": "Gennady Andrienko;Natalia Andrienko;Salvatore Rinzivillo;Mirco Nanni;Dino Pedreschi;Fosca Giannotti",
                "AuthorAffiliation": "Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany;Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany;KDD Lab-ISTI-CNR, Pisa, Italy;KDD Lab-ISTI-CNR, Pisa, Italy;University of Pisa, Pisa, Italy;KDD Lab-ISTI-CNR, Pisa, Italy",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/vast.2007.4388999",
                "AuthorKeywords": "Spatio-temporal data, movement data, trajectories, clustering, classification, scalable visualization, geovisualization",
                "AminerCitationCount": 291,
                "CitationCountCrossRef": 138,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 1578,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1852,
                "i": [
                    1852
                ]
            }
        },
        {
            "name": "Natalia V. Andrienko",
            "value": 623,
            "numPapers": 84,
            "cluster": "3",
            "visible": 1,
            "index": 171,
            "x": -52.90531133894502,
            "y": 119.79577635346455,
            "vy": 0,
            "vx": 0,
            "r": 1.717328727691422,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "Interactive visual clustering of large collections of trajectories",
                "DOI": "10.1109/vast.2009.5332584",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5332584",
                "FirstPage": 3,
                "LastPage": 10,
                "PaperType": "C",
                "Abstract": "One of the most common operations in exploration and analysis of various kinds of data is clustering, i.e. discovery and interpretation of groups of objects having similar properties and/or behaviors. In clustering, objects are often treated as points in multi-dimensional space of properties. However, structurally complex objects, such as trajectories of moving entities and other kinds of spatio-temporal data, cannot be adequately represented in this manner. Such data require sophisticated and computationally intensive clustering algorithms, which are very hard to scale effectively to large datasets not fitting in the computer main memory. We propose an approach to extracting meaningful clusters from large databases by combining clustering and classification, which are driven by a human analyst through an interactive visual interface.",
                "AuthorNamesDeduped": "Gennady L. Andrienko;Natalia V. Andrienko;Salvatore Rinzivillo;Mirco Nanni;Dino Pedreschi;Fosca Giannotti",
                "AuthorNames": "Gennady Andrienko;Natalia Andrienko;Salvatore Rinzivillo;Mirco Nanni;Dino Pedreschi;Fosca Giannotti",
                "AuthorAffiliation": "Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany;Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany;KDD Lab-ISTI-CNR, Pisa, Italy;KDD Lab-ISTI-CNR, Pisa, Italy;University of Pisa, Pisa, Italy;KDD Lab-ISTI-CNR, Pisa, Italy",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/vast.2007.4388999",
                "AuthorKeywords": "Spatio-temporal data, movement data, trajectories, clustering, classification, scalable visualization, geovisualization",
                "AminerCitationCount": 291,
                "CitationCountCrossRef": 138,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 1578,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1852,
                "i": [
                    1852
                ]
            }
        },
        {
            "name": "Gustavo Moreira",
            "value": 0,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 172,
            "x": -42.03216359444202,
            "y": -124.43189793445273,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics",
                "DOI": "10.1109/tvcg.2023.3326598",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326598",
                "FirstPage": 1402,
                "LastPage": 1412,
                "PaperType": "J",
                "Abstract": "While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.",
                "AuthorNamesDeduped": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001",
                "AuthorNames": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda",
                "AuthorAffiliation": "University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA",
                "InternalReferences": "10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Urban visual analytics,Urban analytics,Urban data,Visualization toolkit",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 17,
                "i": [
                    17
                ]
            }
        },
        {
            "name": "Maryam Hosseini",
            "value": 0,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 173,
            "x": 115.37873271405797,
            "y": 63.54327688511167,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics",
                "DOI": "10.1109/tvcg.2023.3326598",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326598",
                "FirstPage": 1402,
                "LastPage": 1412,
                "PaperType": "J",
                "Abstract": "While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.",
                "AuthorNamesDeduped": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001",
                "AuthorNames": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda",
                "AuthorAffiliation": "University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA",
                "InternalReferences": "10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Urban visual analytics,Urban analytics,Urban data,Visualization toolkit",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 17,
                "i": [
                    17
                ]
            }
        },
        {
            "name": "Md Nafiul Alam Nipu",
            "value": 0,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 174,
            "x": -128.36789829725404,
            "y": 31.17182520716166,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics",
                "DOI": "10.1109/tvcg.2023.3326598",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326598",
                "FirstPage": 1402,
                "LastPage": 1412,
                "PaperType": "J",
                "Abstract": "While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.",
                "AuthorNamesDeduped": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001",
                "AuthorNames": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda",
                "AuthorAffiliation": "University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA",
                "InternalReferences": "10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Urban visual analytics,Urban analytics,Urban data,Visualization toolkit",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 17,
                "i": [
                    17
                ]
            }
        },
        {
            "name": "Marcos Lage",
            "value": 68,
            "numPapers": 68,
            "cluster": "3",
            "visible": 1,
            "index": 175,
            "x": 73.80880967868885,
            "y": -110.0102704924185,
            "vy": 0,
            "vx": 0,
            "r": 1.0782959124928038,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics",
                "DOI": "10.1109/tvcg.2023.3326598",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326598",
                "FirstPage": 1402,
                "LastPage": 1412,
                "PaperType": "J",
                "Abstract": "While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.",
                "AuthorNamesDeduped": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001",
                "AuthorNames": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda",
                "AuthorAffiliation": "University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA",
                "InternalReferences": "10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Urban visual analytics,Urban analytics,Urban data,Visualization toolkit",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 17,
                "i": [
                    17
                ]
            }
        },
        {
            "name": "Fabio Miranda 0001",
            "value": 30,
            "numPapers": 61,
            "cluster": "3",
            "visible": 1,
            "index": 176,
            "x": 19.94312716290488,
            "y": 131.34790321495126,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics",
                "DOI": "10.1109/tvcg.2023.3326598",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326598",
                "FirstPage": 1402,
                "LastPage": 1412,
                "PaperType": "J",
                "Abstract": "While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.",
                "AuthorNamesDeduped": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda 0001",
                "AuthorNames": "Gustavo Moreira;Maryam Hosseini;Md Nafiul Alam Nipu;Marcos Lage;Nivan Ferreira;Fabio Miranda",
                "AuthorAffiliation": "University of Illinois Chicago, USA;Massachusetts Institute of Technology, USA;University of Illinois Chicago, USA;Universidade Federal Fluminense, Brazil;Universidade Federal de Pernambuco, Brazil;University of Illinois Chicago, USA",
                "InternalReferences": "10.1109/vast.2009.5332584;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70574;10.1109/tvcg.2015.2467619;10.1109/tvcg.2006.144;10.1109/tvcg.2019.2934670;10.1109/vast.2015.7347636;10.1109/tvcg.2013.226;10.1109/tvcg.2015.2467449;10.1109/tvcg.2021.3114876;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598585;10.1109/tvcg.2022.3209474;10.1109/tvcg.2014.2346318;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864841;10.1109/tvcg.2018.2865152;10.1109/tvcg.2010.180;10.1109/tvcg.2010.177;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Urban visual analytics,Urban analytics,Urban data,Visualization toolkit",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 17,
                "i": [
                    17
                ]
            }
        },
        {
            "name": "Remco Chang",
            "value": 812,
            "numPapers": 171,
            "cluster": "5",
            "visible": 1,
            "index": 177,
            "x": -103.72226316190095,
            "y": -83.6163388613336,
            "vy": 0,
            "vx": 0,
            "r": 1.934945308002303,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Legible Cities: Focus-Dependent Multi-Resolution Visualization of Urban Relationships",
                "DOI": "10.1109/tvcg.2007.70574",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70574",
                "FirstPage": 1169,
                "LastPage": 1175,
                "PaperType": "J",
                "Abstract": "Numerous systems have been developed to display large collections of data for urban contexts; however, most have focused on layering of single dimensions of data and manual calculations to understand relationships within the urban environment. Furthermore, these systems often limit the user's perspectives on the data, thereby diminishing the user's spatial understanding of the viewing region. In this paper, we introduce a highly interactive urban visualization tool that provides intuitive understanding of the urban data. Our system utilizes an aggregation method that combines buildings and city blocks into legible clusters, thus providing continuous levels of abstraction while preserving the user's mental model of the city. In conjunction with a 3D view of the urban model, a separate but integrated information visualization view displays multiple disparate dimensions of the urban data, allowing the user to understand the urban environment both spatially and cognitively in one glance. For our evaluation, expert users from various backgrounds viewed a real city model with census data and confirmed that our system allowed them to gain more intuitive and deeper understanding of the urban model from different perspectives and levels of abstraction than existing commercial urban visualization systems.",
                "AuthorNamesDeduped": "Remco Chang;Ginette Wessel;Robert Kosara;Eric Sauda;William Ribarsky",
                "AuthorNames": "Remco Chang;Ginette Wessel;Robert Kosara;Eric Sauda;William Ribarsky",
                "AuthorAffiliation": "Department of Computer Science, UNC-Charlotte, USA;UNC Charlotte College of Architecture, USA;Department of Computer Science, UNC-Charlotte, USA;UNC Charlotte College of Architecture, USA;Department of Computer Science, UNC-Charlotte, USA",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/visual.1990.146402;10.1109/infvis.2005.1532149",
                "AuthorKeywords": "Urban models, information visualization, multi-resolution",
                "AminerCitationCount": 91,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 718,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2106,
                "i": [
                    2106
                ]
            }
        },
        {
            "name": "Xiaoru Yuan",
            "value": 676,
            "numPapers": 330,
            "cluster": "3",
            "visible": 1,
            "index": 178,
            "x": 133.33761344975866,
            "y": -8.43094535166321,
            "vy": 0,
            "vx": 0,
            "r": 1.7783534830166956,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Zuchao Wang",
            "value": 296,
            "numPapers": 54,
            "cluster": "3",
            "visible": 1,
            "index": 179,
            "x": -92.88307325371532,
            "y": 96.55430960317074,
            "vy": 0,
            "vx": 0,
            "r": 1.3408175014392631,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Jiawan Zhang",
            "value": 138,
            "numPapers": 77,
            "cluster": "1",
            "visible": 1,
            "index": 180,
            "x": 3.2766777877770235,
            "y": -134.31032492952687,
            "vy": 0,
            "vx": 0,
            "r": 1.1588946459412781,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Harish Doraiswamy",
            "value": 138,
            "numPapers": 66,
            "cluster": "11",
            "visible": 1,
            "index": 181,
            "x": 88.55348663056576,
            "y": 101.52970011563224,
            "vy": 0,
            "vx": 0,
            "r": 1.1588946459412781,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Urbane: A 3D framework to support data driven decision making in urban development",
                "DOI": "10.1109/vast.2015.7347636",
                "Link": "http://dx.doi.org/10.1109/VAST.2015.7347636",
                "FirstPage": 97,
                "LastPage": 104,
                "PaperType": "C",
                "Abstract": "Architects working with developers and city planners typically rely on experience, precedent and data analyzed in isolation when making decisions that impact the character of a city. These decisions are critical in enabling vibrant, sustainable environments but must also negotiate a range of complex political and social forces. This requires those shaping the built environment to balance maximizing the value of a new development with its impact on the character of a neighborhood. As a result architects are focused on two issues throughout the decision making process: a) what defines the character of a neighborhood? and b) how will a new development change its neighborhood? In the first, character can be influenced by a variety of factors and understanding the interplay between diverse data sets is crucial; including safety, transportation access, school quality and access to entertainment. In the second, the impact of a new development is measured, for example, by how it impacts the view from the buildings that surround it. In this paper, we work in collaboration with architects to design Urbane, a 3-dimensional multi-resolution framework that enables a data-driven approach for decision making in the design of new urban development. This is accomplished by integrating multiple data layers and impact analysis techniques facilitating architects to explore and assess the effect of these attributes on the character and value of a neighborhood. Several of these data layers, as well as impact analysis, involve working in 3-dimensions and operating in real time. Efficient computation and visualization is accomplished through the use of techniques from computer graphics. We demonstrate the effectiveness of Urbane through a case study of development in Manhattan depicting how a data-driven understanding of the value and impact of speculative buildings can benefit the design-development process between architects, planners and developers.",
                "AuthorNamesDeduped": "Nivan Ferreira;Marcos Lage;Harish Doraiswamy;Huy T. Vo;Luc Wilson;Heidi Werner;Muchan Park;Cláudio T. Silva",
                "AuthorNames": "Nivan Ferreira;Marcos Lage;Harish Doraiswamy;Huy Vo;Luc Wilson;Heidi Werner;Muchan Park;Cláudio Silva",
                "AuthorAffiliation": "New York University, New York, NY, US;Universidade Federal Fluminense, Niteroi, Rio de Janeiro, BR;New York University, New York, NY, US;New York University, New York, NY, US;Kohn Pedersen Fox Associates PC;Kohn Pedersen Fox Associates PC;Kohn Pedersen Fox Associates PC;New York University, New York, NY, US",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2014.2346446;10.1109/tvcg.2007.70574;10.1109/tvcg.2013.226;10.1109/tvcg.2007.70523;10.1109/tvcg.2013.228;10.1109/tvcg.2014.2346893;10.1109/tvcg.2014.2346898",
                "AuthorKeywords": null,
                "AminerCitationCount": 85,
                "CitationCountCrossRef": 30,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1214,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1122,
                "i": [
                    1122
                ]
            }
        },
        {
            "name": "Huy T. Vo",
            "value": 421,
            "numPapers": 27,
            "cluster": "3",
            "visible": 1,
            "index": 182,
            "x": -134.247217912671,
            "y": -15.089217431922178,
            "vy": 0,
            "vx": 0,
            "r": 1.4847438111686817,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ARGUS: Visualization of AI-Assisted Task Guidance in AR",
                "DOI": "10.1109/tvcg.2023.3327396",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327396",
                "FirstPage": 1313,
                "LastPage": 1323,
                "PaperType": "J",
                "Abstract": "The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant.",
                "AuthorNamesDeduped": "Sonia Castelo;João Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Irán R. Román;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He 0001;Qi Sun 0003;Huy T. Vo;Juan Pablo Bello;Michael Krone;Cláudio T. Silva",
                "AuthorNames": "Sonia Castelo;Joao Rulff;Erin McGowan;Bea Steers;Guande Wu;Shaoyu Chen;Iran Roman;Roque Lopez;Ethan Brewer;Chen Zhao;Jing Qian;Kyunghyun Cho;He He;Qi Sun;Huy Vo;Juan Bello;Michael Krone;Claudio Silva",
                "AuthorAffiliation": "New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York;New York University, New York",
                "InternalReferences": "10.1109/tvcg.2017.2746018;10.1109/tvcg.2018.2865152;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Data Models,Image and Video Data,Temporal Data,Application Motivated Visualization,AR/VR/Immersive",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 467,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 5,
                "i": [
                    5
                ]
            }
        },
        {
            "name": "Luc Wilson",
            "value": 62,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 183,
            "x": 109.48106143244704,
            "y": -79.7740383058596,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Urbane: A 3D framework to support data driven decision making in urban development",
                "DOI": "10.1109/vast.2015.7347636",
                "Link": "http://dx.doi.org/10.1109/VAST.2015.7347636",
                "FirstPage": 97,
                "LastPage": 104,
                "PaperType": "C",
                "Abstract": "Architects working with developers and city planners typically rely on experience, precedent and data analyzed in isolation when making decisions that impact the character of a city. These decisions are critical in enabling vibrant, sustainable environments but must also negotiate a range of complex political and social forces. This requires those shaping the built environment to balance maximizing the value of a new development with its impact on the character of a neighborhood. As a result architects are focused on two issues throughout the decision making process: a) what defines the character of a neighborhood? and b) how will a new development change its neighborhood? In the first, character can be influenced by a variety of factors and understanding the interplay between diverse data sets is crucial; including safety, transportation access, school quality and access to entertainment. In the second, the impact of a new development is measured, for example, by how it impacts the view from the buildings that surround it. In this paper, we work in collaboration with architects to design Urbane, a 3-dimensional multi-resolution framework that enables a data-driven approach for decision making in the design of new urban development. This is accomplished by integrating multiple data layers and impact analysis techniques facilitating architects to explore and assess the effect of these attributes on the character and value of a neighborhood. Several of these data layers, as well as impact analysis, involve working in 3-dimensions and operating in real time. Efficient computation and visualization is accomplished through the use of techniques from computer graphics. We demonstrate the effectiveness of Urbane through a case study of development in Manhattan depicting how a data-driven understanding of the value and impact of speculative buildings can benefit the design-development process between architects, planners and developers.",
                "AuthorNamesDeduped": "Nivan Ferreira;Marcos Lage;Harish Doraiswamy;Huy T. Vo;Luc Wilson;Heidi Werner;Muchan Park;Cláudio T. Silva",
                "AuthorNames": "Nivan Ferreira;Marcos Lage;Harish Doraiswamy;Huy Vo;Luc Wilson;Heidi Werner;Muchan Park;Cláudio Silva",
                "AuthorAffiliation": "New York University, New York, NY, US;Universidade Federal Fluminense, Niteroi, Rio de Janeiro, BR;New York University, New York, NY, US;New York University, New York, NY, US;Kohn Pedersen Fox Associates PC;Kohn Pedersen Fox Associates PC;Kohn Pedersen Fox Associates PC;New York University, New York, NY, US",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2014.2346446;10.1109/tvcg.2007.70574;10.1109/tvcg.2013.226;10.1109/tvcg.2007.70523;10.1109/tvcg.2013.228;10.1109/tvcg.2014.2346893;10.1109/tvcg.2014.2346898",
                "AuthorKeywords": null,
                "AminerCitationCount": 85,
                "CitationCountCrossRef": 30,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1214,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1122,
                "i": [
                    1122
                ]
            }
        },
        {
            "name": "Jorge Poco",
            "value": 320,
            "numPapers": 32,
            "cluster": "3",
            "visible": 1,
            "index": 184,
            "x": -26.914376621306783,
            "y": 133.13758421680353,
            "vy": 0,
            "vx": 0,
            "r": 1.3684513529073115,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Extracting and Retargeting Color Mappings from Bitmap Images of Visualizations",
                "DOI": "10.1109/tvcg.2017.2744320",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744320",
                "FirstPage": 637,
                "LastPage": 646,
                "PaperType": "J",
                "Abstract": "Visualization designers regularly use color to encode quantitative or categorical data. However, visualizations “in the wild” often violate perceptual color design principles and may only be available as bitmap images. In this work, we contribute a method to semi-automatically extract color encodings from a bitmap visualization image. Given an image and a legend location, we classify the legend as describing either a discrete or continuous color encoding, identify the colors used, and extract legend text using OCR methods. We then combine this information to recover the specific color mapping. Users can also correct interpretation errors using an annotation interface. We evaluate our techniques using a corpus of images extracted from scientific papers and demonstrate accurate automatic inference of color mappings across a variety of chart types. In addition, we present two applications of our method: automatic recoloring to improve perceptual effectiveness, and interactive overlays to enable improved reading of static visualizations.",
                "AuthorNamesDeduped": "Jorge Poco;Angela Mayhua;Jeffrey Heer",
                "AuthorNames": "Jorge Poco;Angela Mayhua;Jeffrey Heer",
                "AuthorAffiliation": "Universidad Católica San Pablo and University of Washington;Universidad Católica San Pablo;University of Washington",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598918;10.1109/tvcg.2012.229",
                "AuthorKeywords": "Visualization,color,chart understanding,information extraction,redesign,computer vision",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1512,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 783,
                "i": [
                    783
                ]
            }
        },
        {
            "name": "Juliana Freire",
            "value": 475,
            "numPapers": 55,
            "cluster": "3",
            "visible": 1,
            "index": 185,
            "x": -70.27700411494467,
            "y": -116.66680201594644,
            "vy": 0,
            "vx": 0,
            "r": 1.5469199769717905,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Visual Exploration of Big Spatio-Temporal Urban Data: A Study of New York City Taxi Trips",
                "DOI": "10.1109/tvcg.2013.226",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.226",
                "FirstPage": 2149,
                "LastPage": 2158,
                "PaperType": "J",
                "Abstract": "As increasing volumes of urban data are captured and become available, new opportunities arise for data-driven analysis that can lead to improvements in the lives of citizens through evidence-based decision making and policies. In this paper, we focus on a particularly important urban data set: taxi trips. Taxis are valuable sensors and information associated with taxi trips can provide unprecedented insight into many different aspects of city life, from economic activity and human behavior to mobility patterns. But analyzing these data presents many challenges. The data are complex, containing geographical and temporal components in addition to multiple variables associated with each trip. Consequently, it is hard to specify exploratory queries and to perform comparative analyses (e.g., compare different regions over time). This problem is compounded due to the size of the data-there are on average 500,000 taxi trips each day in NYC. We propose a new model that allows users to visually query taxi trips. Besides standard analytics queries, the model supports origin-destination queries that enable the study of mobility across the city. We show that this model is able to express a wide range of spatio-temporal queries, and it is also flexible in that not only can queries be composed but also different aggregations and visual representations can be applied, allowing users to explore and compare results. We have built a scalable system that implements this model which supports interactive response times; makes use of an adaptive level-of-detail rendering strategy to generate clutter-free visualization for large results; and shows hidden details to the users in a summary through the use of overlay heat maps. We present a series of case studies motivated by traffic engineers and economists that show how our model and system enable domain experts to perform tasks that were previously unattainable for them.",
                "AuthorNamesDeduped": "Nivan Ferreira;Jorge Poco;Huy T. Vo;Juliana Freire;Cláudio T. Silva",
                "AuthorNames": "Nivan Ferreira;Jorge Poco;Huy T. Vo;Juliana Freire;Cláudio T. Silva",
                "AuthorAffiliation": "Polytechnic Institute of New York University, USA;Polytechnic Institute of New York University, USA;CUSP, New York University, USA;Polytechnic Institute of New York University, USA;Polytechnic Institute of New York University, USA",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/vast.2008.4677356;10.1109/vast.2011.6102454;10.1109/tvcg.2007.70535;10.1109/vast.2010.5652467;10.1109/infvis.2005.1532150;10.1109/vast.2008.4677370;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Spatio-temporal queries, urban data, taxi movement data, visual exploration",
                "AminerCitationCount": 600,
                "CitationCountCrossRef": 373,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 9511,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1365,
                "i": [
                    1365
                ]
            }
        },
        {
            "name": "M. Eduard Gröller",
            "value": 1387,
            "numPapers": 283,
            "cluster": "6",
            "visible": 1,
            "index": 186,
            "x": 130.97899018779077,
            "y": 38.65881696827539,
            "vy": 0,
            "vx": 0,
            "r": 2.597006332757628,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "ViSlang: A System for Interpreted Domain-Specific Languages for Scientific Visualization",
                "DOI": "10.1109/tvcg.2014.2346318",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346318",
                "FirstPage": 2388,
                "LastPage": 2396,
                "PaperType": "J",
                "Abstract": "Researchers from many domains use scientific visualization in their daily practice. Existing implementations of algorithms usually come with a graphical user interface (high-level interface), or as software library or source code (low-level interface). In this paper we present a system that integrates domain-specific languages (DSLs) and facilitates the creation of new DSLs. DSLs provide an effective interface for domain scientists avoiding the difficulties involved with low-level interfaces and at the same time offering more flexibility than high-level interfaces. We describe the design and implementation of ViSlang, an interpreted language specifically tailored for scientific visualization. A major contribution of our design is the extensibility of the ViSlang language. Novel DSLs that are tailored to the problems of the domain can be created and integrated into ViSlang. We show that our approach can be added to existing user interfaces to increase the flexibility for expert users on demand, but at the same time does not interfere with the user experience of novice users. To demonstrate the flexibility of our approach we present new DSLs for volume processing, querying and visualization. We report the implementation effort for new DSLs and compare our approach with Matlab and Python implementations in terms of run-time performance.",
                "AuthorNamesDeduped": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorNames": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorAffiliation": "KAUST;University of Bergen;Vienna University of Technology, VrVis Research Center;KAUST",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.1992.235219;10.1109/tvcg.2009.174;10.1109/tvcg.2014.2346322;10.1109/visual.2004.95;10.1109/tvcg.2011.185;10.1109/visual.2005.1532788;10.1109/visual.1992.235202;10.1109/tvcg.2008.184",
                "AuthorKeywords": "Domain-specific languages, Volume visualization, Volume visualization framework",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 767,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1220,
                "i": [
                    1220
                ]
            }
        },
        {
            "name": "Niklas Elmqvist",
            "value": 915,
            "numPapers": 270,
            "cluster": "5",
            "visible": 1,
            "index": 187,
            "x": -123.02198390187536,
            "y": 60.12978859805446,
            "vy": 0,
            "vx": 0,
            "r": 2.0535405872193437,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Information Olfactation: Harnessing Scent to Convey Data",
                "DOI": "10.1109/tvcg.2018.2865237",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865237",
                "FirstPage": 726,
                "LastPage": 736,
                "PaperType": "J",
                "Abstract": "Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and their olfactory channels that are available to designers. To exemplify this idea, we present viScent: A six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. Finally, we present three applications that make use of the viScent system: A 2D graph visualization, a 2D line and point chart, and an immersive analytics graph visualization in 3D virtual reality. We close the paper with a review of possible extensions of viScent and applications of information olfactation for general visualization beyond the examples in this paper.",
                "AuthorNamesDeduped": "Biswaksen Patnaik;Andrea Batch;Niklas Elmqvist",
                "AuthorNames": "Biswaksen Patnaik;Andrea Batch;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland at College Park, College Park, MD, US;University of Maryland at College Park, College Park, MD, US;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/tvcg.2016.2599107",
                "AuthorKeywords": "Olfaction,smell,scent,olfactory display,immersive analytics,immersion",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 104,
                "DownloadsXplore": 1412,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 666,
                "i": [
                    666
                ]
            }
        },
        {
            "name": "Kanit Wongsuphasawat",
            "value": 672,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 188,
            "x": 50.22890399388278,
            "y": -127.77737359788432,
            "vy": 0,
            "vx": 0,
            "r": 1.773747841105354,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Vega-Lite: A Grammar of Interactive Graphics",
                "DOI": "10.1109/tvcg.2016.2599030",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599030",
                "FirstPage": 341,
                "LastPage": 350,
                "PaperType": "J",
                "Abstract": "We present Vega-Lite, a high-level grammar that enables rapid specification of interactive data visualizations. Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction. Users specify interactive semantics by composing selections. In Vega-Lite, a selection is an abstraction that defines input event processing, points of interest, and a predicate function for inclusion testing. Selections parameterize visual encodings by serving as input data, defining scale extents, or by driving conditional logic. The Vega-Lite compiler automatically synthesizes requisite data flow and event handling logic, which users can override for further customization. In contrast to existing reactive specifications, Vega-Lite selections decompose an interaction design into concise, enumerable semantic units. We evaluate Vega-Lite through a range of examples, demonstrating succinct specification of both customized interaction methods and common techniques such as panning, zooming, and linked selection.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Dominik Moritz;Kanit Wongsuphasawat;Jeffrey Heer",
                "AuthorNames": "Arvind Satyanarayan;Dominik Moritz;Kanit Wongsuphasawat;Jeffrey Heer",
                "AuthorAffiliation": "Stanford University;University of Washington;University of Washington;University of Washington",
                "InternalReferences": "0.1109/tvcg.2015.2467091;10.1109/tvcg.2009.174;10.1109/tvcg.2015.2467191;10.1109/tvcg.2014.2346260;10.1109/infvis.2000.885086;10.1109/tvcg.2007.70515;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Information visualization;interaction;systems;toolkits;declarative specification",
                "AminerCitationCount": 641,
                "CitationCountCrossRef": 449,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 6043,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 888,
                "i": [
                    888
                ]
            }
        },
        {
            "name": "Han-Wei Shen",
            "value": 1050,
            "numPapers": 301,
            "cluster": "6",
            "visible": 1,
            "index": 189,
            "x": 49.40567552092713,
            "y": 128.48766176688275,
            "vy": 0,
            "vx": 0,
            "r": 2.208981001727116,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Adaptively Placed Multi-Grid Scene Representation Networks for Large-Scale Data Visualization",
                "DOI": "10.1109/tvcg.2023.3327194",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327194",
                "FirstPage": 965,
                "LastPage": 974,
                "PaperType": "J",
                "Abstract": "Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring expensive octree refining, pruning, and traversal like previous adaptive models. In our domain decomposition approach for representing large-scale data, we train an set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding overhead necessary for an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for realtime neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.",
                "AuthorNamesDeduped": "Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo 0001;Tom Peterka",
                "AuthorNames": "Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo;Tom Peterka",
                "AuthorAffiliation": "The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA",
                "InternalReferences": "10.1109/tvcg.2012.274",
                "AuthorKeywords": "Scene representation network,deep learning,scientific visualization,volume rendering",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 20,
                "i": [
                    20
                ]
            }
        },
        {
            "name": "Rüdiger Westermann",
            "value": 722,
            "numPapers": 180,
            "cluster": "6",
            "visible": 1,
            "index": 190,
            "x": -123.54707319125762,
            "y": -61.53146110628313,
            "vy": 0,
            "vx": 0,
            "r": 1.8313183649971214,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Turbulence Visualization at the Terascale on Desktop PCs",
                "DOI": "10.1109/tvcg.2012.274",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.274",
                "FirstPage": 2169,
                "LastPage": 2177,
                "PaperType": "J",
                "Abstract": "Despite the ongoing efforts in turbulence research, the universal properties of the turbulence small-scale structure and the relationships between small- and large-scale turbulent motions are not yet fully understood. The visually guided exploration of turbulence features, including the interactive selection and simultaneous visualization of multiple features, can further progress our understanding of turbulence. Accomplishing this task for flow fields in which the full turbulence spectrum is well resolved is challenging on desktop computers. This is due to the extreme resolution of such fields, requiring memory and bandwidth capacities going beyond what is currently available. To overcome these limitations, we present a GPU system for feature-based turbulence visualization that works on a compressed flow field representation. We use a wavelet-based compression scheme including run-length and entropy encoding, which can be decoded on the GPU and embedded into brick-based volume ray-casting. This enables a drastic reduction of the data to be streamed from disk to GPU memory. Our system derives turbulence properties directly from the velocity gradient tensor, and it either renders these properties in turn or generates and renders scalar feature volumes. The quality and efficiency of the system is demonstrated in the visualization of two unsteady turbulence simulations, each comprising a spatio-temporal resolution of 10244. On a desktop computer, the system can visualize each time step in 5 seconds, and it achieves about three times this rate for the visualization of a scalar feature volume.",
                "AuthorNamesDeduped": "Marc Treib;Kai Bürger;Florian Reichl;Charles Meneveau;Alexander S. Szalay;Rüdiger Westermann",
                "AuthorNames": "Marc Treib;Kai Bürger;Florian Reichl;Charles Meneveau;Alex Szalay;Rüdiger Westermann",
                "AuthorAffiliation": "Technische Universität München, Munich, Germany;Technische Universität München, Munich, Germany;Technische Universität München, Munich, Germany;Johns Hopkins University, Baltimore, MD, USA;Johns Hopkins University, Baltimore, MD, USA;Technische Universität München, Munich, Germany",
                "InternalReferences": "0.1109/visual.2002.1183757;10.1109/visual.2001.964520;10.1109/tvcg.2006.143;10.1109/visual.2005.1532808;10.1109/visual.2003.1250384;10.1109/visual.2001.964531;10.1109/visual.2004.55;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Visualization system and toolkit design, vector fields, volume rendering, data streaming, data compression",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 689,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1455,
                "i": [
                    1455
                ]
            }
        },
        {
            "name": "Frank van Ham",
            "value": 670,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 191,
            "x": 133.01141437270286,
            "y": -38.18329014861265,
            "vy": 0,
            "vx": 0,
            "r": 1.7714450201496834,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "ASK-graphView: a large scale graph visualization system",
                "DOI": "10.1109/tvcg.2006.120",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.120",
                "FirstPage": 669,
                "LastPage": 676,
                "PaperType": "J",
                "Abstract": "We describe ASK-GraphView, a node-link-based graph visualization system that allows clustering and interactive navigation of large graphs, ranging in size up to 16 million edges. The system uses a scalable architecture and a series of increasingly sophisticated clustering algorithms to construct a hierarchy on an arbitrary, weighted undirected input graph. By lowering the interactivity requirements we can scale to substantially bigger graphs. The user is allowed to navigate this hierarchy in a top down manner by interactively expanding individual clusters. ASK-GraphView also provides facilities for filtering and coloring, annotation and cluster labeling",
                "AuthorNamesDeduped": "James Abello;Frank van Ham;Neeraj Krishnan",
                "AuthorNames": "James Abello;Frank Van Ham;Neeraj Krishnan",
                "AuthorAffiliation": "Ask.com and DIMACS, Rutgers University, USA;IBM;Ask.com",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/infvis.2005.1532127;10.1109/infvis.2004.66;10.1109/infvis.1997.636718;10.1109/infvis.2004.43",
                "AuthorKeywords": "Information visualization, graph visualization, graph clustering",
                "AminerCitationCount": 399,
                "CitationCountCrossRef": 178,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 2884,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2218,
                "i": [
                    2218
                ]
            }
        },
        {
            "name": "Pierre Dragicevic",
            "value": 783,
            "numPapers": 112,
            "cluster": "5",
            "visible": 1,
            "index": 192,
            "x": -72.47452612341579,
            "y": 118.3107901384583,
            "vy": 0,
            "vx": 0,
            "r": 1.9015544041450778,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Embedded Data Representations",
                "DOI": "10.1109/tvcg.2016.2598608",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598608",
                "FirstPage": 461,
                "LastPage": 470,
                "PaperType": "J",
                "Abstract": "We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easier to display data in-context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the tradeoffs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.",
                "AuthorNamesDeduped": "Wesley Willett;Yvonne Jansen;Pierre Dragicevic",
                "AuthorNames": "Wesley Willett;Yvonne Jansen;Pierre Dragicevic",
                "AuthorAffiliation": "University of Calgary;University of Copenhagen;Inria",
                "InternalReferences": "0.1109/tvcg.2013.134;10.1109/infvis.1998.729560",
                "AuthorKeywords": "augmented reality;Information visualization;data physicalization;ambient displays;ubiquitous computing",
                "AminerCitationCount": 192,
                "CitationCountCrossRef": 160,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 3740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 889,
                "i": [
                    889
                ]
            }
        },
        {
            "name": "Ben Shneiderman",
            "value": 1059,
            "numPapers": 32,
            "cluster": "1",
            "visible": 1,
            "index": 193,
            "x": -26.546013631462888,
            "y": -136.54782737296918,
            "vy": 0,
            "vx": 0,
            "r": 2.219343696027634,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "Network Visualization by Semantic Substrates",
                "DOI": "10.1109/tvcg.2006.166",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.166",
                "FirstPage": 733,
                "LastPage": 740,
                "PaperType": "J",
                "Abstract": "Networks have remained a challenge for information visualization designers because of the complex issues of node and link layout coupled with the rich set of tasks that users present. This paper offers a strategy based on two principles: (1) layouts are based on user-defined semantic substrates, which are non-overlapping regions in which node placement is based on node attributes, (2) users interactively adjust sliders to control link visibility to limit clutter and thus ensure comprehensibility of source and destination. Scalability is further facilitated by user control of which nodes are visible. We illustrate our semantic substrates approach as implemented in NVSS 1.0 with legal precedent data for up to 1122 court cases in three regions with 7645 legal citations",
                "AuthorNamesDeduped": "Ben Shneiderman;Aleks Aris",
                "AuthorNames": "Ben Shneiderman;Aleks Aris",
                "AuthorAffiliation": "Computer Science Department and the Human-Computer Interaction Laboratory, University of Maryland, College Park, USA;Computer Science Department and the Human-Computer Interaction Laboratory, University of Maryland, College Park, USA",
                "InternalReferences": "0.1109/infvis.2004.1;10.1109/infvis.2005.1532124;10.1109/infvis.2005.1532126",
                "AuthorKeywords": "Network visualization, semantic substrate, information visualization, graphical user interfaces",
                "AminerCitationCount": 491,
                "CitationCountCrossRef": 192,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 2317,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2217,
                "i": [
                    2217
                ]
            }
        },
        {
            "name": "George G. Robertson",
            "value": 478,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 194,
            "x": 112.09948121356452,
            "y": 82.96810418256942,
            "vy": 0,
            "vx": 0,
            "r": 1.5503742084052965,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Animated Transitions in Statistical Data Graphics",
                "DOI": "10.1109/tvcg.2007.70539",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70539",
                "FirstPage": 1240,
                "LastPage": 1247,
                "PaperType": "J",
                "Abstract": "In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots. We extend theoretical models of data graphics to include such transitions, introducing a taxonomy of transition types. We then propose design principles for creating effective transitions and illustrate the application of these principles in &lt;i&gt;DynaVis&lt;/i&gt;, a visualization system featuring animated data graphics. Two controlled experiments were conducted to assess the efficacy of various transition types, finding that animated transitions can significantly improve graphical perception.",
                "AuthorNamesDeduped": "Jeffrey Heer;George G. Robertson",
                "AuthorNames": "Jeffrey Heer;George Robertson",
                "AuthorAffiliation": "Computer Science Division, University of California, Berkeley, USA;Microsoft Research Limited, USA",
                "InternalReferences": "0.1109/infvis.1999.801854;10.1109/infvis.2001.963279;10.1109/infvis.2002.1173148",
                "AuthorKeywords": "Statistical data graphics, animation, transitions, information visualization, design, experiment",
                "AminerCitationCount": 595,
                "CitationCountCrossRef": 297,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 2863,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2089,
                "i": [
                    2089
                ]
            }
        },
        {
            "name": "Adam Perer",
            "value": 719,
            "numPapers": 79,
            "cluster": "1",
            "visible": 1,
            "index": 195,
            "x": -139.0589231856688,
            "y": 14.581353930354735,
            "vy": 0,
            "vx": 0,
            "r": 1.8278641335636154,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models",
                "DOI": "10.1109/tvcg.2018.2865044",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865044",
                "FirstPage": 353,
                "LastPage": 363,
                "PaperType": "J",
                "Abstract": "Neural sequence-to-sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work with a five-stage blackbox pipeline that begins with encoding a source sequence to a vector space and then decoding out to a new target sequence. This process is now standard, but like many deep learning methods remains quite difficult to understand or debug. In this work, we present a visual analysis tool that allows interaction and “what if”-style exploration of trained sequence-to-sequence models through each stage of the translation process. The aim is to identify which patterns have been learned, to detect model errors, and to probe the model with counterfactual scenario. We demonstrate the utility of our tool through several real-world sequence-to-sequence use cases on large-scale models.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Sebastian Gehrmann;Michael Behrisch 0001;Adam Perer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Sebastian Gehrmann;Michael Behrisch;Adam Perer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Reseatch, MIT-IBM Watson AI Lab.;Harvard NLP group;Hatvatd Visual Computing group;IBM Reseatch, MIT-IBM Watson AI Lab.;Hatvatd Visual Computing group;Harvard NLP group",
                "InternalReferences": "0.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744478;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Explainable AI,Visual Debugging,Visual Analytics,Machine Learning,Deep Learning,NLP",
                "AminerCitationCount": 180,
                "CitationCountCrossRef": 108,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2314,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 730,
                "i": [
                    730
                ]
            }
        },
        {
            "name": "Marian Dörk",
            "value": 326,
            "numPapers": 40,
            "cluster": "1",
            "visible": 1,
            "index": 196,
            "x": 92.92491086900829,
            "y": -104.95218406487247,
            "vy": 0,
            "vx": 0,
            "r": 1.3753598157743236,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "PivotPaths: Strolling through Faceted Information Spaces",
                "DOI": "10.1109/tvcg.2012.252",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.252",
                "FirstPage": 2709,
                "LastPage": 2718,
                "PaperType": "J",
                "Abstract": "We present PivotPaths, an interactive visualization for exploring faceted information resources. During both work and leisure, we increasingly interact with information spaces that contain multiple facets and relations, such as authors, keywords, and citations of academic publications, or actors and genres of movies. To navigate these interlinked resources today, one typically selects items from facet lists resulting in abrupt changes from one subset of data to another. While filtering is useful to retrieve results matching specific criteria, it can be difficult to see how facets and items relate and to comprehend the effect of filter operations. In contrast, the PivotPaths interface exposes faceted relations as visual paths in arrangements that invite the viewer to `take a stroll' through an information space. PivotPaths supports pivot operations as lightweight interaction techniques that trigger gradual transitions between views. We designed the interface to allow for casual traversal of large collections in an aesthetically pleasing manner that encourages exploration and serendipitous discoveries. This paper shares the findings from our iterative design-and-evaluation process that included semi-structured interviews and a two-week deployment of PivotPaths applied to a large database of academic publications.",
                "AuthorNamesDeduped": "Marian Dörk;Nathalie Henry Riche;Gonzalo A. Ramos;Susan T. Dumais",
                "AuthorNames": "Marian Dörk;Nathalie Henry Riche;Gonzalo Ramos;Susan Dumais",
                "AuthorAffiliation": "University of Calgary, Canada;Microsoft, USA;Microsoft, USA;Microsoft, USA",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/tvcg.2010.154;10.1109/vast.2006.261426;10.1109/vast.2007.4389006;10.1109/vast.2008.4677370;10.1109/tvcg.2007.70539;10.1109/tvcg.2008.175;10.1109/tvcg.2009.108",
                "AuthorKeywords": "Information visualization, interactivity, node-link diagrams, animation, information seeking, exploratory search",
                "AminerCitationCount": 217,
                "CitationCountCrossRef": 126,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 1541,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1402,
                "i": [
                    1402
                ]
            }
        },
        {
            "name": "Anushka Anand",
            "value": 559,
            "numPapers": 32,
            "cluster": "5",
            "visible": 1,
            "index": 197,
            "x": 2.3802780920942648,
            "y": 140.51453403902494,
            "vy": 0,
            "vx": 0,
            "r": 1.6436384571099598,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations",
                "DOI": "10.1109/tvcg.2015.2467191",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467191",
                "FirstPage": 649,
                "LastPage": 658,
                "PaperType": "J",
                "Abstract": "General visualization tools typically require manual specification of views: analysts must select data variables and then choose which transformations and visual encodings to apply. These decisions often involve both domain and visualization design expertise, and may impose a tedious specification process that impedes exploration. In this paper, we seek to complement manual chart construction with interactive navigation of a gallery of automatically-generated visualizations. We contribute Voyager, a mixed-initiative system that supports faceted browsing of recommended charts chosen according to statistical and perceptual measures. We describe Voyager's architecture, motivating design principles, and methods for generating and interacting with visualization recommendations. In a study comparing Voyager to a manual visualization specification tool, we find that Voyager facilitates exploration of previously unseen data and leads to increased data variable coverage. We then distill design implications for visualization tools, in particular the need to balance rapid exploration and targeted question-answering.",
                "AuthorNamesDeduped": "Kanit Wongsuphasawat;Dominik Moritz;Anushka Anand;Jock D. Mackinlay;Bill Howe;Jeffrey Heer",
                "AuthorNames": "Kanit Wongsuphasawat;Dominik Moritz;Anushka Anand;Jock Mackinlay;Bill Howe;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;Tableau Research;Tableau Research;Tableau Research;University of Washington;University of Washington",
                "InternalReferences": "0.1109/tvcg.2014.2346297;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346291;10.1109/infvis.2000.885086",
                "AuthorKeywords": "User interfaces, information visualization, exploratory analysis, visualization recommendation, mixed-initiative systems",
                "AminerCitationCount": 487,
                "CitationCountCrossRef": 284,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 3914,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1003,
                "i": [
                    1003
                ]
            }
        },
        {
            "name": "Melanie Tory",
            "value": 769,
            "numPapers": 156,
            "cluster": "5",
            "visible": 1,
            "index": 198,
            "x": -96.91577550689638,
            "y": -102.26109943618279,
            "vy": 0,
            "vx": 0,
            "r": 1.8854346574553829,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Heuristics for Supporting Cooperative Dashboard Design",
                "DOI": "10.1109/tvcg.2023.3327158",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327158",
                "FirstPage": 370,
                "LastPage": 380,
                "PaperType": "J",
                "Abstract": "Dashboards are no longer mere static displays of metrics; through functionality such as interaction and storytelling, they have evolved to support analytic and communicative goals like monitoring and reporting. Existing dashboard design guidelines, however, are often unable to account for this expanded scope as they largely focus on best practices for visual design. In contrast, we frame dashboard design as facilitating an analytical conversation: a cooperative, interactive experience where a user may interact with, reason about, or freely query the underlying data. By drawing on established principles of conversational flow and communication, we define the concept of a cooperative dashboard as one that enables a fruitful and productive analytical conversation, and derive a set of 39 dashboard design heuristics to support effective analytical conversations. To assess the utility of this framing, we asked 52 computer science and engineering graduate students to apply our heuristics to critique and design dashboards as part of an ungraded, opt-in homework assignment. Feedback from participants demonstrates that our heuristics surface new reasons dashboards may fail, and encourage a more fluid, supportive, and responsive style of dashboard design. Our approach suggests several compelling directions for future work, including dashboard authoring tools that better anticipate conversational turn-taking, repair, and refinement and extending cooperative principles to other analytical workflows.",
                "AuthorNamesDeduped": "Vidya Setlur;Michael Correll;Arvind Satyanarayan;Melanie Tory",
                "AuthorNames": "Vidya Setlur;Michael Correll;Arvind Satyanarayan;Melanie Tory",
                "AuthorAffiliation": "Tableau Research, USA;Tableau Research, USA;MIT CSAIL, USA;Northeastern University, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209448;10.1109/tvcg.2021.3114760;10.1109/tvcg.2020.3030338;10.1109/tvcg.2019.2934283;10.1109/tvcg.2016.2599058;10.1109/tvcg.2017.2744684;10.1109/tvcg.2017.2745240;10.1109/tvcg.2021.3114860;10.1109/tvcg.2017.2744319;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209451;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2022.3209409;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2745219;10.1109/vast47406.2019.8986918;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862;10.1109/tvcg.2021.3114826;10.1109/tvcg.2022.3209493",
                "AuthorKeywords": "Gricean maxims,interactive visualization,conversation initiation,grounding,turn-taking,repair and refinement",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 99,
                "DownloadsXplore": 526,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 24,
                "i": [
                    24
                ]
            }
        },
        {
            "name": "Heidi Lam",
            "value": 242,
            "numPapers": 60,
            "cluster": "5",
            "visible": 1,
            "index": 199,
            "x": 140.8926147375733,
            "y": 9.963488967713056,
            "vy": 0,
            "vx": 0,
            "r": 1.2786413356361543,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Bridging from Goals to Tasks with Design Study Analysis Reports",
                "DOI": "10.1109/tvcg.2017.2744319",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744319",
                "FirstPage": 435,
                "LastPage": 445,
                "PaperType": "J",
                "Abstract": "Visualization researchers and practitioners engaged in generating or evaluating designs are faced with the difficult problem of transforming the questions asked and actions taken by target users from domain-specific language and context into more abstract forms. Existing abstract task classifications aim to provide support for this endeavour by providing a carefully delineated suite of actions. Our experience is that this bottom-up approach is part of the challenge: low-level actions are difficult to interpret without a higher-level context of analysis goals and the analysis process. To bridge this gap, we propose a framework based on analysis reports derived from open-coding 20 design study papers published at IEEE InfoVis 2009-2015, to build on the previous work of abstractions that collectively encompass a broad variety of domains. The framework is organized in two axes illustrated by nine analysis goals. It helps situate the analysis goals by placing each goal under axes of specificity (Explore, Describe, Explain, Confirm) and number of data populations (Single, Multiple). The single-population types are Discover Observation, Describe Observation, Identify Main Cause, and Collect Evidence. The multiple-population types are Compare Entities, Explain Differences, and Evaluate Hypothesis. Each analysis goal is scoped by an input and an output and is characterized by analysis steps reported in the design study papers. We provide examples of how we and others have used the framework in a top-down approach to abstracting domain problems: visualization designers or researchers first identify the analysis goals of each unit of analysis in an analysis stream, and then encode the individual steps using existing task classifications with the context of the goal, the level of specificity, and the number of populations involved in the analysis.",
                "AuthorNamesDeduped": "Heidi Lam;Melanie Tory;Tamara Munzner",
                "AuthorNames": "Heidi Lam;Melanie Tory;Tamara Munzner",
                "AuthorAffiliation": "Tableau Research;Tableau Research;University of British Columbia",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2011.176;10.1109/tvcg.2013.214;10.1109/vast.2008.4677365;10.1109/tvcg.2010.164;10.1109/tvcg.2014.2346456;10.1109/tvcg.2013.126;10.1109/tvcg.2010.193;10.1109/tvcg.2012.286;10.1109/tvcg.2013.154;10.1109/tvcg.2009.180;10.1109/tvcg.2014.2346573;10.1109/tvcg.2015.2467811;10.1109/tvcg.2010.137;10.1109/tvcg.2009.111;10.1109/tvcg.2009.116;10.1109/tvcg.2014.2346311;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/tvcg.2011.253;10.1109/tvcg.2015.2467754;10.1109/tvcg.2013.130;10.1109/tvcg.2013.120;10.1109/tvcg.2014.2346321;10.1109/tvcg.2015.2467911;10.1109/visual.1990.146375;10.1109/tvcg.2011.174;10.1109/tvcg.2012.226",
                "AuthorKeywords": "Framework,Data Analysis,Analysis Goals,Design Studies,Open Coding,Task Classifications",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 3365,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 789,
                "i": [
                    789
                ]
            }
        },
        {
            "name": "Alexander Lex",
            "value": 730,
            "numPapers": 119,
            "cluster": "4",
            "visible": 1,
            "index": 200,
            "x": -110.89696608018109,
            "y": 88.04466431426248,
            "vy": 0,
            "vx": 0,
            "r": 1.8405296488198044,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Data Hunches: Incorporating Personal Knowledge into Visualizations",
                "DOI": "10.1109/tvcg.2022.3209451",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209451",
                "FirstPage": 504,
                "LastPage": 514,
                "PaperType": "J",
                "Abstract": "The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.",
                "AuthorNamesDeduped": "Haihan Lin;Derya Akbaba;Miriah Meyer;Alexander Lex",
                "AuthorNames": "Haihan Lin;Derya Akbaba;Miriah Meyer;Alexander Lex",
                "AuthorAffiliation": "University of Utah, USA;University of Utah, USA;Linköping University, Sweden;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2014.2346298;10.1109/tvcg.2016.2599058;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2745240;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598839;10.1109/tvcg.2017.2745958;10.1109/tvcg.2012.262;10.1109/tvcg.2020.3030376;10.1109/tvcg.2018.2864889;10.1109/tvcg.2007.70577;10.1109/tvcg.2011.251",
                "AuthorKeywords": "Data Visualization,Uncertainty,Data Hunches",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 662,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 154,
                "i": [
                    154
                ]
            }
        },
        {
            "name": "Fanny Chevalier",
            "value": 377,
            "numPapers": 147,
            "cluster": "5",
            "visible": 1,
            "index": 201,
            "x": 22.35419371031219,
            "y": -140.17949216473085,
            "vy": 0,
            "vx": 0,
            "r": 1.4340817501439262,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "TimeSplines: Sketch-Based Authoring of Flexible and Idiosyncratic Timelines",
                "DOI": "10.1109/tvcg.2023.3326520",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326520",
                "FirstPage": 34,
                "LastPage": 44,
                "PaperType": "J",
                "Abstract": "Timelines are essential for visually communicating chronological narratives and reflecting on the personal and cultural significance of historical events. Existing visualization tools tend to support conventional linear representations, but fail to capture personal idiosyncratic conceptualizations of time. In response, we built TimeSplines, a visualization authoring tool that allows people to sketch multiple free-form temporal axes and populate them with heterogeneous, time-oriented data via incremental and lazy data binding. Authors can bend, compress, and expand temporal axes to emphasize or de-emphasize intervals based on their personal importance; they can also annotate the axes with text and figurative elements to convey contextual information. The results of two user studies show how people appropriate the concepts in TimeSplines to express their own conceptualization of time, while our curated gallery of images demonstrates the expressive potential of our approach.",
                "AuthorNamesDeduped": "Anna Offenwanger;Matthew Brehmer;Fanny Chevalier;Theophanis Tsandilas",
                "AuthorNames": "Anna Offenwanger;Matthew Brehmer;Fanny Chevalier;Theophanis Tsandilas",
                "AuthorAffiliation": "Université Paris Saclay, CRNS, Inria, LISN, France;Tableau Research, USA;Departments of Computer Science and Statistical Sciences, University of Toronto, Canada;Université Paris Saclay, CRNS, Inria, LISN, France",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2016.2598609;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598876;10.1109/tvcg.2017.2744118;10.1109/tvcg.2013.191;10.1109/tvcg.2022.3209451;10.1109/tvcg.2013.200;10.1109/tvcg.2021.3114959;10.1109/tvcg.2017.2743918;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2015.2467153;10.1109/tvcg.2012.212;10.1109/tvcg.2020.3030476;10.1109/infvis.1999.801851;10.1109/tvcg.2015.2467751;10.1109/tvcg.2018.2865076;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Temporal Data,interaction design,communication / presentation,storytelling,sketch-based interface,lazy data binding",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 520,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 25,
                "i": [
                    25
                ]
            }
        },
        {
            "name": "Danyel Fisher",
            "value": 495,
            "numPapers": 32,
            "cluster": "5",
            "visible": 1,
            "index": 202,
            "x": 78.40042054172021,
            "y": 118.75762737138788,
            "vy": 0,
            "vx": 0,
            "r": 1.5699481865284974,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "A Deeper Understanding of Sequence in Narrative Visualization",
                "DOI": "10.1109/tvcg.2013.119",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.119",
                "FirstPage": 2406,
                "LastPage": 2415,
                "PaperType": "J",
                "Abstract": "Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.",
                "AuthorNamesDeduped": "Jessica Hullman;Steven Mark Drucker;Nathalie Henry Riche;Bongshin Lee;Danyel Fisher;Eytan Adar",
                "AuthorNames": "Jessica Hullman;Steven Drucker;Nathalie Henry Riche;Bongshin Lee;Danyel Fisher;Eytan Adar",
                "AuthorAffiliation": "University of Michigan, USA;Microsoft Research, USA;Microsoft Research, USA;Microsoft Research, USA;Microsoft Research, USA;University of Michigan, USA",
                "InternalReferences": "0.1109/visual.2005.1532788;10.1109/tvcg.2007.70577;10.1109/tvcg.2007.70594;10.1109/tvcg.2010.179;10.1109/tvcg.2008.137;10.1109/tvcg.2011.255;10.1109/tvcg.2007.70584;10.1109/tvcg.2007.70539;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Data storytelling, narrative visualization, narrative structure",
                "AminerCitationCount": 215,
                "CitationCountCrossRef": 139,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3924,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1300,
                "i": [
                    1300
                ]
            }
        },
        {
            "name": "Theophanis Tsandilas",
            "value": 54,
            "numPapers": 42,
            "cluster": "5",
            "visible": 1,
            "index": 203,
            "x": -138.3700490348064,
            "y": -34.69480552049924,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "TimeSplines: Sketch-Based Authoring of Flexible and Idiosyncratic Timelines",
                "DOI": "10.1109/tvcg.2023.3326520",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326520",
                "FirstPage": 34,
                "LastPage": 44,
                "PaperType": "J",
                "Abstract": "Timelines are essential for visually communicating chronological narratives and reflecting on the personal and cultural significance of historical events. Existing visualization tools tend to support conventional linear representations, but fail to capture personal idiosyncratic conceptualizations of time. In response, we built TimeSplines, a visualization authoring tool that allows people to sketch multiple free-form temporal axes and populate them with heterogeneous, time-oriented data via incremental and lazy data binding. Authors can bend, compress, and expand temporal axes to emphasize or de-emphasize intervals based on their personal importance; they can also annotate the axes with text and figurative elements to convey contextual information. The results of two user studies show how people appropriate the concepts in TimeSplines to express their own conceptualization of time, while our curated gallery of images demonstrates the expressive potential of our approach.",
                "AuthorNamesDeduped": "Anna Offenwanger;Matthew Brehmer;Fanny Chevalier;Theophanis Tsandilas",
                "AuthorNames": "Anna Offenwanger;Matthew Brehmer;Fanny Chevalier;Theophanis Tsandilas",
                "AuthorAffiliation": "Université Paris Saclay, CRNS, Inria, LISN, France;Tableau Research, USA;Departments of Computer Science and Statistical Sciences, University of Toronto, Canada;Université Paris Saclay, CRNS, Inria, LISN, France",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2016.2598609;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598876;10.1109/tvcg.2017.2744118;10.1109/tvcg.2013.191;10.1109/tvcg.2022.3209451;10.1109/tvcg.2013.200;10.1109/tvcg.2021.3114959;10.1109/tvcg.2017.2743918;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2015.2467153;10.1109/tvcg.2012.212;10.1109/tvcg.2020.3030476;10.1109/infvis.1999.801851;10.1109/tvcg.2015.2467751;10.1109/tvcg.2018.2865076;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Temporal Data,interaction design,communication / presentation,storytelling,sketch-based interface,lazy data binding",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 520,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 25,
                "i": [
                    25
                ]
            }
        },
        {
            "name": "Kwan-Liu Ma",
            "value": 1755,
            "numPapers": 399,
            "cluster": "6",
            "visible": 1,
            "index": 204,
            "x": 125.77366413533228,
            "y": -68.05134392480893,
            "vy": 0,
            "vx": 0,
            "r": 3.0207253886010363,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "A Declarative Grammar of Flexible Volume Visualization Pipelines",
                "DOI": "10.1109/tvcg.2018.2864841",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864841",
                "FirstPage": 1050,
                "LastPage": 1059,
                "PaperType": "J",
                "Abstract": "This paper presents a declarative grammar for conveniently and effectively specifying advanced volume visualizations. Existing methods for creating volume visualizations either lack the flexibility to specify sophisticated visualizations or are difficult to use for those unfamiliar with volume rendering implementation and parameterization. Our design provides the ability to quickly create expressive visualizations without knowledge of the volume rendering implementation. It attempts to capture aspects of those difficult but powerful methods while remaining flexible and easy to use. As a proof of concept, our current implementation of the grammar allows users to combine multiple data variables in various ways and define transfer functions for diverse input data. The grammar also has the ability to describe advanced shading effects and create animations. We demonstrate the power and flexibility of our approach using multiple practical volume visualizations.",
                "AuthorNamesDeduped": "Min Shih;Charles Rozhon;Kwan-Liu Ma",
                "AuthorNames": "Min Shih;Charles Rozhon;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US",
                "InternalReferences": "0.1109/visual.2005.1532788;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70555;10.1109/tvcg.2014.2346322;10.1109/tvcg.2009.189;10.1109/tvcg.2015.2467449;10.1109/visual.1992.235219;10.1109/visual.2004.95;10.1109/tvcg.2007.70534;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/scivis.2015.7429514;10.1109/tvcg.2016.2599041",
                "AuthorKeywords": "Volume visualization,direct volume rendering,declarative specification,multivariate/multimodal volume data,animation",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 715,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 721,
                "i": [
                    721
                ]
            }
        },
        {
            "name": "Donghao Ren",
            "value": 484,
            "numPapers": 56,
            "cluster": "5",
            "visible": 1,
            "index": 205,
            "x": -46.887784608444356,
            "y": 135.46783992709166,
            "vy": 0,
            "vx": 0,
            "r": 1.5572826712723087,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Critical Reflections on Visualization Authoring Systems",
                "DOI": "10.1109/tvcg.2019.2934281",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934281",
                "FirstPage": 461,
                "LastPage": 471,
                "PaperType": "J",
                "Abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed —Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John T. Stasko;John Thompson 0002;Matthew Brehmer;Zhicheng Liu 0001",
                "AuthorNames": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John Stasko;John Thompson;Matthew Brehmer;Zhicheng Liu",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Microsoft Research;University of California, Santa Barbara;University of Washington;Georgia Institute of Technology;Georgia Institute of Technology;Microsoft Research;Adobe Research",
                "InternalReferences": "0.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Critical reflection,visualization authoring,expressivity,learnability,reusability",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 529,
                "i": [
                    529
                ]
            }
        },
        {
            "name": "Jian Zhao 0010",
            "value": 462,
            "numPapers": 152,
            "cluster": "5",
            "visible": 1,
            "index": 206,
            "x": -57.07197442818704,
            "y": -131.8817263113748,
            "vy": 0,
            "vx": 0,
            "r": 1.531951640759931,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Exploratory Analysis of Time-Series with ChronoLenses",
                "DOI": "10.1109/tvcg.2011.195",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.195",
                "FirstPage": 2422,
                "LastPage": 2431,
                "PaperType": "J",
                "Abstract": "Visual representations of time-series are useful for tasks such as identifying trends, patterns and anomalies in the data. Many techniques have been devised to make these visual representations more scalable, enabling the simultaneous display of multiple variables, as well as the multi-scale display of time-series of very high resolution or that span long time periods. There has been comparatively little research on how to support the more elaborate tasks associated with the exploratory visual analysis of timeseries, e.g., visualizing derived values, identifying correlations, or discovering anomalies beyond obvious outliers. Such tasks typically require deriving new time-series from the original data, trying different functions and parameters in an iterative manner. We introduce a novel visualization technique called ChronoLenses, aimed at supporting users in such exploratory tasks. ChronoLenses perform on-the-fly transformation of the data points in their focus area, tightly integrating visual analysis with user actions, and enabling the progressive construction of advanced visual analysis pipelines.",
                "AuthorNamesDeduped": "Jian Zhao 0010;Fanny Chevalier;Emmanuel Pietriga;Ravin Balakrishnan",
                "AuthorNames": "Jian Zhao;Fanny Chevalier;Emmanuel Pietriga;Ravin Balakrishnan",
                "AuthorAffiliation": "DGP, University of Toronto, Canada;OCAD University, Canada;INRIA, France;DGP, University of Toronto, Canada",
                "InternalReferences": "0.1109/tvcg.2010.162;10.1109/infvis.1999.801851;10.1109/vast.2007.4389007;10.1109/infvis.2001.963273;10.1109/infvis.2005.1532148;10.1109/tvcg.2007.70583;10.1109/tvcg.2010.193",
                "AuthorKeywords": "Time-series Data, Exploratory Visualization, Focus+Context, Lens, Interaction Techniques",
                "AminerCitationCount": 113,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1937,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1557,
                "i": [
                    1557
                ]
            }
        },
        {
            "name": "Ashley Suh 0001",
            "value": 8,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 207,
            "x": 131.48513817335242,
            "y": 58.83586014952461,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Knowledge Graphs in Practice: Characterizing their Users, Challenges, and Visualization Opportunities",
                "DOI": "10.1109/tvcg.2023.3326904",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326904",
                "FirstPage": 584,
                "LastPage": 594,
                "PaperType": "J",
                "Abstract": "This study presents insights from interviews with nineteen Knowledge Graph (KG) practitioners who work in both enterprise and academic settings on a wide variety of use cases. Through this study, we identify critical challenges experienced by KG practitioners when creating, exploring, and analyzing KGs that could be alleviated through visualization design. Our findings reveal three major personas among KG practitioners – KG Builders, Analysts, and Consumers – each of whom have their own distinct expertise and needs. We discover that KG Builders would benefit from schema enforcers, while KG Analysts need customizable query builders that provide interim query results. For KG Consumers, we identify a lack of efficacy for node-link diagrams, and the need for tailored domain-specific visualizations to promote KG adoption and comprehension. Lastly, we find that implementing KGs effectively in practice requires both technical and social solutions that are not addressed with current tools, technologies, and collaborative workflows. From the analysis of our interviews, we distill several visualization research directions to improve KG usability, including knowledge cards that balance digestibility and discoverability, timeline views to track temporal changes, interfaces that support organic discovery, and semantic explanations for AI and machine learning predictions.",
                "AuthorNamesDeduped": "Harry X. Li;Gabriel Appleby;Camelia Daniela Brumar;Remco Chang;Ashley Suh 0001",
                "AuthorNames": "Harry Li;Gabriel Appleby;Camelia Daniela Brumar;Remco Chang;Ashley Suh",
                "AuthorAffiliation": "MIT Lincoln Laboratory, USA;Tufts University, USA;Tufts University, USA;Tufts University, USA;MIT Lincoln Laboratory, USA",
                "InternalReferences": "10.1109/tvcg.2018.2865040;10.1109/tvcg.2011.185;10.1109/tvcg.2020.3030443;10.1109/tvcg.2008.178;10.1109/tvcg.2022.3209453;10.1109/tvcg.2012.219;10.1109/tvcg.2021.3114863;10.1109/tvcg.2014.2346452;10.1109/tvcg.2020.3030378;10.1109/tvcg.2018.2865149;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934802",
                "AuthorKeywords": "Knowledge graphs,visualization techniques and methodologies,human factors,visual communication",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 490,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 26,
                "i": [
                    26
                ]
            }
        },
        {
            "name": "Zoya Bylinskii",
            "value": 296,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 208,
            "x": -137.0250936004584,
            "y": 45.54254849902024,
            "vy": 0,
            "vx": 0,
            "r": 1.3408175014392631,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Beyond Memorability: Visualization Recognition and Recall",
                "DOI": "10.1109/tvcg.2015.2467732",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467732",
                "FirstPage": 519,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.",
                "AuthorNamesDeduped": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorNames": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorAffiliation": "University of British Columbia, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;University of Michigan;School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)",
                "InternalReferences": "0.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2011.193;10.1109/tvcg.2012.233;10.1109/tvcg.2011.175;10.1109/tvcg.2013.234;10.1109/tvcg.2012.215;10.1109/vast.2010.5653598;10.1109/tvcg.2012.245;10.1109/tvcg.2012.221",
                "AuthorKeywords": "Information visualization, memorability, recognition, recall, eye-tracking study",
                "AminerCitationCount": 295,
                "CitationCountCrossRef": 188,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 5067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1004,
                "i": [
                    1004
                ]
            }
        },
        {
            "name": "Aude Oliva",
            "value": 296,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 209,
            "x": 70.44281242270891,
            "y": -126.4429127234067,
            "vy": 0,
            "vx": 0,
            "r": 1.3408175014392631,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Beyond Memorability: Visualization Recognition and Recall",
                "DOI": "10.1109/tvcg.2015.2467732",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467732",
                "FirstPage": 519,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.",
                "AuthorNamesDeduped": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorNames": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorAffiliation": "University of British Columbia, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;University of Michigan;School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)",
                "InternalReferences": "0.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2011.193;10.1109/tvcg.2012.233;10.1109/tvcg.2011.175;10.1109/tvcg.2013.234;10.1109/tvcg.2012.215;10.1109/vast.2010.5653598;10.1109/tvcg.2012.245;10.1109/tvcg.2012.221",
                "AuthorKeywords": "Information visualization, memorability, recognition, recall, eye-tracking study",
                "AminerCitationCount": 295,
                "CitationCountCrossRef": 188,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 5067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1004,
                "i": [
                    1004
                ]
            }
        },
        {
            "name": "Xiaoyu Zhang",
            "value": 100,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 210,
            "x": 33.54840504703213,
            "y": 141.1541870395642,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Bei Chen",
            "value": 81,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 211,
            "x": -120.37073467709416,
            "y": -81.61425263577802,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Lei Fang 0004",
            "value": 81,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 212,
            "x": 144.22702700833193,
            "y": -21.17934560693239,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Jian-Guang Lou",
            "value": 105,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 213,
            "x": -92.25839410432938,
            "y": 113.30661374028546,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements",
                "DOI": "10.1109/tvcg.2019.2934785",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934785",
                "FirstPage": 906,
                "LastPage": 916,
                "PaperType": "J",
                "Abstract": "Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.",
                "AuthorNamesDeduped": "Weiwei Cui;Xiaoyu Zhang;Yun Wang 0012;He Huang;Bei Chen;Lei Fang 0004;Haidong Zhang;Jian-Guang Lou;Dongmei Zhang 0001",
                "AuthorNames": "Weiwei Cui;Xiaoyu Zhang;Yun Wang;He Huang;Bei Chen;Lei Fang;Haidong Zhang;Jian-Guan Lou;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia;ViDi Research Group, University of California, Davis;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia;Microsoft Research Asia",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2016.2598876;10.1109/tvcg.2015.2467321;10.1109/tvcg.2016.2598620;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.221;10.1109/tvcg.2018.2865240;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2010.179;10.1109/tvcg.2015.2467471;10.1109/tvcg.2018.2865145;10.1109/tvcg.2007.70577;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization for the masses,infographic,automatic visualization,presentation,and dissemination",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 2302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 522,
                "i": [
                    522
                ]
            }
        },
        {
            "name": "Miriah Meyer",
            "value": 88,
            "numPapers": 92,
            "cluster": "5",
            "visible": 1,
            "index": 214,
            "x": -8.528953539277508,
            "y": -146.2096335797503,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Data Hunches: Incorporating Personal Knowledge into Visualizations",
                "DOI": "10.1109/tvcg.2022.3209451",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209451",
                "FirstPage": 504,
                "LastPage": 514,
                "PaperType": "J",
                "Abstract": "The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.",
                "AuthorNamesDeduped": "Haihan Lin;Derya Akbaba;Miriah Meyer;Alexander Lex",
                "AuthorNames": "Haihan Lin;Derya Akbaba;Miriah Meyer;Alexander Lex",
                "AuthorAffiliation": "University of Utah, USA;University of Utah, USA;Linköping University, Sweden;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2014.2346298;10.1109/tvcg.2016.2599058;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2745240;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598839;10.1109/tvcg.2017.2745958;10.1109/tvcg.2012.262;10.1109/tvcg.2020.3030376;10.1109/tvcg.2018.2864889;10.1109/tvcg.2007.70577;10.1109/tvcg.2011.251",
                "AuthorKeywords": "Data Visualization,Uncertainty,Data Hunches",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 662,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 154,
                "i": [
                    154
                ]
            }
        },
        {
            "name": "Marc Streit",
            "value": 881,
            "numPapers": 150,
            "cluster": "4",
            "visible": 1,
            "index": 215,
            "x": 105.29676543304284,
            "y": 102.28680848153759,
            "vy": 0,
            "vx": 0,
            "r": 2.014392630972942,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Bingchang Chen",
            "value": 4,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 216,
            "x": -147.07636556877492,
            "y": -4.306122511039656,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Supporting Guided Exploratory Visual Analysis on Time Series Data with Reinforcement Learning",
                "DOI": "10.1109/tvcg.2023.3327200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327200",
                "FirstPage": 1172,
                "LastPage": 1182,
                "PaperType": "J",
                "Abstract": "The exploratory visual analysis (EVA) of time series data uses visualization as the main output medium and input interface for exploring new data. However, for users who lack visual analysis expertise, interpreting and manipulating EVA can be challenging. Thus, providing guidance on EVA is necessary and two relevant questions need to be answered. First, how to recommend interesting insights to provide a first glance at data and help develop an exploration goal. Second, how to provide step-by-step EVA suggestions to help identify which parts of the data to explore. In this work, we present a reinforcement learning (RL)-based system, Visail, which generates EVA sequences to guide the exploration of time series data. As a user uploads a time series dataset, Visail can generate step-by-step EVA suggestions, while each step is visualized as an annotated chart combined with textual descriptions. The RL-based algorithm uses exploratory data analysis knowledge to construct the state and action spaces for the agent to imitate human analysis behaviors in data exploration tasks. In this way, the agent learns the strategy of generating coherent EVA sequences through a well-designed network. To evaluate the effectiveness of our system, we conducted an ablation study, a user study, and two case studies. The results of our evaluation suggested that Visail can provide effective guidance on supporting EVA on time series data.",
                "AuthorNamesDeduped": "Yang Shi 0007;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao 0001",
                "AuthorNames": "Yang Shi;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Huawei Cloud Computing Technologies Co., Ltd., China;Huawei Cloud Computing Technologies Co., Ltd., China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "10.1109/tvcg.2018.2865040;10.1109/vast.2014.7042480;10.1109/tvcg.2016.2598876;10.1109/tvcg.2016.2598468;10.1109/tvcg.2022.3209468;10.1109/tvcg.2021.3114875;10.1109/tvcg.2020.3028889;10.1109/tvcg.2018.2865077;10.1109/tvcg.2012.229;10.1109/tvcg.2018.2864526;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2022.3209409;10.1109/tvcg.2022.3209486;10.1109/tvcg.2012.191;10.1109/tvcg.2018.2865145;10.1109/tvcg.2015.2467751;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/vast.2009.5332595;10.1109/tvcg.2021.3114826;10.1109/tvcg.2023.3326913;10.1109/tvcg.2021.3114774;10.1109/tvcg.2011.195;10.1109/tvcg.2021.3114865",
                "AuthorKeywords": "Time Series Data,Exploratory Visual Analysis,Reinforcement Learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 448,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 29,
                "i": [
                    29
                ]
            }
        },
        {
            "name": "Ying Chen",
            "value": 4,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 217,
            "x": 111.6151608170312,
            "y": -96.39531044500171,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Supporting Guided Exploratory Visual Analysis on Time Series Data with Reinforcement Learning",
                "DOI": "10.1109/tvcg.2023.3327200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327200",
                "FirstPage": 1172,
                "LastPage": 1182,
                "PaperType": "J",
                "Abstract": "The exploratory visual analysis (EVA) of time series data uses visualization as the main output medium and input interface for exploring new data. However, for users who lack visual analysis expertise, interpreting and manipulating EVA can be challenging. Thus, providing guidance on EVA is necessary and two relevant questions need to be answered. First, how to recommend interesting insights to provide a first glance at data and help develop an exploration goal. Second, how to provide step-by-step EVA suggestions to help identify which parts of the data to explore. In this work, we present a reinforcement learning (RL)-based system, Visail, which generates EVA sequences to guide the exploration of time series data. As a user uploads a time series dataset, Visail can generate step-by-step EVA suggestions, while each step is visualized as an annotated chart combined with textual descriptions. The RL-based algorithm uses exploratory data analysis knowledge to construct the state and action spaces for the agent to imitate human analysis behaviors in data exploration tasks. In this way, the agent learns the strategy of generating coherent EVA sequences through a well-designed network. To evaluate the effectiveness of our system, we conducted an ablation study, a user study, and two case studies. The results of our evaluation suggested that Visail can provide effective guidance on supporting EVA on time series data.",
                "AuthorNamesDeduped": "Yang Shi 0007;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao 0001",
                "AuthorNames": "Yang Shi;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Huawei Cloud Computing Technologies Co., Ltd., China;Huawei Cloud Computing Technologies Co., Ltd., China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "10.1109/tvcg.2018.2865040;10.1109/vast.2014.7042480;10.1109/tvcg.2016.2598876;10.1109/tvcg.2016.2598468;10.1109/tvcg.2022.3209468;10.1109/tvcg.2021.3114875;10.1109/tvcg.2020.3028889;10.1109/tvcg.2018.2865077;10.1109/tvcg.2012.229;10.1109/tvcg.2018.2864526;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2022.3209409;10.1109/tvcg.2022.3209486;10.1109/tvcg.2012.191;10.1109/tvcg.2018.2865145;10.1109/tvcg.2015.2467751;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/vast.2009.5332595;10.1109/tvcg.2021.3114826;10.1109/tvcg.2023.3326913;10.1109/tvcg.2021.3114774;10.1109/tvcg.2011.195;10.1109/tvcg.2021.3114865",
                "AuthorKeywords": "Time Series Data,Exploratory Visual Analysis,Reinforcement Learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 448,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 29,
                "i": [
                    29
                ]
            }
        },
        {
            "name": "Zhuochen Jin",
            "value": 115,
            "numPapers": 40,
            "cluster": "3",
            "visible": 1,
            "index": 218,
            "x": -17.226915356666936,
            "y": 146.81019510679164,
            "vy": 0,
            "vx": 0,
            "r": 1.132412204951065,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Supporting Guided Exploratory Visual Analysis on Time Series Data with Reinforcement Learning",
                "DOI": "10.1109/tvcg.2023.3327200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327200",
                "FirstPage": 1172,
                "LastPage": 1182,
                "PaperType": "J",
                "Abstract": "The exploratory visual analysis (EVA) of time series data uses visualization as the main output medium and input interface for exploring new data. However, for users who lack visual analysis expertise, interpreting and manipulating EVA can be challenging. Thus, providing guidance on EVA is necessary and two relevant questions need to be answered. First, how to recommend interesting insights to provide a first glance at data and help develop an exploration goal. Second, how to provide step-by-step EVA suggestions to help identify which parts of the data to explore. In this work, we present a reinforcement learning (RL)-based system, Visail, which generates EVA sequences to guide the exploration of time series data. As a user uploads a time series dataset, Visail can generate step-by-step EVA suggestions, while each step is visualized as an annotated chart combined with textual descriptions. The RL-based algorithm uses exploratory data analysis knowledge to construct the state and action spaces for the agent to imitate human analysis behaviors in data exploration tasks. In this way, the agent learns the strategy of generating coherent EVA sequences through a well-designed network. To evaluate the effectiveness of our system, we conducted an ablation study, a user study, and two case studies. The results of our evaluation suggested that Visail can provide effective guidance on supporting EVA on time series data.",
                "AuthorNamesDeduped": "Yang Shi 0007;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao 0001",
                "AuthorNames": "Yang Shi;Bingchang Chen;Ying Chen;Zhuochen Jin;Ke Xu;Xiaohan Jiao;Tian Gao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Huawei Cloud Computing Technologies Co., Ltd., China;Huawei Cloud Computing Technologies Co., Ltd., China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "10.1109/tvcg.2018.2865040;10.1109/vast.2014.7042480;10.1109/tvcg.2016.2598876;10.1109/tvcg.2016.2598468;10.1109/tvcg.2022.3209468;10.1109/tvcg.2021.3114875;10.1109/tvcg.2020.3028889;10.1109/tvcg.2018.2865077;10.1109/tvcg.2012.229;10.1109/tvcg.2018.2864526;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2022.3209409;10.1109/tvcg.2022.3209486;10.1109/tvcg.2012.191;10.1109/tvcg.2018.2865145;10.1109/tvcg.2015.2467751;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/vast.2009.5332595;10.1109/tvcg.2021.3114826;10.1109/tvcg.2023.3326913;10.1109/tvcg.2021.3114774;10.1109/tvcg.2011.195;10.1109/tvcg.2021.3114865",
                "AuthorKeywords": "Time Series Data,Exploratory Visual Analysis,Reinforcement Learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 448,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 29,
                "i": [
                    29
                ]
            }
        },
        {
            "name": "Tian Gao",
            "value": 19,
            "numPapers": 31,
            "cluster": "5",
            "visible": 1,
            "index": 219,
            "x": -86.66390809187024,
            "y": -120.16391735560168,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Breaking the Fourth Wall of Data Stories through Interaction",
                "DOI": "10.1109/tvcg.2022.3209409",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209409",
                "FirstPage": 972,
                "LastPage": 982,
                "PaperType": "J",
                "Abstract": "Interaction is increasingly integrating into data stories to support data exploration and explanation. Interaction can also be combined with the narrative device, breaking the fourth wall (BTFW), to build a deeper connection between readers and data stories. BTFW interaction directly addresses readers by requiring their input. Such user input is then integrated into the narrative or visuals of data stories to encourage readers to inspect the stories more closely. In this work, we explore the design patterns of BTFW interaction commonly used in data stories. Six design patterns were identified through the analysis of 58 high-quality data stories collected from a range of online sources. Specifically, the data stories were categorized using a coding framework, including the input of BTFW interaction provided by readers and the output of BTFW interaction generated by data stories to respond to the input. To explore the benefits as well as concerns of using BTFW interaction, we conducted a three-session user study including the reading, interview, and recall sessions. The results of our user study suggested that BTFW interaction has a positive impact on self-story connection, user engagement, and information recall. We also discussed design implications to address the possible negative effects on the interactivity-comprehensibility balance, information privacy, and the learning curve of interaction brought by BTFW interaction.",
                "AuthorNamesDeduped": "Yang Shi 0007;Tian Gao;Xiaohan Jiao;Nan Cao 0001",
                "AuthorNames": "Yang Shi;Tian Gao;Xiaohan Jiao;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2015.2467201;10.1109/tvcg.2013.124;10.1109/tvcg.2019.2934283;10.1109/tvcg.2013.130;10.1109/tvcg.2013.120;10.1109/tvcg.2010.179;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Interaction,data-driven storytelling,narrative devices",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 991,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 185,
                "i": [
                    185
                ]
            }
        },
        {
            "name": "Yunhai Wang",
            "value": 439,
            "numPapers": 246,
            "cluster": "5",
            "visible": 1,
            "index": 220,
            "x": 145.40291507231137,
            "y": 30.132910388381106,
            "vy": 0,
            "vx": 0,
            "r": 1.5054691997697178,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "KD-Box: Line-segment-based KD-tree for Interactive Exploration of Large-scale Time-Series Data",
                "DOI": "10.1109/tvcg.2021.3114865",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114865",
                "FirstPage": 890,
                "LastPage": 900,
                "PaperType": "J",
                "Abstract": "Time-series data-usually presented in the form of lines-plays an important role in many domains such as finance, meteorology, health, and urban informatics. Yet, little has been done to support interactive exploration of large-scale time-series data, which requires a clutter-free visual representation with low-latency interactions. In this paper, we contribute a novel line-segment-based KD-tree method to enable interactive analysis of many time series. Our method enables not only fast queries over time series in selected regions of interest but also a line splatting method for efficient computation of the density field and selection of representative lines. Further, we develop KD-Box, an interactive system that provides rich interactions, e.g., timebox, attribute filtering, and coordinated multiple views. We demonstrate the effectiveness of KD-Box in supporting efficient line query and density field computation through a quantitative comparison and show its usefulness for interactive visual analysis on several real-world datasets.",
                "AuthorNamesDeduped": "Yue Zhao;Yunhai Wang;Jian Zhang 0070;Chi-Wing Fu;Mingliang Xu;Dominik Moritz",
                "AuthorNames": "Yue Zhao;Yunhai Wang;Jian Zhang;Chi-Wing Fu;Mingliang Xu;Dominik Moritz",
                "AuthorAffiliation": "Shandong University, Qingdao, China;Shandong University, Qingdao, China;CNIC, CAS., United States;Chinese University of Hong Kong, China;Zhengzhou University, China;Carnegie Mellon University, United States",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.226;10.1109/vast.2008.4677357;10.1109/tvcg.2010.176;10.1109/tvcg.2010.162;10.1109/tvcg.2013.179;10.1109/tvcg.2014.2346452;10.1109/visual.2005.1532779;10.1109/tvcg.2006.170;10.1109/tvcg.2011.181;10.1109/infvis.1999.801851;10.1109/infvis.2001.963273;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Many time series,density-based visualization,interactive visualization for large-scale data",
                "AminerCitationCount": 4,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1015,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 275,
                "i": [
                    275
                ]
            }
        },
        {
            "name": "Chi-Wing Fu",
            "value": 361,
            "numPapers": 151,
            "cluster": "2",
            "visible": 1,
            "index": 221,
            "x": -127.85902016896397,
            "y": 76.17132637306814,
            "vy": 0,
            "vx": 0,
            "r": 1.4156591824985607,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "LassoNet: Deep Lasso-Selection of 3D Point Clouds",
                "DOI": "10.1109/tvcg.2019.2934332",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934332",
                "FirstPage": 195,
                "LastPage": 204,
                "PaperType": "J",
                "Abstract": "Selection is a fundamental task in exploratory analysis and visualization of 3D point clouds. Prior researches on selection methods were developed mainly based on heuristics such as local point density, thus limiting their applicability in general data. Specific challenges root in the great variabilities implied by point clouds (e.g., dense vs. sparse), viewpoint (e.g., occluded vs. non-occluded), and lasso (e.g., small vs. large). In this work, we introduce LassoNet, a new deep neural network for lasso selection of 3D point clouds, attempting to learn a latent mapping from viewpoint and lasso to point cloud regions. To achieve this, we couple user-target points with viewpoint and lasso information through 3D coordinate transform and naive selection, and improve the method scalability via an intention filtering and farthest point sampling. A hierarchical network is trained using a dataset with over 30K lasso-selection records on two different point cloud data. We conduct a formal user study to compare LassoNet with two state-of-the-art lasso-selection methods. The evaluations confirm that our approach improves the selection effectiveness and efficiency across different combinations of 3D point clouds, viewpoints, and lasso selections. Project Website: https://LassoNet.github.io",
                "AuthorNamesDeduped": "Zhutian Chen;Wei Zeng 0004;Zhiguang Yang;Lingyun Yu 0005;Chi-Wing Fu;Huamin Qu",
                "AuthorNames": "Zhutian Chen;Wei Zeng;Zhiguang Yang;Lingyun Yu;Chi-Wing Fu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences;Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences;University of Groningen;Chinese University of Hong Kong;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2018.2865138;10.1109/tvcg.2018.2865191;10.1109/tvcg.2016.2599049;10.1109/tvcg.2012.292;10.1109/infvis.1996.559216;10.1109/tvcg.2012.217;10.1109/tvcg.2015.2467202",
                "AuthorKeywords": "Point Clouds,Lasso Selection,Deep Learning",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1362,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 576,
                "i": [
                    576
                ]
            }
        },
        {
            "name": "Danielle Albers Szafir",
            "value": 195,
            "numPapers": 117,
            "cluster": "5",
            "visible": 1,
            "index": 222,
            "x": 42.92283501872332,
            "y": -142.85527723523364,
            "vy": 0,
            "vx": 0,
            "r": 1.224525043177893,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Color Crafting: Automating the Construction of Designer Quality Color Ramps",
                "DOI": "10.1109/tvcg.2019.2934284",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934284",
                "FirstPage": 1215,
                "LastPage": 1225,
                "PaperType": "J",
                "Abstract": "Visualizations often encode numeric data using sequential and diverging color ramps. Effective ramps use colors that are sufficiently discriminable, align well with the data, and are aesthetically pleasing. Designers rely on years of experience to create high-quality color ramps. However, it is challenging for novice visualization developers that lack this experience to craft effective ramps as most guidelines for constructing ramps are loosely defined qualitative heuristics that are often difficult to apply. Our goal is to enable visualization developers to readily create effective color encodings using a single seed color. We do this using an algorithmic approach that models designer practices by analyzing patterns in the structure of designer-crafted color ramps. We construct these models from a corpus of 222 expert-designed color ramps, and use the results to automatically generate ramps that mimic designer practices. We evaluate our approach through an empirical study comparing the outputs of our approach with designer-crafted color ramps. Our models produce ramps that support accurate and aesthetically pleasing visualizations at least as well as designer ramps and that outperform conventional mathematical approaches.",
                "AuthorNamesDeduped": "Stephen Smart;Keke Wu;Danielle Albers Szafir",
                "AuthorNames": "Stephen Smart;Keke Wu;Danielle Albers Szafir",
                "AuthorAffiliation": "University of Colorado Boulder;University of Colorado Boulder;University of Colorado Boulder",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2017.2743978;10.1109/tvcg.2014.2346978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2008.174;10.1109/tvcg.2012.279;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599106;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865147;10.1109/tvcg.2017.2744359;10.1109/tvcg.2014.2346277;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization,Aesthetics in Visualization,Color Perception,Visual Design,Design Mining",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 998,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 531,
                "i": [
                    531
                ]
            }
        },
        {
            "name": "Jinwook Seo",
            "value": 208,
            "numPapers": 68,
            "cluster": "5",
            "visible": 1,
            "index": 223,
            "x": 64.99295165945037,
            "y": 134.63252294520947,
            "vy": 0,
            "vx": 0,
            "r": 1.2394933793897525,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering",
                "DOI": "10.1109/tvcg.2023.3327201",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327201",
                "FirstPage": 770,
                "LastPage": 780,
                "PaperType": "J",
                "Abstract": "Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.",
                "AuthorNamesDeduped": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen 0001;Danielle Albers Szafir;Jinwook Seo",
                "AuthorNames": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen;Danielle Albers Szafir;Jinwook Seo",
                "AuthorAffiliation": "Seoul National University, South Korea;University of North Carolina, Chapel Hill, USA;UNIST, South Korea;University of Utah, USA;Seoul National University, South Korea;Seoul National University, South Korea",
                "InternalReferences": "10.1109/infvis.2005.1532136;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/tvcg.2014.2346572;10.1109/tvcg.2021.3114833;10.1109/tvcg.2017.2744718;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864912;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Cluster,scatterplot,perception,cluster analysis,cluster ambiguity,visual quality measure",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 382,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 30,
                "i": [
                    30
                ]
            }
        },
        {
            "name": "Hyunwook Lee",
            "value": 0,
            "numPapers": 30,
            "cluster": "1",
            "visible": 1,
            "index": 224,
            "x": -139.1770595192963,
            "y": -55.495460206779626,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering",
                "DOI": "10.1109/tvcg.2023.3327201",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327201",
                "FirstPage": 770,
                "LastPage": 780,
                "PaperType": "J",
                "Abstract": "Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.",
                "AuthorNamesDeduped": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen 0001;Danielle Albers Szafir;Jinwook Seo",
                "AuthorNames": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen;Danielle Albers Szafir;Jinwook Seo",
                "AuthorAffiliation": "Seoul National University, South Korea;University of North Carolina, Chapel Hill, USA;UNIST, South Korea;University of Utah, USA;Seoul National University, South Korea;Seoul National University, South Korea",
                "InternalReferences": "10.1109/infvis.2005.1532136;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/tvcg.2014.2346572;10.1109/tvcg.2021.3114833;10.1109/tvcg.2017.2744718;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864912;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Cluster,scatterplot,perception,cluster analysis,cluster ambiguity,visual quality measure",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 382,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 30,
                "i": [
                    30
                ]
            }
        },
        {
            "name": "Duen Horng (Polo) Chau",
            "value": 360,
            "numPapers": 37,
            "cluster": "1",
            "visible": 1,
            "index": 225,
            "x": 140.42318253930216,
            "y": -53.21024154741102,
            "vy": 0,
            "vx": 0,
            "r": 1.4145077720207253,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations",
                "DOI": "10.1109/tvcg.2019.2934659",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934659",
                "FirstPage": 1096,
                "LastPage": 1106,
                "PaperType": "J",
                "Abstract": "Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a prevalent, large-scale image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.",
                "AuthorNamesDeduped": "Fred Hohman;Haekyu Park;Caleb Robinson;Duen Horng (Polo) Chau",
                "AuthorNames": "Fred Hohman;Haekyu Park;Caleb Robinson;Duen Horng Polo Chau",
                "AuthorAffiliation": "Georgia Tech.;Georgia Tech.;Georgia Tech.;Georgia Tech.",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2009.108;10.1109/tvcg.2017.2744878",
                "AuthorKeywords": "Deep learning interpretability,visual analytics,scalable summarization,attribution graph",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2485,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 597,
                "i": [
                    597
                ]
            }
        },
        {
            "name": "Paul Rosen 0001",
            "value": 50,
            "numPapers": 74,
            "cluster": "5",
            "visible": 1,
            "index": 226,
            "x": -67.75040768548475,
            "y": 134.38706135060252,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Persistent Homology Guided Force-Directed Graph Layouts",
                "DOI": "10.1109/tvcg.2019.2934802",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934802",
                "FirstPage": 697,
                "LastPage": 707,
                "PaperType": "J",
                "Abstract": "Graphs are commonly used to encode relationships among entities, yet their abstractness makes them difficult to analyze. Node-link diagrams are popular for drawing graphs, and force-directed layouts provide a flexible method for node arrangements that use local relationships in an attempt to reveal the global shape of the graph. However, clutter and overlap of unrelated structures can lead to confusing graph visualizations. This paper leverages the persistent homology features of an undirected graph as derived information for interactive manipulation of force-directed layouts. We first discuss how to efficiently extract 0-dimensional persistent homology features from both weighted and unweighted undirected graphs. We then introduce the interactive persistence barcode used to manipulate the force-directed graph layout. In particular, the user adds and removes contracting and repulsing forces generated by the persistent homology features, eventually selecting the set of persistent homology features that most improve the layout. Finally, we demonstrate the utility of our approach across a variety of synthetic and real datasets.",
                "AuthorNamesDeduped": "Ashley Suh 0001;Mustafa Hajij;Bei Wang 0001;Carlos Scheidegger;Paul Rosen 0001",
                "AuthorNames": "Ashley Suh;Mustafa Hajij;Bei Wang;Carlos Scheidegger;Paul Rosen",
                "AuthorAffiliation": "University of South Florida, Tufts University;Ohio State University;University of Utah;University of Arizona;University of South Florida",
                "InternalReferences": "0.1109/tvcg.2016.2598958;10.1109/tvcg.2009.122;10.1109/tvcg.2012.208;10.1109/tvcg.2013.151;10.1109/tvcg.2011.223;10.1109/infvis.2002.1173159;10.1109/tvcg.2008.158;10.1109/tvcg.2017.2744321;10.1109/tvcg.2011.190;10.1109/tvcg.2014.2346441;10.1109/tvcg.2017.2745919;10.1109/tvcg.2018.2864911;10.1109/infvis.2003.1249008",
                "AuthorKeywords": "Graph drawing,force-directed layout,Topological Data Analysis,persistent homology",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 882,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 553,
                "i": [
                    553
                ]
            }
        },
        {
            "name": "Hyeon Jeon",
            "value": 18,
            "numPapers": 31,
            "cluster": "1",
            "visible": 1,
            "index": 227,
            "x": -40.91012489722213,
            "y": -145.1770012119471,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering",
                "DOI": "10.1109/tvcg.2023.3327201",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327201",
                "FirstPage": 770,
                "LastPage": 780,
                "PaperType": "J",
                "Abstract": "Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.",
                "AuthorNamesDeduped": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen 0001;Danielle Albers Szafir;Jinwook Seo",
                "AuthorNames": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen;Danielle Albers Szafir;Jinwook Seo",
                "AuthorAffiliation": "Seoul National University, South Korea;University of North Carolina, Chapel Hill, USA;UNIST, South Korea;University of Utah, USA;Seoul National University, South Korea;Seoul National University, South Korea",
                "InternalReferences": "10.1109/infvis.2005.1532136;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/tvcg.2014.2346572;10.1109/tvcg.2021.3114833;10.1109/tvcg.2017.2744718;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864912;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Cluster,scatterplot,perception,cluster analysis,cluster ambiguity,visual quality measure",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 382,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 30,
                "i": [
                    30
                ]
            }
        },
        {
            "name": "Oliver Deussen",
            "value": 575,
            "numPapers": 245,
            "cluster": "5",
            "visible": 1,
            "index": 228,
            "x": 128.51302663490463,
            "y": 79.58895642698361,
            "vy": 0,
            "vx": 0,
            "r": 1.6620610247553254,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Optimizing Color Assignment for Perception of Class Separability in Multiclass Scatterplots",
                "DOI": "10.1109/tvcg.2018.2864912",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864912",
                "FirstPage": 820,
                "LastPage": 829,
                "PaperType": "J",
                "Abstract": "Appropriate choice of colors significantly aids viewers in understanding the structures in multiclass scatterplots and becomes more important with a growing number of data points and groups. An appropriate color mapping is also an important parameter for the creation of an aesthetically pleasing scatterplot. Currently, users of visualization software routinely rely on color mappings that have been pre-defined by the software. A default color mapping, however, cannot ensure an optimal perceptual separability between groups, and sometimes may even lead to a misinterpretation of the data. In this paper, we present an effective approach for color assignment based on a set of given colors that is designed to optimize the perception of scatterplots. Our approach takes into account the spatial relationships, density, degree of overlap between point clusters, and also the background color. For this purpose, we use a genetic algorithm that is able to efficiently find good color assignments. We implemented an interactive color assignment system with three extensions of the basic method that incorporates top K suggestions, user-defined color subsets, and classes of interest for the optimization. To demonstrate the effectiveness of our assignment technique, we conducted a numerical study and a controlled user study to compare our approach with default color assignments; our findings were verified by two expert studies. The results show that our approach is able to support users in distinguishing cluster numbers faster and more precisely than default assignment methods.",
                "AuthorNamesDeduped": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Universitat Stuttgart, Stuttgart, Baden-Württemberg, DE;Chinese University of Hong Kong, New Territories, HK;Universitat Konstanz, Konstanz, Baden-Württemberg, DE;Peking University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2016.2599214;10.1109/tvcg.2013.183;10.1109/tvcg.2016.2598918;10.1109/visual.1996.568118;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.153;10.1109/tvcg.2015.2467471;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2008.118",
                "AuthorKeywords": "Color perception,visual design,scatterplots",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1169,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 667,
                "i": [
                    667
                ]
            }
        },
        {
            "name": "Baoquan Chen",
            "value": 318,
            "numPapers": 75,
            "cluster": "2",
            "visible": 1,
            "index": 229,
            "x": -148.84771485100728,
            "y": 28.184353525195814,
            "vy": 0,
            "vx": 0,
            "r": 1.3661485319516409,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Optimizing Color Assignment for Perception of Class Separability in Multiclass Scatterplots",
                "DOI": "10.1109/tvcg.2018.2864912",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864912",
                "FirstPage": 820,
                "LastPage": 829,
                "PaperType": "J",
                "Abstract": "Appropriate choice of colors significantly aids viewers in understanding the structures in multiclass scatterplots and becomes more important with a growing number of data points and groups. An appropriate color mapping is also an important parameter for the creation of an aesthetically pleasing scatterplot. Currently, users of visualization software routinely rely on color mappings that have been pre-defined by the software. A default color mapping, however, cannot ensure an optimal perceptual separability between groups, and sometimes may even lead to a misinterpretation of the data. In this paper, we present an effective approach for color assignment based on a set of given colors that is designed to optimize the perception of scatterplots. Our approach takes into account the spatial relationships, density, degree of overlap between point clusters, and also the background color. For this purpose, we use a genetic algorithm that is able to efficiently find good color assignments. We implemented an interactive color assignment system with three extensions of the basic method that incorporates top K suggestions, user-defined color subsets, and classes of interest for the optimization. To demonstrate the effectiveness of our assignment technique, we conducted a numerical study and a controlled user study to compare our approach with default color assignments; our findings were verified by two expert studies. The results show that our approach is able to support users in distinguishing cluster numbers faster and more precisely than default assignment methods.",
                "AuthorNamesDeduped": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Universitat Stuttgart, Stuttgart, Baden-Württemberg, DE;Chinese University of Hong Kong, New Territories, HK;Universitat Konstanz, Konstanz, Baden-Württemberg, DE;Peking University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2016.2599214;10.1109/tvcg.2013.183;10.1109/tvcg.2016.2598918;10.1109/visual.1996.568118;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.153;10.1109/tvcg.2015.2467471;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2008.118",
                "AuthorKeywords": "Color perception,visual design,scatterplots",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1169,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 667,
                "i": [
                    667
                ]
            }
        },
        {
            "name": "Changjian Chen",
            "value": 63,
            "numPapers": 78,
            "cluster": "1",
            "visible": 1,
            "index": 230,
            "x": 90.91484184756672,
            "y": -121.59149448802711,
            "vy": 0,
            "vx": 0,
            "r": 1.072538860103627,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VideoPro: A Visual Analytics Approach for Interactive Video Programming",
                "DOI": "10.1109/tvcg.2023.3326586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326586",
                "FirstPage": 87,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "Constructing supervised machine learning models for real-world video analysis require substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.",
                "AuthorNamesDeduped": "Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorNames": "Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuang University, Chengdu, China;Sichuang University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Interactive machine learning,data programming,video exploration and analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 381,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 31,
                "i": [
                    31
                ]
            }
        },
        {
            "name": "Daniel Weiskopf",
            "value": 910,
            "numPapers": 245,
            "cluster": "6",
            "visible": 1,
            "index": 231,
            "x": 15.128810437382747,
            "y": 151.39722287660936,
            "vy": 0,
            "vx": 0,
            "r": 2.0477835348301667,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visual analysis and coding of data-rich user behavior",
                "DOI": "10.1109/vast.2016.7883520",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883520",
                "FirstPage": 141,
                "LastPage": 150,
                "PaperType": "C",
                "Abstract": "Investigating user behavior involves abstracting low-level events to higher-level concepts. This requires an analyst to study individual user activities, assign codes which categorize behavior, and develop a consistent classification scheme. To better support this reasoning process of an analyst, we suggest a novel visual analytics approach which integrates rich user data including transcripts, videos, eye movement data, and interaction logs. Word-sized visualizations embedded into a tabular representation provide a space-efficient and detailed overview of user activities. An analyst assigns codes, grouped into code categories, as part of an interactive process. Filtering and searching helps to select specific activities and focus an analysis. A comparison visualization summarizes results of coding and reveals relationships between codes. Editing features support efficient assignment, refinement, and aggregation of codes. We demonstrate the practical applicability and usefulness of our approach in a case study and describe expert feedback.",
                "AuthorNamesDeduped": "Tanja Blascheck;Fabian Beck 0001;Sebastian Baltes;Thomas Ertl;Daniel Weiskopf",
                "AuthorNames": "Tanja Blascheck;Fabian Beck;Sebastian Baltes;Thomas Ertl;Daniel Weiskopf",
                "AuthorAffiliation": "University of Stuttgart, Germany;University of Stuttgart, Germany;University of Trier, Germany;University of Stuttgart, Germany;University of Stuttgart, Germany",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/visual.1990.146402;10.1109/tvcg.2011.226;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.137;10.1109/tvcg.2015.2467611;10.1109/tvcg.2015.2467757;10.1109/tvcg.2010.194;10.1109/tvcg.2014.2346677;10.1109/vast.2008.4677365;10.1109/tvcg.2013.124",
                "AuthorKeywords": null,
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 627,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 994,
                "i": [
                    994
                ]
            }
        },
        {
            "name": "Thomas Ertl",
            "value": 987,
            "numPapers": 297,
            "cluster": "6",
            "visible": 1,
            "index": 232,
            "x": -113.66757842777213,
            "y": -101.63504127202522,
            "vy": 0,
            "vx": 0,
            "r": 2.1364421416234887,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Liu Ren",
            "value": 553,
            "numPapers": 177,
            "cluster": "1",
            "visible": 1,
            "index": 233,
            "x": 152.7959559750796,
            "y": -1.8427798733249952,
            "vy": 0,
            "vx": 0,
            "r": 1.6367299942429476,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Sequence Synopsis: Optimize Visual Summary of Temporal Event Data",
                "DOI": "10.1109/tvcg.2017.2745083",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745083",
                "FirstPage": 45,
                "LastPage": 55,
                "PaperType": "J",
                "Abstract": "Event sequences analysis plays an important role in many application domains such as customer behavior analysis, electronic health record analysis and vehicle fault diagnosis. Real-world event sequence data is often noisy and complex with high event cardinality, making it a challenging task to construct concise yet comprehensive overviews for such data. In this paper, we propose a novel visualization technique based on the minimum description length (MDL) principle to construct a coarse-level overview of event sequence data while balancing the information loss in it. The method addresses a fundamental trade-off in visualization design: reducing visual clutter vs. increasing the information content in a visualization. The method enables simultaneous sequence clustering and pattern extraction and is highly tolerant to noises such as missing or additional events in the data. Based on this approach we propose a visual analytics framework with multiple levels-of-detail to facilitate interactive data exploration. We demonstrate the usability and effectiveness of our approach through case studies with two real-world datasets. One dataset showcases a new application domain for event sequence visualization, i.e., fault development path analysis in vehicles for predictive maintenance. We also discuss the strengths and limitations of the proposed method based on user feedback.",
                "AuthorNamesDeduped": "Yuanzhe Chen;Panpan Xu;Liu Ren",
                "AuthorNames": "Yuanzhe Chen;Panpan Xu;Liu Ren",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Bosch Research North America, Palo Alto, CA;Bosch Research North America, Palo Alto, CA",
                "InternalReferences": "0.1109/vast.2016.7883512;10.1109/tvcg.2013.214;10.1109/tvcg.2014.2346682;10.1109/tvcg.2015.2467622;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598797;10.1109/tvcg.2015.2467991;10.1109/tvcg.2013.200;10.1109/vast.2015.7347682;10.1109/infvis.2000.885091;10.1109/tvcg.2016.2598591;10.1109/tvcg.2016.2598591;10.1109/tvcg.2009.117;10.1109/tvcg.2009.187;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2009.5332595;10.1109/tvcg.2013.167",
                "AuthorKeywords": "Time Series Data,Data Transformation and Representation,Visual Knowledge Representation,Visual Analytics",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 71,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2330,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 846,
                "i": [
                    846
                ]
            }
        },
        {
            "name": "Yuanzhe Chen",
            "value": 245,
            "numPapers": 47,
            "cluster": "3",
            "visible": 1,
            "index": 234,
            "x": -111.6605392481811,
            "y": 104.79467531704752,
            "vy": 0,
            "vx": 0,
            "r": 1.2820955670696603,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Sequence Synopsis: Optimize Visual Summary of Temporal Event Data",
                "DOI": "10.1109/tvcg.2017.2745083",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745083",
                "FirstPage": 45,
                "LastPage": 55,
                "PaperType": "J",
                "Abstract": "Event sequences analysis plays an important role in many application domains such as customer behavior analysis, electronic health record analysis and vehicle fault diagnosis. Real-world event sequence data is often noisy and complex with high event cardinality, making it a challenging task to construct concise yet comprehensive overviews for such data. In this paper, we propose a novel visualization technique based on the minimum description length (MDL) principle to construct a coarse-level overview of event sequence data while balancing the information loss in it. The method addresses a fundamental trade-off in visualization design: reducing visual clutter vs. increasing the information content in a visualization. The method enables simultaneous sequence clustering and pattern extraction and is highly tolerant to noises such as missing or additional events in the data. Based on this approach we propose a visual analytics framework with multiple levels-of-detail to facilitate interactive data exploration. We demonstrate the usability and effectiveness of our approach through case studies with two real-world datasets. One dataset showcases a new application domain for event sequence visualization, i.e., fault development path analysis in vehicles for predictive maintenance. We also discuss the strengths and limitations of the proposed method based on user feedback.",
                "AuthorNamesDeduped": "Yuanzhe Chen;Panpan Xu;Liu Ren",
                "AuthorNames": "Yuanzhe Chen;Panpan Xu;Liu Ren",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Bosch Research North America, Palo Alto, CA;Bosch Research North America, Palo Alto, CA",
                "InternalReferences": "0.1109/vast.2016.7883512;10.1109/tvcg.2013.214;10.1109/tvcg.2014.2346682;10.1109/tvcg.2015.2467622;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598797;10.1109/tvcg.2015.2467991;10.1109/tvcg.2013.200;10.1109/vast.2015.7347682;10.1109/infvis.2000.885091;10.1109/tvcg.2016.2598591;10.1109/tvcg.2016.2598591;10.1109/tvcg.2009.117;10.1109/tvcg.2009.187;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2009.5332595;10.1109/tvcg.2013.167",
                "AuthorKeywords": "Time Series Data,Data Transformation and Representation,Visual Knowledge Representation,Visual Analytics",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 71,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2330,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 846,
                "i": [
                    846
                ]
            }
        },
        {
            "name": "Panpan Xu",
            "value": 533,
            "numPapers": 124,
            "cluster": "3",
            "visible": 1,
            "index": 235,
            "x": 11.571815230778894,
            "y": -153.02317828441778,
            "vy": 0,
            "vx": 0,
            "r": 1.6137017846862407,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Sequence Synopsis: Optimize Visual Summary of Temporal Event Data",
                "DOI": "10.1109/tvcg.2017.2745083",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745083",
                "FirstPage": 45,
                "LastPage": 55,
                "PaperType": "J",
                "Abstract": "Event sequences analysis plays an important role in many application domains such as customer behavior analysis, electronic health record analysis and vehicle fault diagnosis. Real-world event sequence data is often noisy and complex with high event cardinality, making it a challenging task to construct concise yet comprehensive overviews for such data. In this paper, we propose a novel visualization technique based on the minimum description length (MDL) principle to construct a coarse-level overview of event sequence data while balancing the information loss in it. The method addresses a fundamental trade-off in visualization design: reducing visual clutter vs. increasing the information content in a visualization. The method enables simultaneous sequence clustering and pattern extraction and is highly tolerant to noises such as missing or additional events in the data. Based on this approach we propose a visual analytics framework with multiple levels-of-detail to facilitate interactive data exploration. We demonstrate the usability and effectiveness of our approach through case studies with two real-world datasets. One dataset showcases a new application domain for event sequence visualization, i.e., fault development path analysis in vehicles for predictive maintenance. We also discuss the strengths and limitations of the proposed method based on user feedback.",
                "AuthorNamesDeduped": "Yuanzhe Chen;Panpan Xu;Liu Ren",
                "AuthorNames": "Yuanzhe Chen;Panpan Xu;Liu Ren",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Bosch Research North America, Palo Alto, CA;Bosch Research North America, Palo Alto, CA",
                "InternalReferences": "0.1109/vast.2016.7883512;10.1109/tvcg.2013.214;10.1109/tvcg.2014.2346682;10.1109/tvcg.2015.2467622;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598797;10.1109/tvcg.2015.2467991;10.1109/tvcg.2013.200;10.1109/vast.2015.7347682;10.1109/infvis.2000.885091;10.1109/tvcg.2016.2598591;10.1109/tvcg.2016.2598591;10.1109/tvcg.2009.117;10.1109/tvcg.2009.187;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2009.5332595;10.1109/tvcg.2013.167",
                "AuthorKeywords": "Time Series Data,Data Transformation and Representation,Visual Knowledge Representation,Visual Analytics",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 71,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2330,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 846,
                "i": [
                    846
                ]
            }
        },
        {
            "name": "Xijie Huang",
            "value": 8,
            "numPapers": 20,
            "cluster": "1",
            "visible": 1,
            "index": 236,
            "x": 95.03410592249321,
            "y": 120.90706642505369,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VideoPro: A Visual Analytics Approach for Interactive Video Programming",
                "DOI": "10.1109/tvcg.2023.3326586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326586",
                "FirstPage": 87,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "Constructing supervised machine learning models for real-world video analysis require substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.",
                "AuthorNamesDeduped": "Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorNames": "Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuang University, Chengdu, China;Sichuang University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Interactive machine learning,data programming,video exploration and analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 381,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 31,
                "i": [
                    31
                ]
            }
        },
        {
            "name": "Zixin Chen",
            "value": 8,
            "numPapers": 20,
            "cluster": "1",
            "visible": 1,
            "index": 237,
            "x": -152.06722116163633,
            "y": -25.011202453660367,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VideoPro: A Visual Analytics Approach for Interactive Video Programming",
                "DOI": "10.1109/tvcg.2023.3326586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326586",
                "FirstPage": 87,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "Constructing supervised machine learning models for real-world video analysis require substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.",
                "AuthorNamesDeduped": "Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorNames": "Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuang University, Chengdu, China;Sichuang University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Interactive machine learning,data programming,video exploration and analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 381,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 31,
                "i": [
                    31
                ]
            }
        },
        {
            "name": "Fengjie Wang",
            "value": 8,
            "numPapers": 20,
            "cluster": "1",
            "visible": 1,
            "index": 238,
            "x": 129.2958058830074,
            "y": -84.45468951493268,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VideoPro: A Visual Analytics Approach for Interactive Video Programming",
                "DOI": "10.1109/tvcg.2023.3326586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326586",
                "FirstPage": 87,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "Constructing supervised machine learning models for real-world video analysis require substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.",
                "AuthorNamesDeduped": "Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorNames": "Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuang University, Chengdu, China;Sichuang University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Interactive machine learning,data programming,video exploration and analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 381,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 31,
                "i": [
                    31
                ]
            }
        },
        {
            "name": "Min Zhu",
            "value": 82,
            "numPapers": 25,
            "cluster": "1",
            "visible": 1,
            "index": 239,
            "x": -38.37056961782545,
            "y": 149.92564619571797,
            "vy": 0,
            "vx": 0,
            "r": 1.0944156591824985,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VideoPro: A Visual Analytics Approach for Interactive Video Programming",
                "DOI": "10.1109/tvcg.2023.3326586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326586",
                "FirstPage": 87,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "Constructing supervised machine learning models for real-world video analysis require substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.",
                "AuthorNamesDeduped": "Jianben He;Xingbo Wang 0001;Kamkwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorNames": "Jianben He;Xingbo Wang;Kam Kwai Wong;Xijie Huang;Changjian Chen;Zixin Chen;Fengjie Wang;Min Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China;Tsinghua University, Beijing, China;Hong Kong University of Science and Technology, Hong Kong, China;Sichuang University, Chengdu, China;Sichuang University, Chengdu, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "10.1109/vast.2016.7883520;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114806;10.1109/tvcg.2023.3327168;10.1109/tvcg.2022.3209466;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114793;10.1109/tvcg.2019.2934266;10.1109/tvcg.2016.2598695;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114789;10.1109/tvcg.2011.208;10.1109/tvcg.2021.3114822;10.1109/vast47406.2019.8986917;10.1109/tvcg.2021.3114781;10.1109/tvcg.2021.3114794;10.1109/tvcg.2022.3209452;10.1109/tvcg.2019.2934656;10.1109/tvcg.2022.3209483;10.1109/tvcg.2022.3209391",
                "AuthorKeywords": "Interactive machine learning,data programming,video exploration and analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 381,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 31,
                "i": [
                    31
                ]
            }
        },
        {
            "name": "Shuainan Ye",
            "value": 231,
            "numPapers": 52,
            "cluster": "3",
            "visible": 1,
            "index": 240,
            "x": -73.1322554843374,
            "y": -136.7540610284521,
            "vy": 0,
            "vx": 0,
            "r": 1.2659758203799654,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Augmenting Sports Videos with VisCommentator",
                "DOI": "10.1109/tvcg.2021.3114806",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114806",
                "FirstPage": 824,
                "LastPage": 834,
                "PaperType": "J",
                "Abstract": "Visualizing data in sports videos is gaining traction in sports analytics, given its ability to communicate insights and explicate player strategies engagingly. However, augmenting sports videos with such data visualizations is challenging, especially for sports analysts, as it requires considerable expertise in video editing. To ease the creation process, we present a design space that characterizes augmented sports videos at an element-level <i>(what the constituents are)</i> and clip-level <i>(how those constituents are organized)</i>. We do so by systematically reviewing 233 examples of augmented sports videos collected from TV channels, teams, and leagues. The design space guides selection of data insights and visualizations for various purposes. Informed by the design space and close collaboration with domain experts, we design VisCommentator, a fast prototyping tool, to eases the creation of augmented table tennis videos by leveraging machine learning-based data extractors and design space-based visualization recommendations. With VisCommentator, sports analysts can create an augmented video by <i>selecting the data</i> to visualize instead of manually <i>drawing the graphical marks</i>. Our system can be generalized to other racket sports <i>(e.g</i>., tennis, badminton) once the underlying datasets and models are available. A user study with seven domain experts shows high satisfaction with our system, confirms that the participants can reproduce augmented sports videos in a short period, and provides insightful implications into future improvements and opportunities.",
                "AuthorNamesDeduped": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang 0051;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "Department of Cognitive Science and Design Lab, State Key Lab of CAD & CG, Zhejiang University and Hong Kong University of Science and Technology, University of California, San Diego, United States;State Key Lab of CAD & CG, Zhejiang University, China;State Key Lab of CAD & CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California, San Diego, United States;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD & CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2019.2934810;10.1109/tvcg.2014.2346250;10.1109/tvcg.2018.2865240;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2019.2934656;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Augmented Sports Videos,Video-based Visualization,Sports visualization,Intelligent Design Tool,Storytelling",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 1771,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 259,
                "i": [
                    259
                ]
            }
        },
        {
            "name": "Xiangtong Chu",
            "value": 231,
            "numPapers": 52,
            "cluster": "3",
            "visible": 1,
            "index": 241,
            "x": 146.6053356853214,
            "y": 51.54488867573809,
            "vy": 0,
            "vx": 0,
            "r": 1.2659758203799654,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Augmenting Sports Videos with VisCommentator",
                "DOI": "10.1109/tvcg.2021.3114806",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114806",
                "FirstPage": 824,
                "LastPage": 834,
                "PaperType": "J",
                "Abstract": "Visualizing data in sports videos is gaining traction in sports analytics, given its ability to communicate insights and explicate player strategies engagingly. However, augmenting sports videos with such data visualizations is challenging, especially for sports analysts, as it requires considerable expertise in video editing. To ease the creation process, we present a design space that characterizes augmented sports videos at an element-level <i>(what the constituents are)</i> and clip-level <i>(how those constituents are organized)</i>. We do so by systematically reviewing 233 examples of augmented sports videos collected from TV channels, teams, and leagues. The design space guides selection of data insights and visualizations for various purposes. Informed by the design space and close collaboration with domain experts, we design VisCommentator, a fast prototyping tool, to eases the creation of augmented table tennis videos by leveraging machine learning-based data extractors and design space-based visualization recommendations. With VisCommentator, sports analysts can create an augmented video by <i>selecting the data</i> to visualize instead of manually <i>drawing the graphical marks</i>. Our system can be generalized to other racket sports <i>(e.g</i>., tennis, badminton) once the underlying datasets and models are available. A user study with seven domain experts shows high satisfaction with our system, confirms that the participants can reproduce augmented sports videos in a short period, and provides insightful implications into future improvements and opportunities.",
                "AuthorNamesDeduped": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang 0051;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "Department of Cognitive Science and Design Lab, State Key Lab of CAD & CG, Zhejiang University and Hong Kong University of Science and Technology, University of California, San Diego, United States;State Key Lab of CAD & CG, Zhejiang University, China;State Key Lab of CAD & CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California, San Diego, United States;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD & CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2019.2934810;10.1109/tvcg.2014.2346250;10.1109/tvcg.2018.2865240;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2019.2934656;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Augmented Sports Videos,Video-based Visualization,Sports visualization,Intelligent Design Tool,Storytelling",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 1771,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 259,
                "i": [
                    259
                ]
            }
        },
        {
            "name": "Hui Zhang 0051",
            "value": 478,
            "numPapers": 118,
            "cluster": "3",
            "visible": 1,
            "index": 242,
            "x": -143.21587943937084,
            "y": 61.14909546679815,
            "vy": 0,
            "vx": 0,
            "r": 1.5503742084052965,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Augmenting Sports Videos with VisCommentator",
                "DOI": "10.1109/tvcg.2021.3114806",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114806",
                "FirstPage": 824,
                "LastPage": 834,
                "PaperType": "J",
                "Abstract": "Visualizing data in sports videos is gaining traction in sports analytics, given its ability to communicate insights and explicate player strategies engagingly. However, augmenting sports videos with such data visualizations is challenging, especially for sports analysts, as it requires considerable expertise in video editing. To ease the creation process, we present a design space that characterizes augmented sports videos at an element-level <i>(what the constituents are)</i> and clip-level <i>(how those constituents are organized)</i>. We do so by systematically reviewing 233 examples of augmented sports videos collected from TV channels, teams, and leagues. The design space guides selection of data insights and visualizations for various purposes. Informed by the design space and close collaboration with domain experts, we design VisCommentator, a fast prototyping tool, to eases the creation of augmented table tennis videos by leveraging machine learning-based data extractors and design space-based visualization recommendations. With VisCommentator, sports analysts can create an augmented video by <i>selecting the data</i> to visualize instead of manually <i>drawing the graphical marks</i>. Our system can be generalized to other racket sports <i>(e.g</i>., tennis, badminton) once the underlying datasets and models are available. A user study with seven domain experts shows high satisfaction with our system, confirms that the participants can reproduce augmented sports videos in a short period, and provides insightful implications into future improvements and opportunities.",
                "AuthorNamesDeduped": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang 0051;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "Department of Cognitive Science and Design Lab, State Key Lab of CAD & CG, Zhejiang University and Hong Kong University of Science and Technology, University of California, San Diego, United States;State Key Lab of CAD & CG, Zhejiang University, China;State Key Lab of CAD & CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California, San Diego, United States;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD & CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2019.2934810;10.1109/tvcg.2014.2346250;10.1109/tvcg.2018.2865240;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2019.2934656;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Augmented Sports Videos,Video-based Visualization,Sports visualization,Intelligent Design Tool,Storytelling",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 1771,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 259,
                "i": [
                    259
                ]
            }
        },
        {
            "name": "Liang Gou",
            "value": 268,
            "numPapers": 102,
            "cluster": "1",
            "visible": 1,
            "index": 243,
            "x": 64.42974724067375,
            "y": -142.1225093730859,
            "vy": 0,
            "vx": 0,
            "r": 1.3085780080598735,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Concept Programming: A Visual Analytics Approach to Injecting Human Intelligence at Scale",
                "DOI": "10.1109/tvcg.2022.3209466",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209466",
                "FirstPage": 74,
                "LastPage": 83,
                "PaperType": "J",
                "Abstract": "Data-centric AI has emerged as a new research area to systematically engineer the data to land AI models for real-world applications. As a core method for data-centric AI, data programming helps experts inject domain knowledge into data and label data at scale using carefully designed labeling functions (e.g., heuristic rules, logistics). Though data programming has shown great success in the NLP domain, it is challenging to program image data because of a) the challenge to describe images using visual vocabulary without human annotations and b) lacking efficient tools for data programming of images. We present Visual Concept Programming, a first-of-its-kind visual analytics approach of using visual concepts to program image data at scale while requiring a few human efforts. Our approach is built upon three unique components. It first uses a self-supervised learning approach to learn visual representation at the pixel level and extract a dictionary of visual concepts from images without using any human annotations. The visual concepts serve as building blocks of labeling functions for experts to inject their domain knowledge. We then design interactive visualizations to explore and understand visual concepts and compose labeling functions with concepts without writing code. Finally, with the composed labeling functions, users can label the image data at scale and use the labeled data to refine the pixel-wise visual representation and concept quality. We evaluate the learned pixel-wise visual representation for the downstream task of semantic segmentation to show the effectiveness and usefulness of our approach. In addition, we demonstrate how our approach tackles real-world problems of image retrieval for autonomous driving.",
                "AuthorNamesDeduped": "Md. Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren",
                "AuthorNames": "Md Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren",
                "AuthorAffiliation": "University of Maryland, USA;Bosch Research North America, USA;Robert Bosch GmbH, Germany;Bosch Research North America, USA;Bosch Research North America, USA",
                "InternalReferences": "0.1109/tvcg.2017.2744818;10.1109/tvcg.2020.3030350;10.1109/tvcg.2021.3114855;10.1109/tvcg.2019.2934659;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114858;10.1109/tvcg.2017.2744158;10.1109/tvcg.2019.2934619;10.1109/vast47406.2019.8986943;10.1109/tvcg.2021.3114837",
                "AuthorKeywords": "Visual concept programming,data-centric AI,data programming,self-supervised learning,semantic segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1576,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 164,
                "i": [
                    164
                ]
            }
        },
        {
            "name": "Shixia Liu",
            "value": 2244,
            "numPapers": 468,
            "cluster": "1",
            "visible": 1,
            "index": 244,
            "x": 48.59335998081015,
            "y": 148.62262736802697,
            "vy": 0,
            "vx": 0,
            "r": 3.5837651122625216,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Revisiting Dimensionality Reduction Techniques for Visual Cluster Analysis: An Empirical Study",
                "DOI": "10.1109/tvcg.2021.3114694",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114694",
                "FirstPage": 529,
                "LastPage": 539,
                "PaperType": "J",
                "Abstract": "Dimensionality Reduction (DR) techniques can generate 2D projections and enable visual exploration of cluster structures of high-dimensional datasets. However, different DR techniques would yield various patterns, which significantly affect the performance of visual cluster analysis tasks. We present the results of a user study that investigates the influence of different DR techniques on visual cluster analysis. Our study focuses on the most concerned property types, namely the linearity and locality, and evaluates twelve representative DR techniques that cover the concerned properties. Four controlled experiments were conducted to evaluate how the DR techniques facilitate the tasks of 1) cluster identification, 2) membership identification, 3) distance comparison, and 4) density comparison, respectively. We also evaluated users' subjective preference of the DR techniques regarding the quality of projected clusters. The results show that: 1) Non-linear and Local techniques are preferred in cluster identification and membership identification; 2) Linear techniques perform better than non-linear techniques in density comparison; 3) UMAP (Uniform Manifold Approximation and Projection) and t-SNE (t-Distributed Stochastic Neighbor Embedding) perform the best in cluster identification and membership identification; 4) NMF (Nonnegative Matrix Factorization) has competitive performance in distance comparison; 5) t-SNLE (t-Distributed Stochastic Neighbor Linear Embedding) has competitive performance in density comparison.",
                "AuthorNamesDeduped": "Jiazhi Xia;Yuchen Zhang;Jie Song;Yang Chen;Yunhai Wang;Shixia Liu",
                "AuthorNames": "Jiazhi Xia;Yuchen Zhang;Jie Song;Yang Chen;Yunhai Wang;Shixia Liu",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;I4 data, United States;School of Computer Science and Technology, Shandong University, China;School of Software, Tsinghua University, China",
                "InternalReferences": "0.1109/tvcg.2015.2467552;10.1109/tvcg.2011.220;10.1109/infvis.2003.1249017;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2745258;10.1109/vast47406.2019.8986943;10.1109/tvcg.2020.3030432;10.1109/tvcg.2019.2934660;10.1109/tvcg.2018.2865020",
                "AuthorKeywords": "Dimensionality reduction,visual cluster analysis,perception-based evaluation",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 1446,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 260,
                "i": [
                    260
                ]
            }
        },
        {
            "name": "Min Chen 0001",
            "value": 613,
            "numPapers": 180,
            "cluster": "6",
            "visible": 1,
            "index": 245,
            "x": -136.50266681403141,
            "y": -76.9221811485967,
            "vy": 0,
            "vx": 0,
            "r": 1.7058146229130684,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Dashboard Design Patterns",
                "DOI": "10.1109/tvcg.2022.3209448",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209448",
                "FirstPage": 342,
                "LastPage": 352,
                "PaperType": "J",
                "Abstract": "This paper introduces design patterns for dashboards to inform dashboard design processes. Despite a growing number of public examples, case studies, and general guidelines there is surprisingly little design guidance for dashboards. Such guidance is necessary to inspire designs and discuss tradeoffs in, e.g., screenspace, interaction, or information shown. Based on a systematic review of 144 dashboards, we report on eight groups of design patterns that provide common solutions in dashboard design. We discuss combinations of these patterns in “dashboard genres” such as narrative, analytical, or embedded dashboard. We ran a 2-week dashboard design workshop with 23 participants of varying expertise working on their own data and dashboards. We discuss the application of patterns for the dashboard design processes, as well as general design tradeoffs and common challenges. Our work complements previous surveys and aims to support dashboard designers and researchers in co-creation, structured design decisions, as well as future user evaluations about dashboard design guidelines. Detailed pattern descriptions and workshop material can be found online: https://dashboarddesignpatterns.github.io",
                "AuthorNamesDeduped": "Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen 0001",
                "AuthorNames": "Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen",
                "AuthorAffiliation": "University of Edinburgh, Scotland;University of Glasgow, Scotland;King's College London, England;University of Warwick, England;University of Oxford, England;University of Oxford, England;University of Oxford, England",
                "InternalReferences": "0.1109/visual.1991.175794;10.1109/infvis.1997.636792;10.1109/tvcg.2020.3030424;10.1109/tvcg.2016.2599338;10.1109/tvcg.2021.3114828;10.1109/tvcg.2018.2864903;10.1109/tvcg.2013.120;10.1109/tvcg.2010.179;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Dashboards,Design Patterns,Data Visualization,Storytelling,Visual Analytics,Qualitative Evaluation,Education",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 4205,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 134,
                "i": [
                    134
                ]
            }
        },
        {
            "name": "Tan Tang",
            "value": 273,
            "numPapers": 149,
            "cluster": "3",
            "visible": 1,
            "index": 246,
            "x": 152.92350997179966,
            "y": -35.55840404046407,
            "vy": 0,
            "vx": 0,
            "r": 1.31433506044905,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "Yanhong Wu",
            "value": 263,
            "numPapers": 60,
            "cluster": "1",
            "visible": 1,
            "index": 247,
            "x": -88.92150138912545,
            "y": 129.78045534942373,
            "vy": 0,
            "vx": 0,
            "r": 1.3028209556706967,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Duet: Helping Data Analysis Novices Conduct Pairwise Comparisons by Minimal Specification",
                "DOI": "10.1109/tvcg.2018.2864526",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864526",
                "FirstPage": 427,
                "LastPage": 437,
                "PaperType": "J",
                "Abstract": "Data analysis novices often encounter barriers in executing low-level operations for pairwise comparisons. They may also run into barriers in interpreting the artifacts (e.g., visualizations) created as a result of the operations. We developed Duet, a visual analysis system designed to help data analysis novices conduct pairwise comparisons by addressing execution and interpretation barriers. To reduce the barriers in executing low-level operations during pairwise comparison, Duet employs minimal specification: when one object group (i.e. a group of records in a data table) is specified, Duet recommends object groups that are similar to or different from the specified one; when two object groups are specified, Duet recommends similar and different attributes between them. To lower the barriers in interpreting its recommendations, Duet explains the recommended groups and attributes using both visualizations and textual descriptions. We conducted a qualitative evaluation with eight participants to understand the effectiveness of Duet. The results suggest that minimal specification is easy to use and Duet's explanations are helpful for interpreting the recommendations despite some usability issues.",
                "AuthorNamesDeduped": "Po-Ming Law;Rahul C. Basole;Yanhong Wu",
                "AuthorNames": "Po-Ming Law;Rahul C. Basole;Yanhong Wu",
                "AuthorAffiliation": "Georgia Institute of Technology;Georgia Institute of Technology;Visa Research",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2016.2598468;10.1109/vast.2011.6102435;10.1109/tvcg.2017.2744199;10.1109/tvcg.2010.164;10.1109/tvcg.2017.2744684;10.1109/tvcg.2008.109;10.1109/tvcg.2015.2467195;10.1109/tvcg.2017.2745219;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Pairwise comparison,novices,data analysis,automatic insight generation",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 614,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 756,
                "i": [
                    756
                ]
            }
        },
        {
            "name": "Yuhong Li",
            "value": 173,
            "numPapers": 30,
            "cluster": "3",
            "visible": 1,
            "index": 248,
            "x": -22.142086650044483,
            "y": -156.0760327493684,
            "vy": 0,
            "vx": 0,
            "r": 1.1991940126655152,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VideoModerator: A Risk-aware Framework for Multimodal Video Moderation in E-Commerce",
                "DOI": "10.1109/tvcg.2021.3114781",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114781",
                "FirstPage": 846,
                "LastPage": 856,
                "PaperType": "J",
                "Abstract": "Video moderation, which refers to remove deviant or explicit content from e-commerce livestreams, has become prevalent owing to social and engaging features. However, this task is tedious and time consuming due to the difficulties associated with watching and reviewing multimodal video content, including video frames and audio clips. To ensure effective video moderation, we propose VideoModerator, a risk-aware framework that seamlessly integrates human knowledge with machine insights. This framework incorporates a set of advanced machine learning models to extract the risk-aware features from multimodal video content and discover potentially deviant videos. Moreover, this framework introduces an interactive visualization interface with three views, namely, a video view, a frame view, and an audio view. In the video view, we adopt a segmented timeline and highlight high-risk periods that may contain deviant information. In the frame view, we present a novel visual summarization method that combines risk-aware features and video context to enable quick video navigation. In the audio view, we employ a storyline-based design to provide a multi-faceted overview which can be used to explore audio content. Furthermore, we report the usage of VideoModerator through a case scenario and conduct experiments and a controlled user study to validate its effectiveness.",
                "AuthorNamesDeduped": "Tan Tang;Yanhong Wu;Yingcai Wu;Lingyun Yu 0001;Yuhong Li",
                "AuthorNames": "Tan Tang;Yanhong Wu;Yingcai Wu;Lingyun Yu;Yuhong Li",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University and Zhejiang Lab, China;State Key Lab of CAD&CG, Zhejiang University and Zhejiang Lab, China;State Key Lab of CAD&CG, Zhejiang University and Zhejiang Lab, China;Department of Computing, Xi'an Jiaotong-Liverpool University, China;Alibaba Group, China",
                "InternalReferences": "0.1109/tvcg.2021.3114806;10.1109/visual.2003.1250401;10.1109/vast.2012.6400492;10.1109/tvcg.2021.3114853;10.1109/tvcg.2015.2467553;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.168;10.1109/tvcg.2008.185;10.1109/tvcg.2021.3114822;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/vast.2014.7042476;10.1109/tvcg.2019.2934656;10.1109/tvcg.2020.3030428;10.1109/tvcg.2020.3030458;10.1109/tvcg.2006.194",
                "AuthorKeywords": "video moderation,video visualization,e-commerce livestreaming",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 941,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 298,
                "i": [
                    298
                ]
            }
        },
        {
            "name": "Dongyu Liu",
            "value": 278,
            "numPapers": 87,
            "cluster": "3",
            "visible": 1,
            "index": 249,
            "x": 121.99946425484735,
            "y": 100.3301087487212,
            "vy": 0,
            "vx": 0,
            "r": 1.3200921128382268,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "RASIPAM: Interactive Pattern Mining of Multivariate Event Sequences in Racket Sports",
                "DOI": "10.1109/tvcg.2022.3209452",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209452",
                "FirstPage": 940,
                "LastPage": 950,
                "PaperType": "J",
                "Abstract": "Experts in racket sports like tennis and badminton use tactical analysis to gain insight into competitors' playing styles. Many data-driven methods apply pattern mining to racket sports data — which is often recorded as multivariate event sequences — to uncover sports tactics. However, tactics obtained in this way are often inconsistent with those deduced by experts through their domain knowledge, which can be confusing to those experts. This work introduces RASIPAM, a RAcket-Sports Interactive PAttern Mining system, which allows experts to incorporate their knowledge into data mining algorithms to discover meaningful tactics interactively. RASIPAM consists of a constraint-based pattern mining algorithm that responds to the analysis demands of experts: Experts provide suggestions for finding tactics in intuitive written language, and these suggestions are translated into constraints to run the algorithm. RASIPAM further introduces a tailored visual interface that allows experts to compare the new tactics with the original ones and decide whether to apply a given adjustment. This interactive workflow iteratively progresses until experts are satisfied with all tactics. We conduct a quantitative experiment to show that our algorithm supports real-time interaction. Two case studies in tennis and in badminton respectively, each involving two domain experts, are conducted to show the effectiveness and usefulness of the system.",
                "AuthorNamesDeduped": "Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu",
                "AuthorNames": "Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;MIT, USA;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2021.3114861;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2018.2865018;10.1109/tvcg.2015.2467325;10.1109/tvcg.2021.3114848;10.1109/tvcg.2012.271;10.1109/tvcg.2012.213;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2019.2934630;10.1109/vast50239.2020.00009;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209447;10.1109/tvcg.2019.2934668;10.1109/tvcg.2019.2934267;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports Analytics,Multivariate Event Sequence,Interactive Pattern Mining,Comparative Visual Design",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 500,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 215,
                "i": [
                    215
                ]
            }
        },
        {
            "name": "Tica Lin",
            "value": 8,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 250,
            "x": -158.04640010436498,
            "y": 8.446029484377705,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching",
                "DOI": "10.1109/tvcg.2023.3327161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327161",
                "FirstPage": 458,
                "LastPage": 468,
                "PaperType": "J",
                "Abstract": "Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool, that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) with flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence.",
                "AuthorNamesDeduped": "Tica Lin;Alexandre Aouididi;Zhutian Chen;Johanna Beyer;Hanspeter Pfister;Jui-Hsien Wang",
                "AuthorNames": "Tica Lin;Alexandre Aouididi;Chen Zhu-Tian;Johanna Beyer;Hanspeter Pfister;Jui-Hsien Wang",
                "AuthorAffiliation": "Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Harvard John A. Paulson School of Engineering and Applied Sciences, USA;Adobe Research, USA",
                "InternalReferences": "10.1109/tvcg.2021.3114861;10.1109/vast.2014.7042478;10.1109/tvcg.2019.2934395;10.1109/tvcg.2020.3030435;10.1109/tvcg.2022.3209353;10.1109/visual.2001.964496;10.1109/tvcg.2017.2745181;10.1109/tvcg.2009.108;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030427;10.1109/tvcg.2020.3030392",
                "AuthorKeywords": "Sports Analytics,Immersive Analytics,Data Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 375,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 32,
                "i": [
                    32
                ]
            }
        },
        {
            "name": "Xiao Xie",
            "value": 501,
            "numPapers": 204,
            "cluster": "3",
            "visible": 1,
            "index": 251,
            "x": 111.05428956788678,
            "y": -113.21194622729509,
            "vy": 0,
            "vx": 0,
            "r": 1.5768566493955096,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Compass: Towards Better Causal Analysis of Urban Time Series",
                "DOI": "10.1109/tvcg.2021.3114875",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114875",
                "FirstPage": 1051,
                "LastPage": 1061,
                "PaperType": "J",
                "Abstract": "The spatial time series generated by city sensors allow us to observe urban phenomena like environmental pollution and traffic congestion at an unprecedented scale. However, recovering causal relations from these observations to explain the sources of urban phenomena remains a challenging task because these causal relations tend to be time-varying and demand proper time series partitioning for effective analyses. The prior approaches extract one causal graph given long-time observations, which cannot be directly applied to capturing, interpreting, and validating dynamic urban causality. This paper presents Compass, a novel visual analytics approach for in-depth analyses of the dynamic causality in urban time series. To develop Compass, we identify and address three challenges: detecting urban causality, interpreting dynamic causal relations, and unveiling suspicious causal relations. First, multiple causal graphs over time among urban time series are obtained with a causal detection framework extended from the Granger causality test. Then, a dynamic causal graph visualization is designed to reveal the time-varying causal relations across these causal graphs and facilitate the exploration of the graphs along the time. Finally, a tailored multi-dimensional visualization is developed to support the identification of spurious causal relations, thereby improving the reliability of causal analyses. The effectiveness of Compass is evaluated with two case studies conducted on the real-world urban datasets, including the air pollution and traffic speed datasets, and positive feedback was received from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Xiao Xie;Jie Bao 0003;Yu Zheng 0004;Mingliang Xu;Wei Chen 0001;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Xiao Xie;Jie Bao;Yu Zheng;Mingliang Xu;Wei Chen;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University, Hangzhou and Zhejiang Lab, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou and Zhejiang Lab, Hangzhou, China;Department of Sport Science, Zhejiang University, Hangzhou, China;JD Intelligent Cities Research, JD Tech, Beijing, China;JD Intelligent Cities Research, JD Tech, Beijing, China;School of Information Engineering, Zhengzhou University, Zhengzhou, China and Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou and Zhejiang Lab, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou and Zhejiang Lab, Hangzhou, China",
                "InternalReferences": "0.1109/vast.2014.7042488;10.1109/vast.2011.6102454;10.1109/tvcg.2011.226;10.1109/tvcg.2017.2744419;10.1109/tvcg.2015.2467619;10.1109/tvcg.2019.2934670;10.1109/infvis.2003.1249025;10.1109/vast.2015.7347636;10.1109/tvcg.2015.2467771;10.1109/tvcg.2020.3030465;10.1109/tvcg.2007.70528;10.1109/tvcg.2015.2467671;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865018;10.1109/vast.2012.6400491;10.1109/tvcg.2016.2598585;10.1109/tvcg.2015.2467592;10.1109/tvcg.2015.2467112;10.1109/tvcg.2012.265;10.1109/tvcg.2018.2864844;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2021.3114790;10.1109/tvcg.2018.2865126;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934399;10.1109/tvcg.2020.3030392;10.1109/tvcg.2021.3114877;10.1109/tvcg.2020.3030428;10.1109/tvcg.2020.3030440;10.1109/tvcg.2020.3030458;10.1109/tvcg.2019.2934630",
                "AuthorKeywords": "Visual causal analysis,urban time series,causal graph analysis",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1920,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 270,
                "i": [
                    270
                ]
            }
        },
        {
            "name": "Haolin Lu",
            "value": 62,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 252,
            "x": -5.425159581362352,
            "y": 158.8098474387428,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations",
                "DOI": "10.1109/tvcg.2021.3114861",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114861",
                "FirstPage": 118,
                "LastPage": 128,
                "PaperType": "J",
                "Abstract": "Tactic analysis is a major issue in badminton as the effective usage of tactics is the key to win. The tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations largely limited the tactic analysis of badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics from multi-levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.",
                "AuthorNamesDeduped": "Xiangtong Chu;Xiao Xie;Shuainan Ye;Haolin Lu;Hongguang Xiao;Zeqing Yuan;Zhutian Chen;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Xiangtong Chu;Xiao Xie;Shuainan Ye;Haolin Lu;Hongguang Xiao;Zeqing Yuan;Zhutian Chen;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China and Department of Sport Science, Zhejiang University, China;Department of Sport Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California San Diego, United States;Department of Sport Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744322;10.1109/tvcg.2019.2934803;10.1109/tvcg.2021.3114806;10.1109/tvcg.2019.2934415;10.1109/tvcg.2018.2864885;10.1109/tvcg.2018.2865191;10.1109/tvcg.2019.2934395;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.265;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030427;10.1109/tvcg.2018.2865192;10.1109/tvcg.2020.3030392;10.1109/tvcg.2018.2865076;10.1109/tvcg.2019.2934630;10.1109/vast50239.2020.00009",
                "AuthorKeywords": "Tactic analysis,stroke sequence visualization,immersive visualization",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2010,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 256,
                "i": [
                    256
                ]
            }
        },
        {
            "name": "Hongguang Xiao",
            "value": 62,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 253,
            "x": -103.47846796604473,
            "y": -121.00498612619343,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations",
                "DOI": "10.1109/tvcg.2021.3114861",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114861",
                "FirstPage": 118,
                "LastPage": 128,
                "PaperType": "J",
                "Abstract": "Tactic analysis is a major issue in badminton as the effective usage of tactics is the key to win. The tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations largely limited the tactic analysis of badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics from multi-levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.",
                "AuthorNamesDeduped": "Xiangtong Chu;Xiao Xie;Shuainan Ye;Haolin Lu;Hongguang Xiao;Zeqing Yuan;Zhutian Chen;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Xiangtong Chu;Xiao Xie;Shuainan Ye;Haolin Lu;Hongguang Xiao;Zeqing Yuan;Zhutian Chen;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China and Department of Sport Science, Zhejiang University, China;Department of Sport Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California San Diego, United States;Department of Sport Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744322;10.1109/tvcg.2019.2934803;10.1109/tvcg.2021.3114806;10.1109/tvcg.2019.2934415;10.1109/tvcg.2018.2864885;10.1109/tvcg.2018.2865191;10.1109/tvcg.2019.2934395;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.265;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030427;10.1109/tvcg.2018.2865192;10.1109/tvcg.2020.3030392;10.1109/tvcg.2018.2865076;10.1109/tvcg.2019.2934630;10.1109/vast50239.2020.00009",
                "AuthorKeywords": "Tactic analysis,stroke sequence visualization,immersive visualization",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2010,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 256,
                "i": [
                    256
                ]
            }
        },
        {
            "name": "Zeqing Yuan",
            "value": 62,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 254,
            "x": 158.3509036897902,
            "y": 19.36469211288396,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations",
                "DOI": "10.1109/tvcg.2021.3114861",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114861",
                "FirstPage": 118,
                "LastPage": 128,
                "PaperType": "J",
                "Abstract": "Tactic analysis is a major issue in badminton as the effective usage of tactics is the key to win. The tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations largely limited the tactic analysis of badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics from multi-levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.",
                "AuthorNamesDeduped": "Xiangtong Chu;Xiao Xie;Shuainan Ye;Haolin Lu;Hongguang Xiao;Zeqing Yuan;Zhutian Chen;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Xiangtong Chu;Xiao Xie;Shuainan Ye;Haolin Lu;Hongguang Xiao;Zeqing Yuan;Zhutian Chen;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China and Department of Sport Science, Zhejiang University, China;Department of Sport Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California San Diego, United States;Department of Sport Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744322;10.1109/tvcg.2019.2934803;10.1109/tvcg.2021.3114806;10.1109/tvcg.2019.2934415;10.1109/tvcg.2018.2864885;10.1109/tvcg.2018.2865191;10.1109/tvcg.2019.2934395;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.265;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030427;10.1109/tvcg.2018.2865192;10.1109/tvcg.2020.3030392;10.1109/tvcg.2018.2865076;10.1109/tvcg.2019.2934630;10.1109/vast50239.2020.00009",
                "AuthorKeywords": "Tactic analysis,stroke sequence visualization,immersive visualization",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2010,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 256,
                "i": [
                    256
                ]
            }
        },
        {
            "name": "Matthias Kraus 0002",
            "value": 149,
            "numPapers": 43,
            "cluster": "3",
            "visible": 1,
            "index": 255,
            "x": -130.09853533372367,
            "y": 92.86749218117099,
            "vy": 0,
            "vx": 0,
            "r": 1.1715601611974669,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Impact of Immersion on Cluster Identification Tasks",
                "DOI": "10.1109/tvcg.2019.2934395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934395",
                "FirstPage": 525,
                "LastPage": 535,
                "PaperType": "J",
                "Abstract": "Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual realities (VRs). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia treatment effectiveness. Our work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations, (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse compared to the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.",
                "AuthorNamesDeduped": "Matthias Kraus 0002;Niklas Weiler;Daniela Oelke;Johannes Kehrer;Daniel A. Keim;Johannes Fuchs 0001",
                "AuthorNames": "M. Kraus;N. Weiler;D. Oelke;J. Kehrer;D. A. Keim;J. Fuchs",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Siemens Corporate Technology, Munich, Germany;Siemens Corporate Technology, Munich, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/tvcg.2018.2864477;10.1109/infvis.1998.729555;10.1109/tvcg.2008.153;10.1109/vast.2008.4677350;10.1109/tvcg.2013.153;10.1109/visual.2002.1183816;10.1109/infvis.1999.801851;10.1109/vast.2007.4389000;10.1109/tvcg.2015.2467202;10.1109/tvcg.2017.2745941",
                "AuthorKeywords": "Virtual reality,evaluation,visual analytics,clustering",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1309,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 523,
                "i": [
                    523
                ]
            }
        },
        {
            "name": "Daniela Oelke",
            "value": 199,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 256,
            "x": 33.26442768006231,
            "y": -156.6635817639757,
            "vy": 0,
            "vx": 0,
            "r": 1.2291306850892343,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Impact of Immersion on Cluster Identification Tasks",
                "DOI": "10.1109/tvcg.2019.2934395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934395",
                "FirstPage": 525,
                "LastPage": 535,
                "PaperType": "J",
                "Abstract": "Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual realities (VRs). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia treatment effectiveness. Our work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations, (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse compared to the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.",
                "AuthorNamesDeduped": "Matthias Kraus 0002;Niklas Weiler;Daniela Oelke;Johannes Kehrer;Daniel A. Keim;Johannes Fuchs 0001",
                "AuthorNames": "M. Kraus;N. Weiler;D. Oelke;J. Kehrer;D. A. Keim;J. Fuchs",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Siemens Corporate Technology, Munich, Germany;Siemens Corporate Technology, Munich, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/tvcg.2018.2864477;10.1109/infvis.1998.729555;10.1109/tvcg.2008.153;10.1109/vast.2008.4677350;10.1109/tvcg.2013.153;10.1109/visual.2002.1183816;10.1109/infvis.1999.801851;10.1109/vast.2007.4389000;10.1109/tvcg.2015.2467202;10.1109/tvcg.2017.2745941",
                "AuthorKeywords": "Virtual reality,evaluation,visual analytics,clustering",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1309,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 523,
                "i": [
                    523
                ]
            }
        },
        {
            "name": "Christophe Hurter",
            "value": 451,
            "numPapers": 105,
            "cluster": "3",
            "visible": 1,
            "index": 257,
            "x": 81.4548939270201,
            "y": 138.25736962396581,
            "vy": 0,
            "vx": 0,
            "r": 1.5192861255037422,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality",
                "DOI": "10.1109/tvcg.2020.3030435",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030435",
                "FirstPage": 1095,
                "LastPage": 1105,
                "PaperType": "J",
                "Abstract": "A fundamental part of data visualization is transforming data to map abstract information onto visual attributes. While this abstraction is a powerful basis for data visualization, the connection between the representation and the original underlying data (i.e., what the quantities and measurements actually correspond with in reality) can be lost. On the other hand, virtual reality (VR) is being increasingly used to represent real and abstract models as natural experiences to users. In this work, we explore the potential of using VR to help restore the basic understanding of units and measures that are often abstracted away in data visualization in an approach we call data visceralization. By building VR prototypes as design probes, we identify key themes and factors for data visceralization. We do this first through a critical reflection by the authors, then by involving external participants. We find that data visceralization is an engaging way of understanding the qualitative aspects of physical measures and their real-life form, which complements analytical and quantitative understanding commonly gained from data visualization. However, data visceralization is most effective when there is a one-to-one mapping between data and representation, with transformations such as scaling affecting this understanding. We conclude with a discussion of future directions for data visceralization.",
                "AuthorNamesDeduped": "Benjamin Lee;Dave Brown;Bongshin Lee;Christophe Hurter;Steven Mark Drucker;Tim Dwyer",
                "AuthorNames": "Benjamin Lee;Dave Brown;Bongshin Lee;Christophe Hurter;Steven Drucker;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Microsoft Research;Microsoft Research;ENAC, French Civil Aviation University;Microsoft Research;Monash University",
                "InternalReferences": "0.1109/tvcg.2013.210;10.1109/infvis.1998.729560;10.1109/tvcg.2018.2865237;10.1109/tvcg.2010.179;10.1109/tvcg.2016.2598498;10.1109/visual.2001.964545",
                "AuthorKeywords": "Data visceralization,virtual reality,exploratory study",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1815,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 368,
                "i": [
                    368
                ]
            }
        },
        {
            "name": "Yalong Yang 0001",
            "value": 137,
            "numPapers": 33,
            "cluster": "3",
            "visible": 1,
            "index": 258,
            "x": -153.7514948834571,
            "y": -47.01571887254581,
            "vy": 0,
            "vx": 0,
            "r": 1.1577432354634427,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Origin-Destination Flow Maps in Immersive Environments",
                "DOI": "10.1109/tvcg.2018.2865192",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865192",
                "FirstPage": 693,
                "LastPage": 703,
                "PaperType": "J",
                "Abstract": "Immersive virtual- and augmented-reality headsets can overlay a flat image against any surface or hang virtual objects in the space around the user. The technology is rapidly improving and may, in the long term, replace traditional flat panel displays in many situations. When displays are no longer intrinsically flat, how should we use the space around the user for abstract data visualisation? In this paper, we ask this question with respect to origin-destination flow data in a global geographic context. We report on the findings of three studies exploring different spatial encodings for flow maps. The first experiment focuses on different 2D and 3D encodings for flows on flat maps. We find that participants are significantly more accurate with raised flow paths whose height is proportional to flow distance but fastest with traditional straight line 2D flows. In our second and third experiment we compared flat maps, 3D globes and a novel interactive design we call<i>MapsLink</i>, involving a pair of linked flat maps. We find that participants took significantly more time with MapsLink than other flow maps while the 3D globe with raised flows was the fastest, most accurate, and most preferred method. Our work suggests that<i>careful</i>use of the third spatial dimension can resolve visual clutter in complex flow maps.",
                "AuthorNamesDeduped": "Yalong Yang 0001;Tim Dwyer;Bernhard Jenny;Kim Marriott;Maxime Cordeil;Haohui Chen",
                "AuthorNames": "Yalong Yang;Tim Dwyer;Bernhard Jenny;Kim Marriott;Maxime Cordeil;Haohui Chen",
                "AuthorAffiliation": "Monash University, Clayton, VIC, AU;Monash University, Clayton, VIC, AU;Monash University, Clayton, VIC, AU;Monash University, Clayton, VIC, AU;Monash University, Clayton, VIC, AU;Commonwealth Scientific and Industrial Research Organisation, Canberra, ACT, AU",
                "InternalReferences": "0.1109/tvcg.2016.2598958;10.1109/tvcg.2011.202;10.1109/tvcg.2007.70521;10.1109/infvis.1995.528697;10.1109/infvis.1996.559226;10.1109/infvis.2005.1532150;10.1109/tvcg.2011.181;10.1109/tvcg.2014.2346441;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2016.2599107",
                "AuthorKeywords": "Origin-destination,Flow Map,Virtual Reality,Cartographic Information Visualisation,Immersive Analytics",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 68,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 1767,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 654,
                "i": [
                    654
                ]
            }
        },
        {
            "name": "Manuel Stein",
            "value": 150,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 259,
            "x": 145.4106742098334,
            "y": -69.32341470125152,
            "vy": 0,
            "vx": 0,
            "r": 1.1727115716753023,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Halldor Janetzko",
            "value": 117,
            "numPapers": 14,
            "cluster": "3",
            "visible": 1,
            "index": 260,
            "x": -60.51026612793395,
            "y": 149.62789744271157,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Andreas Lamprecht",
            "value": 91,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 261,
            "x": -56.56205826865811,
            "y": -151.4949951794214,
            "vy": 0,
            "vx": 0,
            "r": 1.1047783534830167,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Thorsten Breitkreutz",
            "value": 91,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 262,
            "x": 144.31564956876423,
            "y": 73.64097561511284,
            "vy": 0,
            "vx": 0,
            "r": 1.1047783534830167,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Philipp Zimmermann",
            "value": 91,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 263,
            "x": -156.45479326905237,
            "y": 43.265432658625805,
            "vy": 0,
            "vx": 0,
            "r": 1.1047783534830167,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Bastian Goldlücke",
            "value": 91,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 264,
            "x": 86.30281335122679,
            "y": -137.8471051841979,
            "vy": 0,
            "vx": 0,
            "r": 1.1047783534830167,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Tobias Schreck",
            "value": 631,
            "numPapers": 210,
            "cluster": "3",
            "visible": 1,
            "index": 265,
            "x": 29.533043241712807,
            "y": 160.2429385554419,
            "vy": 0,
            "vx": 0,
            "r": 1.7265400115141047,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Visual analytics for the big data era---A comparative review of state-of-the-art commercial systems",
                "DOI": "10.1109/vast.2012.6400554",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400554",
                "FirstPage": 173,
                "LastPage": 182,
                "PaperType": "C",
                "Abstract": "Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.",
                "AuthorNamesDeduped": "Leishi Zhang;Andreas Stoffel;Michael Behrisch 0001;Sebastian Mittelstädt;Tobias Schreck;René Pompl;Stefan Weber 0004;Holger Last;Daniel A. Keim",
                "AuthorNames": "Leishi Zhang;Andreas Stoffel;Michael Behrisch;Sebastian Mittelstadt;Tobias Schreck;René Pompl;Stefan Weber;Holger Last;Daniel Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;Siemens AG;Siemens AG;Siemens AG;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/infvis.2004.64;10.1109/infvis.2000.885098",
                "AuthorKeywords": null,
                "AminerCitationCount": 229,
                "CitationCountCrossRef": 97,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 3861,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1490,
                "i": [
                    1490
                ]
            }
        },
        {
            "name": "Michael Grossniklaus",
            "value": 91,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 266,
            "x": -130.26392398269905,
            "y": -98.39364872099013,
            "vy": 0,
            "vx": 0,
            "r": 1.1047783534830167,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis",
                "DOI": "10.1109/tvcg.2017.2745181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745181",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.",
                "AuthorNamesDeduped": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady L. Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorNames": "Manuel Stein;Halldor Janetzko;Andreas Lamprecht;Thorsten Breitkreutz;Philipp Zimmermann;Bastian Goldlücke;Tobias Schreck;Gennady Andrienko;Michael Grossniklaus;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz;University of Zürich;University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz;Graz University of Technology;Fraunhofer IAIS, Germany and City University, London, UK;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/visual.2003.1250401;10.1109/tvcg.2013.207;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192",
                "AuthorKeywords": "visual analytics,sport analytics,immersive analytics",
                "AminerCitationCount": 140,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 4328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 842,
                "i": [
                    842
                ]
            }
        },
        {
            "name": "Jiachen Wang",
            "value": 344,
            "numPapers": 47,
            "cluster": "3",
            "visible": 1,
            "index": 267,
            "x": 162.8211417533952,
            "y": -15.468542210589199,
            "vy": 0,
            "vx": 0,
            "r": 1.3960852043753598,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "ForVizor: Visualizing Spatio-Temporal Team Formations in Soccer",
                "DOI": "10.1109/tvcg.2018.2865041",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865041",
                "FirstPage": 65,
                "LastPage": 75,
                "PaperType": "J",
                "Abstract": "Regarded as a high-level tactic in soccer, a team formation assigns players different tasks and indicates their active regions on the pitch, thereby influencing the team performance significantly. Analysis of formations in soccer has become particularly indispensable for soccer analysts. However, formations of a team are intrinsically time-varying and contain inherent spatial information. The spatio-temporal nature of formations and other characteristics of soccer data, such as multivariate features, make analysis of formations in soccer a challenging problem. In this study, we closely worked with domain experts to characterize domain problems of formation analysis in soccer and formulated several design goals. We design a novel spatio-temporal visual representation of changes in team formation, allowing analysts to visually analyze the evolution of formations and track the spatial flow of players within formations over time. Based on the new design, we further design and develop ForVizor, a visual analytics system, which empowers users to track the spatio-temporal changes in formation and understand how and why such changes occur. With ForVizor, domain experts conduct formation analysis of two games. Analysis results with insights and useful feedback are summarized in two case studies.",
                "AuthorNamesDeduped": "Yingcai Wu;Xiao Xie;Jiachen Wang;Dazhen Deng;Hongye Liang;Hui Zhang 0051;Shoubin Cheng;Wei Chen 0001",
                "AuthorNames": "Yingcai Wu;Xiao Xie;Jiachen Wang;Dazhen Deng;Hongye Liang;Hui Zhang;Shoubin Cheng;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Department of Sport Science, Zhejiang University;Department of Sport Science, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/vast.2014.7042477;10.1109/tvcg.2018.2865018;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218",
                "AuthorKeywords": "Soccer data,formation analysis,spatio-temporal visualization",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2547,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 733,
                "i": [
                    733
                ]
            }
        },
        {
            "name": "Dazhen Deng",
            "value": 335,
            "numPapers": 152,
            "cluster": "3",
            "visible": 1,
            "index": 268,
            "x": -109.81508061194965,
            "y": 121.61680833746215,
            "vy": 0,
            "vx": 0,
            "r": 1.3857225100748418,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "Hongye Liang",
            "value": 186,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 269,
            "x": -1.1789401699316637,
            "y": -164.1603182869591,
            "vy": 0,
            "vx": 0,
            "r": 1.2141623488773747,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "ForVizor: Visualizing Spatio-Temporal Team Formations in Soccer",
                "DOI": "10.1109/tvcg.2018.2865041",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865041",
                "FirstPage": 65,
                "LastPage": 75,
                "PaperType": "J",
                "Abstract": "Regarded as a high-level tactic in soccer, a team formation assigns players different tasks and indicates their active regions on the pitch, thereby influencing the team performance significantly. Analysis of formations in soccer has become particularly indispensable for soccer analysts. However, formations of a team are intrinsically time-varying and contain inherent spatial information. The spatio-temporal nature of formations and other characteristics of soccer data, such as multivariate features, make analysis of formations in soccer a challenging problem. In this study, we closely worked with domain experts to characterize domain problems of formation analysis in soccer and formulated several design goals. We design a novel spatio-temporal visual representation of changes in team formation, allowing analysts to visually analyze the evolution of formations and track the spatial flow of players within formations over time. Based on the new design, we further design and develop ForVizor, a visual analytics system, which empowers users to track the spatio-temporal changes in formation and understand how and why such changes occur. With ForVizor, domain experts conduct formation analysis of two games. Analysis results with insights and useful feedback are summarized in two case studies.",
                "AuthorNamesDeduped": "Yingcai Wu;Xiao Xie;Jiachen Wang;Dazhen Deng;Hongye Liang;Hui Zhang 0051;Shoubin Cheng;Wei Chen 0001",
                "AuthorNames": "Yingcai Wu;Xiao Xie;Jiachen Wang;Dazhen Deng;Hongye Liang;Hui Zhang;Shoubin Cheng;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Department of Sport Science, Zhejiang University;Department of Sport Science, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/vast.2014.7042477;10.1109/tvcg.2018.2865018;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218",
                "AuthorKeywords": "Soccer data,formation analysis,spatio-temporal visualization",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2547,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 733,
                "i": [
                    733
                ]
            }
        },
        {
            "name": "Shoubin Cheng",
            "value": 186,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 270,
            "x": 111.96516676843552,
            "y": 120.4732394788005,
            "vy": 0,
            "vx": 0,
            "r": 1.2141623488773747,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "ForVizor: Visualizing Spatio-Temporal Team Formations in Soccer",
                "DOI": "10.1109/tvcg.2018.2865041",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865041",
                "FirstPage": 65,
                "LastPage": 75,
                "PaperType": "J",
                "Abstract": "Regarded as a high-level tactic in soccer, a team formation assigns players different tasks and indicates their active regions on the pitch, thereby influencing the team performance significantly. Analysis of formations in soccer has become particularly indispensable for soccer analysts. However, formations of a team are intrinsically time-varying and contain inherent spatial information. The spatio-temporal nature of formations and other characteristics of soccer data, such as multivariate features, make analysis of formations in soccer a challenging problem. In this study, we closely worked with domain experts to characterize domain problems of formation analysis in soccer and formulated several design goals. We design a novel spatio-temporal visual representation of changes in team formation, allowing analysts to visually analyze the evolution of formations and track the spatial flow of players within formations over time. Based on the new design, we further design and develop ForVizor, a visual analytics system, which empowers users to track the spatio-temporal changes in formation and understand how and why such changes occur. With ForVizor, domain experts conduct formation analysis of two games. Analysis results with insights and useful feedback are summarized in two case studies.",
                "AuthorNamesDeduped": "Yingcai Wu;Xiao Xie;Jiachen Wang;Dazhen Deng;Hongye Liang;Hui Zhang 0051;Shoubin Cheng;Wei Chen 0001",
                "AuthorNames": "Yingcai Wu;Xiao Xie;Jiachen Wang;Dazhen Deng;Hongye Liang;Hui Zhang;Shoubin Cheng;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Department of Sport Science, Zhejiang University;Department of Sport Science, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/vast.2014.7042477;10.1109/tvcg.2018.2865018;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218",
                "AuthorKeywords": "Soccer data,formation analysis,spatio-temporal visualization",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2547,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 733,
                "i": [
                    733
                ]
            }
        },
        {
            "name": "Yifan Wang",
            "value": 114,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 271,
            "x": -164.24088181672334,
            "y": -13.226214124423784,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization",
                "DOI": "10.1109/tvcg.2020.3030392",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030392",
                "FirstPage": 860,
                "LastPage": 869,
                "PaperType": "J",
                "Abstract": "We present ShuttleSpace, an immersive analytics system to assist experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movement of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve the players' performance. However, existing visual analytics systems often present the trajectories in court diagrams that are abstractions of reality, thereby causing difficulty for the experts to imagine the situation on the court and understand why the player acted in a certain way. With recent developments in immersive technologies, such as virtual reality (VR), experts gradually have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet, few research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without keyboard and mouse) in VR. To address these challenges, we have worked closely with domain experts who have worked for a top national badminton team to design ShuttleSpace. Our system leverages 1) the peripheral vision to combine the 2D and 3D visualizations and 2) the VR controller to support natural interactions via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts with useful insights. We further conduct interviews with the experts whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.",
                "AuthorNamesDeduped": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou 0001;Yingcai Wu",
                "AuthorNames": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD CG, Zhejiang University;Hong Kong University of Science and Technology;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University;Zhejiang Lab;Chengdu Sports University;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2019.2934332;10.1109/vast.2014.7042478;10.1109/tvcg.2018.2865191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Movement trajectory,badminton analytics,virtual reality",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2111,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 365,
                "i": [
                    365
                ]
            }
        },
        {
            "name": "Siwei Fu",
            "value": 152,
            "numPapers": 57,
            "cluster": "3",
            "visible": 1,
            "index": 272,
            "x": 130.27955880340954,
            "y": -101.3767061902731,
            "vy": 0,
            "vx": 0,
            "r": 1.1750143926309728,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization",
                "DOI": "10.1109/tvcg.2020.3030392",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030392",
                "FirstPage": 860,
                "LastPage": 869,
                "PaperType": "J",
                "Abstract": "We present ShuttleSpace, an immersive analytics system to assist experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movement of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve the players' performance. However, existing visual analytics systems often present the trajectories in court diagrams that are abstractions of reality, thereby causing difficulty for the experts to imagine the situation on the court and understand why the player acted in a certain way. With recent developments in immersive technologies, such as virtual reality (VR), experts gradually have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet, few research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without keyboard and mouse) in VR. To address these challenges, we have worked closely with domain experts who have worked for a top national badminton team to design ShuttleSpace. Our system leverages 1) the peripheral vision to combine the 2D and 3D visualizations and 2) the VR controller to support natural interactions via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts with useful insights. We further conduct interviews with the experts whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.",
                "AuthorNamesDeduped": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou 0001;Yingcai Wu",
                "AuthorNames": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD CG, Zhejiang University;Hong Kong University of Science and Technology;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University;Zhejiang Lab;Chengdu Sports University;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2019.2934332;10.1109/vast.2014.7042478;10.1109/tvcg.2018.2865191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Movement trajectory,badminton analytics,virtual reality",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2111,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 365,
                "i": [
                    365
                ]
            }
        },
        {
            "name": "Lejun Shen",
            "value": 110,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 273,
            "x": -27.635679547610934,
            "y": 163.05296445002696,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization",
                "DOI": "10.1109/tvcg.2020.3030392",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030392",
                "FirstPage": 860,
                "LastPage": 869,
                "PaperType": "J",
                "Abstract": "We present ShuttleSpace, an immersive analytics system to assist experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movement of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve the players' performance. However, existing visual analytics systems often present the trajectories in court diagrams that are abstractions of reality, thereby causing difficulty for the experts to imagine the situation on the court and understand why the player acted in a certain way. With recent developments in immersive technologies, such as virtual reality (VR), experts gradually have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet, few research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without keyboard and mouse) in VR. To address these challenges, we have worked closely with domain experts who have worked for a top national badminton team to design ShuttleSpace. Our system leverages 1) the peripheral vision to combine the 2D and 3D visualizations and 2) the VR controller to support natural interactions via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts with useful insights. We further conduct interviews with the experts whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.",
                "AuthorNamesDeduped": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou 0001;Yingcai Wu",
                "AuthorNames": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD CG, Zhejiang University;Hong Kong University of Science and Technology;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University;Zhejiang Lab;Chengdu Sports University;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2019.2934332;10.1109/vast.2014.7042478;10.1109/tvcg.2018.2865191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Movement trajectory,badminton analytics,virtual reality",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2111,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 365,
                "i": [
                    365
                ]
            }
        },
        {
            "name": "Kun Zhou 0001",
            "value": 110,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 274,
            "x": -89.92695574412664,
            "y": -139.15150962384092,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization",
                "DOI": "10.1109/tvcg.2020.3030392",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030392",
                "FirstPage": 860,
                "LastPage": 869,
                "PaperType": "J",
                "Abstract": "We present ShuttleSpace, an immersive analytics system to assist experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movement of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve the players' performance. However, existing visual analytics systems often present the trajectories in court diagrams that are abstractions of reality, thereby causing difficulty for the experts to imagine the situation on the court and understand why the player acted in a certain way. With recent developments in immersive technologies, such as virtual reality (VR), experts gradually have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet, few research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without keyboard and mouse) in VR. To address these challenges, we have worked closely with domain experts who have worked for a top national badminton team to design ShuttleSpace. Our system leverages 1) the peripheral vision to combine the 2D and 3D visualizations and 2) the VR controller to support natural interactions via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts with useful insights. We further conduct interviews with the experts whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.",
                "AuthorNamesDeduped": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou 0001;Yingcai Wu",
                "AuthorNames": "Shuainan Ye;Zhutian Chen;Xiangtong Chu;Yifan Wang;Siwei Fu;Lejun Shen;Kun Zhou;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD CG, Zhejiang University;Hong Kong University of Science and Technology;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University;Zhejiang Lab;Chengdu Sports University;State Key Lab of CAD CG, Zhejiang University;State Key Lab of CAD CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2019.2934332;10.1109/vast.2014.7042478;10.1109/tvcg.2018.2865191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Movement trajectory,badminton analytics,virtual reality",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2111,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 365,
                "i": [
                    365
                ]
            }
        },
        {
            "name": "Evanthia Dimara",
            "value": 154,
            "numPapers": 184,
            "cluster": "5",
            "visible": 1,
            "index": 275,
            "x": 160.5965614299588,
            "y": 41.93738733962182,
            "vy": 0,
            "vx": 0,
            "r": 1.1773172135866437,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "What is Interaction for Data Visualization?",
                "DOI": "10.1109/tvcg.2019.2934283",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934283",
                "FirstPage": 119,
                "LastPage": 129,
                "PaperType": "J",
                "Abstract": "Interaction is fundamental to data visualization, but what “interaction” means in the context of visualization is ambiguous and confusing. We argue that this confusion is due to a lack of consensual definition. To tackle this problem, we start by synthesizing an inclusive view of interaction in the visualization community – including insights from information visualization, visual analytics and scientific visualization, as well as the input of both senior and junior visualization researchers. Once this view takes shape, we look at how interaction is defined in the field of human-computer interaction (HCI). By extracting commonalities and differences between the views of interaction in visualization and in HCI, we synthesize a definition of interaction for visualization. Our definition is meant to be a thinking tool and inspire novel and bolder interaction design practices. We hope that by better understanding what interaction in visualization is and what it can be, we will enrich the quality of interaction in visualization systems and empower those who use them.",
                "AuthorNamesDeduped": "Evanthia Dimara;Charles Perin",
                "AuthorNames": "Evanthia Dimara;Charles Perin",
                "AuthorAffiliation": "Sorbonne University;University of Victoria",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/infvis.2000.885092;10.1109/infvis.1998.729560;10.1109/infvis.1996.559213;10.1109/tvcg.2018.2865233;10.1109/vast.2008.4677365;10.1109/tvcg.2015.2467613;10.1109/vast.2011.6102473;10.1109/tvcg.2013.134;10.1109/tvcg.2016.2598620;10.1109/tvcg.2008.109;10.1109/tvcg.2018.2865159;10.1109/tvcg.2012.204;10.1109/tvcg.2013.191;10.1109/tvcg.2010.157;10.1109/tvcg.2010.177;10.1109/tvcg.2014.2346573;10.1109/tvcg.2018.2864913;10.1109/tvcg.2009.111;10.1109/tvcg.2014.2346311;10.1109/tvcg.2018.2865237;10.1109/tvcg.2007.70541;10.1109/tvcg.2013.130;10.1109/tvcg.2016.2598839;10.1109/tvcg.2013.120;10.1109/tvcg.2015.2467831;10.1109/tvcg.2007.70577;10.1109/tvcg.2017.2745958;10.1109/tvcg.2016.2598608;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "interaction,visualization,data,definition,human-computer interaction",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 127,
                "DownloadsXplore": 4809,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 524,
                "i": [
                    524
                ]
            }
        },
        {
            "name": "Ross Maciejewski",
            "value": 490,
            "numPapers": 225,
            "cluster": "1",
            "visible": 1,
            "index": 276,
            "x": -147.01329206401527,
            "y": 77.69872557835514,
            "vy": 0,
            "vx": 0,
            "r": 1.5641911341393206,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Proactive Spatiotemporal Resource Allocation and Predictive Visual Analytics for Community Policing and Law Enforcement",
                "DOI": "10.1109/tvcg.2014.2346926",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346926",
                "FirstPage": 1863,
                "LastPage": 1872,
                "PaperType": "J",
                "Abstract": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
                "AuthorNamesDeduped": "Abish Malik;Ross Maciejewski;Sherry Towers;Sean McCullough;David S. Ebert",
                "AuthorNames": "Abish Malik;Ross Maciejewski;Sherry Towers;Sean McCullough;David S. Ebert",
                "AuthorAffiliation": "Purdue University;Arizona State University;Arizona State University;Purdue University;Purdue University",
                "InternalReferences": "0.1109/tvcg.2013.125;10.1109/tvcg.2013.206;10.1109/vast.2012.6400491;10.1109/vast.2007.4389006;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Visual Analytics, Natural Scales, Seasonal Trend decomposition based on Loess (STL), Law Enforcement",
                "AminerCitationCount": 97,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1909,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1257,
                "i": [
                    1257
                ]
            }
        },
        {
            "name": "Emre Oral",
            "value": 0,
            "numPapers": 35,
            "cluster": "5",
            "visible": 1,
            "index": 277,
            "x": 56.019318032625776,
            "y": -156.8815986888186,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making",
                "DOI": "10.1109/tvcg.2023.3326593",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326593",
                "FirstPage": 359,
                "LastPage": 369,
                "PaperType": "J",
                "Abstract": "In the face of complex decisions, people often engage in a three-stage process that spans from (1) exploring and analyzing pertinent information (intelligence); (2) generating and exploring alternative options (design); and ultimately culminating in (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid in the “intelligence” stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely “design” and “choice”? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility I evels. Our curated list of the 88 surveyed visualization tools is available in the OSF link (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb).",
                "AuthorNamesDeduped": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorNames": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorAffiliation": "Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands",
                "InternalReferences": "10.1109/vast.2011.6102457;10.1109/tvcg.2019.2934262;10.1109/vast.2007.4388995;10.1109/visual.1999.809923;10.1109/tvcg.2021.3114830;10.1109/tvcg.2021.3114760;10.1109/tvcg.2021.3114803;10.1109/tvcg.2018.2865233;10.1109/tvcg.2017.2745138;10.1109/tvcg.2019.2934283;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030469;10.1109/vast.2015.7347636;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.173;10.1109/tvcg.2013.134;10.1109/tvcg.2020.3030335;10.1109/tvcg.2017.2744299;10.1109/tvcg.2018.2865159;10.1109/tvcg.2022.3209451;10.1109/tvcg.2016.2598432;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598589;10.1109/tvcg.2012.261;10.1109/vast.2009.5333920;10.1109/tvcg.2015.2468011;10.1109/vast.2017.8585669;10.1109/tvcg.2017.2745078;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2865126;10.1109/tvcg.2020.3030458;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2744738;10.1109/tvcg.2018.2865020",
                "AuthorKeywords": "Decision making,visualization,state of the art,review,survey,design,interaction,multi-criteria decision making,MCDM",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 106,
                "DownloadsXplore": 374,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 33,
                "i": [
                    33
                ]
            }
        },
        {
            "name": "Ria Chawla",
            "value": 0,
            "numPapers": 35,
            "cluster": "5",
            "visible": 1,
            "index": 278,
            "x": 64.78150448123817,
            "y": 153.79647810384773,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making",
                "DOI": "10.1109/tvcg.2023.3326593",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326593",
                "FirstPage": 359,
                "LastPage": 369,
                "PaperType": "J",
                "Abstract": "In the face of complex decisions, people often engage in a three-stage process that spans from (1) exploring and analyzing pertinent information (intelligence); (2) generating and exploring alternative options (design); and ultimately culminating in (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid in the “intelligence” stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely “design” and “choice”? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility I evels. Our curated list of the 88 surveyed visualization tools is available in the OSF link (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb).",
                "AuthorNamesDeduped": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorNames": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorAffiliation": "Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands",
                "InternalReferences": "10.1109/vast.2011.6102457;10.1109/tvcg.2019.2934262;10.1109/vast.2007.4388995;10.1109/visual.1999.809923;10.1109/tvcg.2021.3114830;10.1109/tvcg.2021.3114760;10.1109/tvcg.2021.3114803;10.1109/tvcg.2018.2865233;10.1109/tvcg.2017.2745138;10.1109/tvcg.2019.2934283;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030469;10.1109/vast.2015.7347636;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.173;10.1109/tvcg.2013.134;10.1109/tvcg.2020.3030335;10.1109/tvcg.2017.2744299;10.1109/tvcg.2018.2865159;10.1109/tvcg.2022.3209451;10.1109/tvcg.2016.2598432;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598589;10.1109/tvcg.2012.261;10.1109/vast.2009.5333920;10.1109/tvcg.2015.2468011;10.1109/vast.2017.8585669;10.1109/tvcg.2017.2745078;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2865126;10.1109/tvcg.2020.3030458;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2744738;10.1109/tvcg.2018.2865020",
                "AuthorKeywords": "Decision making,visualization,state of the art,review,survey,design,interaction,multi-criteria decision making,MCDM",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 106,
                "DownloadsXplore": 374,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 33,
                "i": [
                    33
                ]
            }
        },
        {
            "name": "Michel Wijkstra",
            "value": 0,
            "numPapers": 35,
            "cluster": "5",
            "visible": 1,
            "index": 279,
            "x": -151.92792225749776,
            "y": -69.7703836776014,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making",
                "DOI": "10.1109/tvcg.2023.3326593",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326593",
                "FirstPage": 359,
                "LastPage": 369,
                "PaperType": "J",
                "Abstract": "In the face of complex decisions, people often engage in a three-stage process that spans from (1) exploring and analyzing pertinent information (intelligence); (2) generating and exploring alternative options (design); and ultimately culminating in (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid in the “intelligence” stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely “design” and “choice”? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility I evels. Our curated list of the 88 surveyed visualization tools is available in the OSF link (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb).",
                "AuthorNamesDeduped": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorNames": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorAffiliation": "Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands",
                "InternalReferences": "10.1109/vast.2011.6102457;10.1109/tvcg.2019.2934262;10.1109/vast.2007.4388995;10.1109/visual.1999.809923;10.1109/tvcg.2021.3114830;10.1109/tvcg.2021.3114760;10.1109/tvcg.2021.3114803;10.1109/tvcg.2018.2865233;10.1109/tvcg.2017.2745138;10.1109/tvcg.2019.2934283;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030469;10.1109/vast.2015.7347636;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.173;10.1109/tvcg.2013.134;10.1109/tvcg.2020.3030335;10.1109/tvcg.2017.2744299;10.1109/tvcg.2018.2865159;10.1109/tvcg.2022.3209451;10.1109/tvcg.2016.2598432;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598589;10.1109/tvcg.2012.261;10.1109/vast.2009.5333920;10.1109/tvcg.2015.2468011;10.1109/vast.2017.8585669;10.1109/tvcg.2017.2745078;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2865126;10.1109/tvcg.2020.3030458;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2744738;10.1109/tvcg.2018.2865020",
                "AuthorKeywords": "Decision making,visualization,state of the art,review,survey,design,interaction,multi-criteria decision making,MCDM",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 106,
                "DownloadsXplore": 374,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 33,
                "i": [
                    33
                ]
            }
        },
        {
            "name": "Narges Mahyar",
            "value": 124,
            "numPapers": 63,
            "cluster": "5",
            "visible": 1,
            "index": 280,
            "x": 159.44060018557727,
            "y": -51.27080077844402,
            "vy": 0,
            "vx": 0,
            "r": 1.1427748992515832,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making",
                "DOI": "10.1109/tvcg.2023.3326593",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326593",
                "FirstPage": 359,
                "LastPage": 369,
                "PaperType": "J",
                "Abstract": "In the face of complex decisions, people often engage in a three-stage process that spans from (1) exploring and analyzing pertinent information (intelligence); (2) generating and exploring alternative options (design); and ultimately culminating in (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid in the “intelligence” stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely “design” and “choice”? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility I evels. Our curated list of the 88 surveyed visualization tools is available in the OSF link (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb).",
                "AuthorNamesDeduped": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorNames": "Emre Oral;Ria Chawla;Michel Wijkstra;Narges Mahyar;Evanthia Dimara",
                "AuthorAffiliation": "Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands;University of Massachusetts Amherst, United States;Utrecht University, Netherlands",
                "InternalReferences": "10.1109/vast.2011.6102457;10.1109/tvcg.2019.2934262;10.1109/vast.2007.4388995;10.1109/visual.1999.809923;10.1109/tvcg.2021.3114830;10.1109/tvcg.2021.3114760;10.1109/tvcg.2021.3114803;10.1109/tvcg.2018.2865233;10.1109/tvcg.2017.2745138;10.1109/tvcg.2019.2934283;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030469;10.1109/vast.2015.7347636;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.173;10.1109/tvcg.2013.134;10.1109/tvcg.2020.3030335;10.1109/tvcg.2017.2744299;10.1109/tvcg.2018.2865159;10.1109/tvcg.2022.3209451;10.1109/tvcg.2016.2598432;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2864913;10.1109/tvcg.2016.2598589;10.1109/tvcg.2012.261;10.1109/vast.2009.5333920;10.1109/tvcg.2015.2468011;10.1109/vast.2017.8585669;10.1109/tvcg.2017.2745078;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2865126;10.1109/tvcg.2020.3030458;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2744738;10.1109/tvcg.2018.2865020",
                "AuthorKeywords": "Decision making,visualization,state of the art,review,survey,design,interaction,multi-criteria decision making,MCDM",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 106,
                "DownloadsXplore": 374,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 33,
                "i": [
                    33
                ]
            }
        },
        {
            "name": "Anastasia Bezerianos",
            "value": 347,
            "numPapers": 119,
            "cluster": "5",
            "visible": 1,
            "index": 281,
            "x": -83.08130839473644,
            "y": 145.76520913928223,
            "vy": 0,
            "vx": 0,
            "r": 1.3995394358088658,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Comparing Similarity Perception in Time Series Visualizations",
                "DOI": "10.1109/tvcg.2018.2865077",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865077",
                "FirstPage": 523,
                "LastPage": 533,
                "PaperType": "J",
                "Abstract": "A common challenge faced by many domain experts working with time series data is how to identify and compare similar patterns. This operation is fundamental in high-level tasks, such as detecting recurring phenomena or creating clusters of similar temporal sequences. While automatic measures exist to compute time series similarity, human intervention is often required to visually inspect these automatically generated results. The visualization literature has examined similarity perception and its relation to automatic similarity measures for line charts, but has not yet considered if alternative visual representations, such as horizon graphs and colorfields, alter this perception. Motivated by how neuroscientists evaluate epileptiform patterns, we conducted two experiments that study how these three visualization techniques affect similarity perception in EEG signals. We seek to understand if the time series results returned from automatic similarity measures are perceived in a similar manner, irrespective of the visualization technique; and if what people perceive as similar with each visualization aligns with different automatic measures and their similarity constraints. Our findings indicate that horizon graphs align with similarity measures that allow local variations in temporal position or speed (i.e., dynamic time warping) more than the two other techniques. On the other hand, horizon graphs do not align with measures that are insensitive to amplitude and y-offset scaling (i.e., measures based on z-normalization), but the inverse seems to be the case for line charts and colorfields. Overall, our work indicates that the choice of visualization affects what temporal patterns we consider as similar, i.e., the notion of similarity in time series is not visualization independent.",
                "AuthorNamesDeduped": "Anna Gogolou;Theophanis Tsandilas;Themis Palpanas;Anastasia Bezerianos",
                "AuthorNames": "Anna Gogolou;Theophanis Tsandilas;Themis Palpanas;Anastasia Bezerianos",
                "AuthorAffiliation": "Universite Paris-Sud, Orsay, ÃŽle-de-France, FR;Universite Paris-Sud, Orsay, ÃŽle-de-France, FR;Universite Paris Descartes, Paris, ÃŽle-de-France, FR;Universite Paris-Sud, Orsay, ÃŽle-de-France, FR",
                "InternalReferences": "0.1109/tvcg.2011.232;10.1109/vast.2007.4389007;10.1109/tvcg.2008.166;10.1109/vast.2016.7883519;10.1109/tvcg.2010.162;10.1109/vast.2016.7883518;10.1109/infvis.2005.1532144;10.1109/tvcg.2012.196;10.1109/infvis.1999.801851;10.1109/infvis.2001.963273;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Time series,similarity perception,automatic similarity search,line charts,horizon graphs,colorfields,evaluation",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 2190,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 659,
                "i": [
                    659
                ]
            }
        },
        {
            "name": "Samuel Gratzl",
            "value": 433,
            "numPapers": 76,
            "cluster": "4",
            "visible": 1,
            "index": 282,
            "x": -37.26743138512027,
            "y": -163.8936806529018,
            "vy": 0,
            "vx": 0,
            "r": 1.4985607369027059,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics",
                "DOI": "10.1109/tvcg.2020.3030334",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030334",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative’ scenarios will require engaging features to draw users' attention, with intuitive, 'walk-up and use’ interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.",
                "AuthorNamesDeduped": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorNames": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2599107;10.1109/vast.2007.4389011;10.1109/vast.2010.5652880;10.1109/tvcg.2018.2865241;10.1109/vast.2007.4389006;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934538;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Data visualisation,tangible and embedded interaction,augmented reality,immersive analytics",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1968,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 460,
                "i": [
                    460
                ]
            }
        },
        {
            "name": "Di Weng",
            "value": 397,
            "numPapers": 172,
            "cluster": "3",
            "visible": 1,
            "index": 283,
            "x": 138.4327998800014,
            "y": 95.84550024588258,
            "vy": 0,
            "vx": 0,
            "r": 1.4571099597006332,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Stephan Pajer",
            "value": 64,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 284,
            "x": -167.11269746251827,
            "y": 22.876764342905556,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making",
                "DOI": "10.1109/tvcg.2016.2598589",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598589",
                "FirstPage": 611,
                "LastPage": 620,
                "PaperType": "J",
                "Abstract": "A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.",
                "AuthorNamesDeduped": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorNames": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;University Linz;University of Vienna;VRVis Research Center;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2015.2468011;10.1109/tvcg.2013.147;10.1109/vast.2015.7347686;10.1109/visual.1993.398859;10.1109/tvcg.2008.145;10.1109/vast.2011.6102457;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.190;10.1109/tvcg.2009.110;10.1109/vast.2010.5652460;10.1109/tvcg.2013.173;10.1109/tvcg.2011.248;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Visual analysis;decision making;multi-objective optimization;interactive ranking;rank sensitivity",
                "AminerCitationCount": 84,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1657,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 895,
                "i": [
                    895
                ]
            }
        },
        {
            "name": "Thomas Torsney-Weir",
            "value": 160,
            "numPapers": 40,
            "cluster": "6",
            "visible": 1,
            "index": 285,
            "x": 107.95990745706445,
            "y": -129.97945369119725,
            "vy": 0,
            "vx": 0,
            "r": 1.1842256764536556,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making",
                "DOI": "10.1109/tvcg.2016.2598589",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598589",
                "FirstPage": 611,
                "LastPage": 620,
                "PaperType": "J",
                "Abstract": "A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.",
                "AuthorNamesDeduped": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorNames": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;University Linz;University of Vienna;VRVis Research Center;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2015.2468011;10.1109/tvcg.2013.147;10.1109/vast.2015.7347686;10.1109/visual.1993.398859;10.1109/tvcg.2008.145;10.1109/vast.2011.6102457;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.190;10.1109/tvcg.2009.110;10.1109/vast.2010.5652460;10.1109/tvcg.2013.173;10.1109/tvcg.2011.248;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Visual analysis;decision making;multi-objective optimization;interactive ranking;rank sensitivity",
                "AminerCitationCount": 84,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1657,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 895,
                "i": [
                    895
                ]
            }
        },
        {
            "name": "Florian Spechtenhauser",
            "value": 72,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 286,
            "x": 8.207920572875613,
            "y": 169.06398208923557,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making",
                "DOI": "10.1109/tvcg.2016.2598589",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598589",
                "FirstPage": 611,
                "LastPage": 620,
                "PaperType": "J",
                "Abstract": "A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.",
                "AuthorNamesDeduped": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorNames": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;University Linz;University of Vienna;VRVis Research Center;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2015.2468011;10.1109/tvcg.2013.147;10.1109/vast.2015.7347686;10.1109/visual.1993.398859;10.1109/tvcg.2008.145;10.1109/vast.2011.6102457;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.190;10.1109/tvcg.2009.110;10.1109/vast.2010.5652460;10.1109/tvcg.2013.173;10.1109/tvcg.2011.248;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Visual analysis;decision making;multi-objective optimization;interactive ranking;rank sensitivity",
                "AminerCitationCount": 84,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1657,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 895,
                "i": [
                    895
                ]
            }
        },
        {
            "name": "Torsten Möller",
            "value": 1050,
            "numPapers": 159,
            "cluster": "6",
            "visible": 1,
            "index": 287,
            "x": -120.46302760445965,
            "y": -119.32585210409016,
            "vy": 0,
            "vx": 0,
            "r": 2.208981001727116,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making",
                "DOI": "10.1109/tvcg.2016.2598589",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598589",
                "FirstPage": 611,
                "LastPage": 620,
                "PaperType": "J",
                "Abstract": "A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.",
                "AuthorNamesDeduped": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorNames": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;University Linz;University of Vienna;VRVis Research Center;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2015.2468011;10.1109/tvcg.2013.147;10.1109/vast.2015.7347686;10.1109/visual.1993.398859;10.1109/tvcg.2008.145;10.1109/vast.2011.6102457;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.190;10.1109/tvcg.2009.110;10.1109/vast.2010.5652460;10.1109/tvcg.2013.173;10.1109/tvcg.2011.248;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Visual analysis;decision making;multi-objective optimization;interactive ranking;rank sensitivity",
                "AminerCitationCount": 84,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1657,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 895,
                "i": [
                    895
                ]
            }
        },
        {
            "name": "Harald Piringer",
            "value": 660,
            "numPapers": 157,
            "cluster": "6",
            "visible": 1,
            "index": 288,
            "x": 169.72354606271475,
            "y": 6.6270590685125645,
            "vy": 0,
            "vx": 0,
            "r": 1.7599309153713298,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making",
                "DOI": "10.1109/tvcg.2016.2598589",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598589",
                "FirstPage": 611,
                "LastPage": 620,
                "PaperType": "J",
                "Abstract": "A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.",
                "AuthorNamesDeduped": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorNames": "Stephan Pajer;Marc Streit;Thomas Torsney-Weir;Florian Spechtenhauser;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;University Linz;University of Vienna;VRVis Research Center;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2015.2468011;10.1109/tvcg.2013.147;10.1109/vast.2015.7347686;10.1109/visual.1993.398859;10.1109/tvcg.2008.145;10.1109/vast.2011.6102457;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.190;10.1109/tvcg.2009.110;10.1109/vast.2010.5652460;10.1109/tvcg.2013.173;10.1109/tvcg.2011.248;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Visual analysis;decision making;multi-objective optimization;interactive ranking;rank sensitivity",
                "AminerCitationCount": 84,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1657,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 895,
                "i": [
                    895
                ]
            }
        },
        {
            "name": "Hrvoje Ribicic",
            "value": 208,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 289,
            "x": -129.8498345156331,
            "y": 109.95008174741251,
            "vy": 0,
            "vx": 0,
            "r": 1.2394933793897525,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Sketching Uncertainty into Simulations",
                "DOI": "10.1109/tvcg.2012.261",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.261",
                "FirstPage": 2255,
                "LastPage": 2264,
                "PaperType": "J",
                "Abstract": "In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively setup complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.",
                "AuthorNamesDeduped": "Hrvoje Ribicic;Jürgen Waser;Roman Gurbat;Bernhard Sadransky;M. Eduard Gröller",
                "AuthorNames": "Hrvoje Ribicic;Juergen Waser;Roman Gurbat;Bernhard Sadransky;M. Eduard Gröller",
                "AuthorAffiliation": "VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Technical University of of Vienna, Austria",
                "InternalReferences": "0.1109/tvcg.2010.223;10.1109/tvcg.2011.225;10.1109/tvcg.2010.223;10.1109/tvcg.2010.202;10.1109/vast.2011.6102457",
                "AuthorKeywords": "Emergency/disaster management, interaction design, uncertainty visualization, sketch-based steering, ensemble simulation steering, integrated visualization system, flood management",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 649,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1466,
                "i": [
                    1466
                ]
            }
        },
        {
            "name": "Jürgen Waser",
            "value": 249,
            "numPapers": 33,
            "cluster": "6",
            "visible": 1,
            "index": 290,
            "x": 21.51407503276169,
            "y": -169.07733312151777,
            "vy": 0,
            "vx": 0,
            "r": 1.2867012089810017,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Sketching Uncertainty into Simulations",
                "DOI": "10.1109/tvcg.2012.261",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.261",
                "FirstPage": 2255,
                "LastPage": 2264,
                "PaperType": "J",
                "Abstract": "In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively setup complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.",
                "AuthorNamesDeduped": "Hrvoje Ribicic;Jürgen Waser;Roman Gurbat;Bernhard Sadransky;M. Eduard Gröller",
                "AuthorNames": "Hrvoje Ribicic;Juergen Waser;Roman Gurbat;Bernhard Sadransky;M. Eduard Gröller",
                "AuthorAffiliation": "VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Technical University of of Vienna, Austria",
                "InternalReferences": "0.1109/tvcg.2010.223;10.1109/tvcg.2011.225;10.1109/tvcg.2010.223;10.1109/tvcg.2010.202;10.1109/vast.2011.6102457",
                "AuthorKeywords": "Emergency/disaster management, interaction design, uncertainty visualization, sketch-based steering, ensemble simulation steering, integrated visualization system, flood management",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 649,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1466,
                "i": [
                    1466
                ]
            }
        },
        {
            "name": "Stephen Rudolph",
            "value": 45,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 291,
            "x": 98.5154134313982,
            "y": 139.44430184285298,
            "vy": 0,
            "vx": 0,
            "r": 1.0518134715025906,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "finVis: Applied visual analytics for personal financial planning",
                "DOI": "10.1109/vast.2009.5333920",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5333920",
                "FirstPage": 195,
                "LastPage": 202,
                "PaperType": "C",
                "Abstract": "FinVis is a visual analytics tool that allows the non-expert casual user to interpret the return, risk and correlation aspects of financial data and make personal finance decisions. This interactive exploratory tool helps the casual decision-maker quickly choose between various financial portfolio options and view possible outcomes. FinVis allows for exploration of inter-temporal data to analyze outcomes of short-term or long-term investment decisions. FinVis helps the user overcome cognitive limitations and understand the impact of correlation between financial instruments in order to reap the benefits of portfolio diversification. Because this software is accessible by non-expert users, decision-makers from the general population can benefit greatly from using FinVis in practical applications. We quantify the value of FinVis using experimental economics methods and find that subjects using the FinVis software make better financial portfolio decisions as compared to subjects using a tabular version with the same information. We also find that FinVis engages the user, which results in greater exploration of the dataset and increased learning as compared to a tabular display. Further, participants using FinVis reported increased confidence in financial decision-making and noted that they were likely to use this tool in practical application.",
                "AuthorNamesDeduped": "Stephen Rudolph;Anya Savikhin;David S. Ebert",
                "AuthorNames": "Stephen Rudolph;Anya Savikhin;David S. Ebert",
                "AuthorAffiliation": "Regional Visualization and Analytics Center, Purdue University, USA;Department of Economics, Purdue University, USA;Regional Visualization and Analytics Center, Purdue University, USA",
                "InternalReferences": "0.1109/infvis.2000.885098;10.1109/infvis.1997.636789;10.1109/tvcg.2007.70541;10.1109/infvis.2001.963273;10.1109/tvcg.2007.70589;10.1109/tvcg.2007.70577;10.1109/vast.2008.4677363;10.1109/infvis.2003.1249027;10.1109/infvis.1996.559222",
                "AuthorKeywords": "Casual Information Visualization, visual analytics, personal finance, visualization of risk, economic decision-making",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1243,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1860,
                "i": [
                    1860
                ]
            }
        },
        {
            "name": "Anya Savikhin",
            "value": 61,
            "numPapers": 8,
            "cluster": "5",
            "visible": 1,
            "index": 292,
            "x": -167.1213945449295,
            "y": -36.33785196400592,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "finVis: Applied visual analytics for personal financial planning",
                "DOI": "10.1109/vast.2009.5333920",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5333920",
                "FirstPage": 195,
                "LastPage": 202,
                "PaperType": "C",
                "Abstract": "FinVis is a visual analytics tool that allows the non-expert casual user to interpret the return, risk and correlation aspects of financial data and make personal finance decisions. This interactive exploratory tool helps the casual decision-maker quickly choose between various financial portfolio options and view possible outcomes. FinVis allows for exploration of inter-temporal data to analyze outcomes of short-term or long-term investment decisions. FinVis helps the user overcome cognitive limitations and understand the impact of correlation between financial instruments in order to reap the benefits of portfolio diversification. Because this software is accessible by non-expert users, decision-makers from the general population can benefit greatly from using FinVis in practical applications. We quantify the value of FinVis using experimental economics methods and find that subjects using the FinVis software make better financial portfolio decisions as compared to subjects using a tabular version with the same information. We also find that FinVis engages the user, which results in greater exploration of the dataset and increased learning as compared to a tabular display. Further, participants using FinVis reported increased confidence in financial decision-making and noted that they were likely to use this tool in practical application.",
                "AuthorNamesDeduped": "Stephen Rudolph;Anya Savikhin;David S. Ebert",
                "AuthorNames": "Stephen Rudolph;Anya Savikhin;David S. Ebert",
                "AuthorAffiliation": "Regional Visualization and Analytics Center, Purdue University, USA;Department of Economics, Purdue University, USA;Regional Visualization and Analytics Center, Purdue University, USA",
                "InternalReferences": "0.1109/infvis.2000.885098;10.1109/infvis.1997.636789;10.1109/tvcg.2007.70541;10.1109/infvis.2001.963273;10.1109/tvcg.2007.70589;10.1109/tvcg.2007.70577;10.1109/vast.2008.4677363;10.1109/infvis.2003.1249027;10.1109/infvis.1996.559222",
                "AuthorKeywords": "Casual Information Visualization, visual analytics, personal finance, visualization of risk, economic decision-making",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1243,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1860,
                "i": [
                    1860
                ]
            }
        },
        {
            "name": "Emily Wall",
            "value": 313,
            "numPapers": 84,
            "cluster": "4",
            "visible": 1,
            "index": 293,
            "x": 148.02837449345364,
            "y": -86.24152332157549,
            "vy": 0,
            "vx": 0,
            "r": 1.360391479562464,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Podium: Ranking Data Using Mixed-Initiative Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745078",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745078",
                "FirstPage": 288,
                "LastPage": 297,
                "PaperType": "J",
                "Abstract": "People often rank and order data points as a vital part of making decisions. Multi-attribute ranking systems are a common tool used to make these data-driven decisions. Such systems often take the form of a table-based visualization in which users assign weights to the attributes representing the quantifiable importance of each attribute to a decision, which the system then uses to compute a ranking of the data. However, these systems assume that users are able to quantify their conceptual understanding of how important particular attributes are to a decision. This is not always easy or even possible for users to do. Rather, people often have a more holistic understanding of the data. They form opinions that data point A is better than data point B but do not necessarily know which attributes are important. To address these challenges, we present a visual analytic application to help people rank multi-variate data points. We developed a prototype system, Podium, that allows users to drag rows in the table to rank order data points based on their perception of the relative value of the data. Podium then infers a weighting model using Ranking SVM that satisfies the user's data preferences as closely as possible. Whereas past systems help users understand the relationships between data points based on changes to attribute weights, our approach helps users to understand the attributes that might inform their understanding of the data. We present two usage scenarios to describe some of the potential uses of our proposed technique: (1) understanding which attributes contribute to a user's subjective preferences for data, and (2) deconstructing attributes of importance for existing rankings. Our proposed approach makes powerful machine learning techniques more usable to those who may not have expertise in these areas.",
                "AuthorNamesDeduped": "Emily Wall;Subhajit Das 0002;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorNames": "Emily Wall;Subhajit Das;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;DePaul University, Chicago, IL, USA;Georgia Institute of Technology, Atlanta, GA, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2015.7347625;10.1109/tvcg.2016.2598594;10.1109/vast.2011.6102449;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.253;10.1109/vast.2017.8585669",
                "AuthorKeywords": "Mixed-initiative visual analytics,multi-attribute ranking,user interaction",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1419,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 858,
                "i": [
                    858
                ]
            }
        },
        {
            "name": "Zikun Deng",
            "value": 242,
            "numPapers": 126,
            "cluster": "3",
            "visible": 1,
            "index": 294,
            "x": -50.98283635801276,
            "y": 163.86198582005557,
            "vy": 0,
            "vx": 0,
            "r": 1.2786413356361543,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Yifang Wang 0001",
            "value": 50,
            "numPapers": 55,
            "cluster": "1",
            "visible": 1,
            "index": 295,
            "x": -73.21801751802897,
            "y": -155.52852442793122,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology",
                "DOI": "10.1109/tvcg.2023.3327387",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327387",
                "FirstPage": 518,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation. Through both the case studies and the engagement project, we find that our system not only meets our original goals of design, allowing users to better identify the sources of technical inventions and to understand the broad impact of scientific research; it also goes beyond these purposes to enable an array of new applications for researchers and research institutions, ranging from identifying untapped innovation potential within an institution to forging new collaboration opportunities between science and industry.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Yifan Qian;Xiaoyu Qi;Nan Cao 0001;Dashun Wang",
                "AuthorNames": "Yifang Wang;Yifan Qian;Xiaoyu Qi;Nan Cao;Dashun Wang",
                "AuthorAffiliation": "The Center for Science of Science and Innovation, Northwestern University, USA;The Center for Science of Science and Innovation, Northwestern University, USA;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;The Center for Science of Science and Innovation, Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209427;10.1109/tvcg.2011.202;10.1109/tvcg.2011.226;10.1109/tvcg.2018.2864826;10.1109/tvcg.2012.252;10.1109/tvcg.2013.162;10.1109/visual.2001.964539;10.1109/tvcg.2022.3209422;10.1109/tvcg.2018.2865022;10.1109/tvcg.2019.2934667;10.1109/tvcg.2017.2745158;10.1109/tvcg.2021.3114820;10.1109/tvcg.2018.2865149;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.213;10.1109/tvcg.2021.3114787;10.1109/vast.2011.6102453;10.1109/tvcg.2021.3114794;10.1109/tvcg.2021.3114790;10.1109/tvcg.2022.3209360;10.1109/tvcg.2015.2468151",
                "AuthorKeywords": "Science of Science,Innovation,Academic Profiles,Patent Data,Publication Data,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 369,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 34,
                "i": [
                    34
                ]
            }
        },
        {
            "name": "Yifan Qian",
            "value": 0,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 296,
            "x": 159.31558341653644,
            "y": 65.33410197323153,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology",
                "DOI": "10.1109/tvcg.2023.3327387",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327387",
                "FirstPage": 518,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation. Through both the case studies and the engagement project, we find that our system not only meets our original goals of design, allowing users to better identify the sources of technical inventions and to understand the broad impact of scientific research; it also goes beyond these purposes to enable an array of new applications for researchers and research institutions, ranging from identifying untapped innovation potential within an institution to forging new collaboration opportunities between science and industry.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Yifan Qian;Xiaoyu Qi;Nan Cao 0001;Dashun Wang",
                "AuthorNames": "Yifang Wang;Yifan Qian;Xiaoyu Qi;Nan Cao;Dashun Wang",
                "AuthorAffiliation": "The Center for Science of Science and Innovation, Northwestern University, USA;The Center for Science of Science and Innovation, Northwestern University, USA;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;The Center for Science of Science and Innovation, Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209427;10.1109/tvcg.2011.202;10.1109/tvcg.2011.226;10.1109/tvcg.2018.2864826;10.1109/tvcg.2012.252;10.1109/tvcg.2013.162;10.1109/visual.2001.964539;10.1109/tvcg.2022.3209422;10.1109/tvcg.2018.2865022;10.1109/tvcg.2019.2934667;10.1109/tvcg.2017.2745158;10.1109/tvcg.2021.3114820;10.1109/tvcg.2018.2865149;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.213;10.1109/tvcg.2021.3114787;10.1109/vast.2011.6102453;10.1109/tvcg.2021.3114794;10.1109/tvcg.2021.3114790;10.1109/tvcg.2022.3209360;10.1109/tvcg.2015.2468151",
                "AuthorKeywords": "Science of Science,Innovation,Academic Profiles,Patent Data,Publication Data,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 369,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 34,
                "i": [
                    34
                ]
            }
        },
        {
            "name": "Xiaoyu Qi",
            "value": 0,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 297,
            "x": -161.87919966509247,
            "y": 59.54094990667447,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology",
                "DOI": "10.1109/tvcg.2023.3327387",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327387",
                "FirstPage": 518,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation. Through both the case studies and the engagement project, we find that our system not only meets our original goals of design, allowing users to better identify the sources of technical inventions and to understand the broad impact of scientific research; it also goes beyond these purposes to enable an array of new applications for researchers and research institutions, ranging from identifying untapped innovation potential within an institution to forging new collaboration opportunities between science and industry.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Yifan Qian;Xiaoyu Qi;Nan Cao 0001;Dashun Wang",
                "AuthorNames": "Yifang Wang;Yifan Qian;Xiaoyu Qi;Nan Cao;Dashun Wang",
                "AuthorAffiliation": "The Center for Science of Science and Innovation, Northwestern University, USA;The Center for Science of Science and Innovation, Northwestern University, USA;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;The Center for Science of Science and Innovation, Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209427;10.1109/tvcg.2011.202;10.1109/tvcg.2011.226;10.1109/tvcg.2018.2864826;10.1109/tvcg.2012.252;10.1109/tvcg.2013.162;10.1109/visual.2001.964539;10.1109/tvcg.2022.3209422;10.1109/tvcg.2018.2865022;10.1109/tvcg.2019.2934667;10.1109/tvcg.2017.2745158;10.1109/tvcg.2021.3114820;10.1109/tvcg.2018.2865149;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.213;10.1109/tvcg.2021.3114787;10.1109/vast.2011.6102453;10.1109/tvcg.2021.3114794;10.1109/tvcg.2021.3114790;10.1109/tvcg.2022.3209360;10.1109/tvcg.2015.2468151",
                "AuthorKeywords": "Science of Science,Innovation,Academic Profiles,Patent Data,Publication Data,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 369,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 34,
                "i": [
                    34
                ]
            }
        },
        {
            "name": "Dashun Wang",
            "value": 0,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 298,
            "x": 79.27825587141538,
            "y": -153.50882106897438,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology",
                "DOI": "10.1109/tvcg.2023.3327387",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327387",
                "FirstPage": 518,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation. Through both the case studies and the engagement project, we find that our system not only meets our original goals of design, allowing users to better identify the sources of technical inventions and to understand the broad impact of scientific research; it also goes beyond these purposes to enable an array of new applications for researchers and research institutions, ranging from identifying untapped innovation potential within an institution to forging new collaboration opportunities between science and industry.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Yifan Qian;Xiaoyu Qi;Nan Cao 0001;Dashun Wang",
                "AuthorNames": "Yifang Wang;Yifan Qian;Xiaoyu Qi;Nan Cao;Dashun Wang",
                "AuthorAffiliation": "The Center for Science of Science and Innovation, Northwestern University, USA;The Center for Science of Science and Innovation, Northwestern University, USA;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;The Center for Science of Science and Innovation, Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209427;10.1109/tvcg.2011.202;10.1109/tvcg.2011.226;10.1109/tvcg.2018.2864826;10.1109/tvcg.2012.252;10.1109/tvcg.2013.162;10.1109/visual.2001.964539;10.1109/tvcg.2022.3209422;10.1109/tvcg.2018.2865022;10.1109/tvcg.2019.2934667;10.1109/tvcg.2017.2745158;10.1109/tvcg.2021.3114820;10.1109/tvcg.2018.2865149;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.213;10.1109/tvcg.2021.3114787;10.1109/vast.2011.6102453;10.1109/tvcg.2021.3114794;10.1109/tvcg.2021.3114790;10.1109/tvcg.2022.3209360;10.1109/tvcg.2015.2468151",
                "AuthorKeywords": "Science of Science,Innovation,Academic Profiles,Patent Data,Publication Data,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 369,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 34,
                "i": [
                    34
                ]
            }
        },
        {
            "name": "Anna Vilanova",
            "value": 390,
            "numPapers": 122,
            "cluster": "6",
            "visible": 1,
            "index": 299,
            "x": 45.31210963696255,
            "y": 167.02338974002382,
            "vy": 0,
            "vx": 0,
            "r": 1.4490500863557858,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Class-Constrained t-SNE: Combining Data Features and Class Probabilities",
                "DOI": "10.1109/tvcg.2023.3326600",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326600",
                "FirstPage": 164,
                "LastPage": 174,
                "PaperType": "J",
                "Abstract": "Data features and class probabilities are two main perspectives when, e.g., evaluating model results and identifying problematic items. Class probabilities represent the likelihood that each instance belongs to a particular class, which can be produced by probabilistic classifiers or even human labeling with uncertainty. Since both perspectives are multi-dimensional data, dimensionality reduction (DR) techniques are commonly used to extract informative characteristics from them. However, existing methods either focus solely on the data feature perspective or rely on class probability estimates to guide the DR process. In contrast to previous work where separate views are linked to conduct the analysis, we propose a novel approach, class-constrained t-SNE, that combines data features and class probabilities in the same DR result. Specifically, we combine them by balancing two corresponding components in a cost function to optimize the positions of data points and iconic representation of classes – class landmarks. Furthermore, an interactive user-adjustable parameter balances these two components so that users can focus on the weighted perspectives of interest and also empowers a smooth visual transition between varying perspectives to preserve the mental map. We illustrate its application potential in model evaluation and visual-interactive labeling. A comparative analysis is performed to evaluate the DR results.",
                "AuthorNamesDeduped": "Linhao Meng;Stef van den Elzen;Nicola Pezzotti;Anna Vilanova",
                "AuthorNames": "Linhao Meng;Stef van den Elzen;Nicola Pezzotti;Anna Vilanova",
                "AuthorAffiliation": "Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.212;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/visual.1997.663916;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598445;10.1109/tvcg.2018.2864843;10.1109/tvcg.2019.2934631;10.1109/tvcg.2011.212;10.1109/tvcg.2019.2934307;10.1109/tvcg.2016.2598828;10.1109/visual.2000.885740;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Dimensionality reduction,t-distributed stochastic neighbor embedding,constraint integration",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 346,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 35,
                "i": [
                    35
                ]
            }
        },
        {
            "name": "Helwig Hauser",
            "value": 1086,
            "numPapers": 207,
            "cluster": "6",
            "visible": 1,
            "index": 300,
            "x": -146.47834560291983,
            "y": -92.70433792132691,
            "vy": 0,
            "vx": 0,
            "r": 2.2504317789291886,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Methods for Analyzing Probabilistic Classification Data",
                "DOI": "10.1109/tvcg.2014.2346660",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346660",
                "FirstPage": 1703,
                "LastPage": 1712,
                "PaperType": "J",
                "Abstract": "Multi-class classifiers often compute scores for the classification samples describing probabilities to belong to different classes. In order to improve the performance of such classifiers, machine learning experts need to analyze classification results for a large number of labeled samples to find possible reasons for incorrect classification. Confusion matrices are widely used for this purpose. However, they provide no information about classification scores and features computed for the samples. We propose a set of integrated visual methods for analyzing the performance of probabilistic classifiers. Our methods provide insight into different aspects of the classification results for a large number of samples. One visualization emphasizes at which probabilities these samples were classified and how these probabilities correlate with classification error in terms of false positives and false negatives. Another view emphasizes the features of these samples and ranks them by their separation power between selected true and false classifications. We demonstrate the insight gained using our technique in a benchmarking classification dataset, and show how it enables improving classification performance by interactively defining and evaluating post-classification rules.",
                "AuthorNamesDeduped": "Bilal Alsallakh;Allan Hanbury;Helwig Hauser;Silvia Miksch;Andreas Rauber",
                "AuthorNames": "Bilal Alsallakh;Allan Hanbury;Helwig Hauser;Silvia Miksch;Andreas Rauber",
                "AuthorAffiliation": "Vienna University of Technology;Vienna University of Technology;University of Bergen;Vienna University of Technology;Vienna University of Technology",
                "InternalReferences": "0.1109/visual.2000.885740;10.1109/vast.2010.5652398;10.1109/vast.2009.5332628;10.1109/tvcg.2012.277;10.1109/vast.2012.6400486;10.1109/tvcg.2013.184;10.1109/tvcg.2012.254;10.1109/vast.2011.6102448;10.1109/vast.2011.6102453;10.1109/vast.2012.6400492;10.1109/vast.2010.5652443",
                "AuthorKeywords": "Probabilistic classification, confusion analysis, feature evaluation and selection, visual inspection",
                "AminerCitationCount": 121,
                "CitationCountCrossRef": 82,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 2292,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1253,
                "i": [
                    1253
                ]
            }
        },
        {
            "name": "Stef van den Elzen",
            "value": 308,
            "numPapers": 36,
            "cluster": "4",
            "visible": 1,
            "index": 301,
            "x": 170.91312729764476,
            "y": -30.638259045498565,
            "vy": 0,
            "vx": 0,
            "r": 1.3546344271732873,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "BaobabView: Interactive construction and analysis of decision trees",
                "DOI": "10.1109/vast.2011.6102453",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102453",
                "FirstPage": 151,
                "LastPage": 160,
                "PaperType": "C",
                "Abstract": "We present a system for the interactive construction and analysis of decision trees that enables domain experts to bring in domain specific knowledge. We identify different user tasks and corresponding requirements, and develop a system incorporating a tight integration of visualization, interaction and algorithmic support. Domain experts are supported in growing, pruning, optimizing and analysing decision trees. Furthermore, we present a scalable decision tree visualization optimized for exploration. We show the effectiveness of our approach by applying the methods to two use cases. The first case illustrates the advantages of interactive construction, the second case demonstrates the effectiveness of analysis of decision trees and exploration of the structure of the data.",
                "AuthorNamesDeduped": "Stef van den Elzen;Jarke J. van Wijk",
                "AuthorNames": "Stef van den Elzen;Jarke J. van Wijk",
                "AuthorAffiliation": "Eindhovan University of Technology, Netherlands;Eindhovan University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/infvis.2001.963292;10.1109/infvis.2001.963290",
                "AuthorKeywords": null,
                "AminerCitationCount": 187,
                "CitationCountCrossRef": 92,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 3241,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1581,
                "i": [
                    1581
                ]
            }
        },
        {
            "name": "Boudewijn P. F. Lelieveldt",
            "value": 220,
            "numPapers": 51,
            "cluster": "1",
            "visible": 1,
            "index": 302,
            "x": -105.50470661181035,
            "y": 138.2705929789694,
            "vy": 0,
            "vx": 0,
            "r": 1.2533103051237766,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "GPGPU Linear Complexity t-SNE Optimization",
                "DOI": "10.1109/tvcg.2019.2934307",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934307",
                "FirstPage": 1172,
                "LastPage": 1181,
                "PaperType": "J",
                "Abstract": "In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library.",
                "AuthorNamesDeduped": "Nicola Pezzotti;Julian Thijssen;Alexander Mordvintsev;Thomas Höllt;Baldur van Lew;Boudewijn P. F. Lelieveldt;Elmar Eisemann;Anna Vilanova",
                "AuthorNames": "Nicola Pezzotti;Julian Thijssen;Alexander Mordvintsev;Thomas Höllt;Baldur Van Lew;Boudewijn P.F. Lelieveldt;Elmar Eisemann;Anna Vilanova",
                "AuthorAffiliation": "Google AI, Zürich, Switzerland and Delft University of Technology, Delft, The Netherlands;Delft University of Technology, Delft, The Netherlands;Google AI, Zürich, Switzerland;Delft University of Technology, Delft, The Netherlands and Leiden University Medical Center, Leiden, The Netherlands;Leiden University Medical Center, Leiden, The Netherlands;Delft University of Technology, Delft, The Netherlands and Leiden University Medical Center, Leiden, The Netherlands;Delft University of Technology, Delft, The Netherlands;Delft University of Technology, Delft, The Netherlands",
                "InternalReferences": "0.1109/tvcg.2017.2744318;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745141;10.1109/tvcg.2017.2744358;10.1109/tvcg.2014.2346574",
                "AuthorKeywords": "High Dimensional Data,Dimensionality Reduction,Progressive Visual Analytics,Approximate Computation,GPGPU",
                "AminerCitationCount": 59,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1063,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 605,
                "i": [
                    605
                ]
            }
        },
        {
            "name": "Fan Du",
            "value": 235,
            "numPapers": 58,
            "cluster": "3",
            "visible": 1,
            "index": 303,
            "x": -15.630327766857654,
            "y": -173.50992148548912,
            "vy": 0,
            "vx": 0,
            "r": 1.270581462291307,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots",
                "DOI": "10.1109/tvcg.2022.3209348",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209348",
                "FirstPage": 602,
                "LastPage": 612,
                "PaperType": "J",
                "Abstract": "Most real-world datasets contain missing values yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduces additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, which is a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/",
                "AuthorNamesDeduped": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan A. Rossi;Fan Du;Eunyee Koh;Matthew Kay 0001",
                "AuthorNames": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan Rossi;Fan Du;Eunyee Koh;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3029413;10.1109/tvcg.2011.175;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2012.279;10.1109/tvcg.2021.3114684;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864914",
                "AuthorKeywords": "Uncertainty visualisations,missing values,data imputation,multivariate data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 533,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 172,
                "i": [
                    172
                ]
            }
        },
        {
            "name": "David Gotz",
            "value": 1015,
            "numPapers": 155,
            "cluster": "1",
            "visible": 1,
            "index": 304,
            "x": 128.941485866117,
            "y": 117.5759040902427,
            "vy": 0,
            "vx": 0,
            "r": 2.1686816350028786,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Progression Analysis of Event Sequence Data",
                "DOI": "10.1109/tvcg.2018.2864885",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864885",
                "FirstPage": 417,
                "LastPage": 426,
                "PaperType": "J",
                "Abstract": "Event sequence data is common to a broad range of application domains, from security to health care to scholarly communication. This form of data captures information about the progression of events for an individual entity (e.g., a computer network device; a patient; an author) in the form of a series of time-stamped observations. Moreover, each event is associated with an event type (e.g., a computer login attempt, or a hospital discharge). Analyses of event sequence data have been shown to help reveal important temporal patterns, such as clinical paths resulting in improved outcomes, or an understanding of common career trajectories for scholars. Moreover, recent research has demonstrated a variety of techniques designed to overcome methodological challenges such as large volumes of data and high dimensionality. However, the effective identification and analysis of latent stages of progression, which can allow for variation within different but similarly evolving event sequences, remain a significant challenge with important real-world motivations. In this paper, we propose an unsupervised stage analysis algorithm to identify semantically meaningful progression stages as well as the critical events which help define those stages. The algorithm follows three key steps: (1) event representation estimation, (2) event sequence warping and alignment, and (3) sequence segmentation. We also present a novel visualization system, ET<sup>2</sup>, which interactively illustrates the results of the stage analysis algorithm to help reveal evolution patterns across stages. Finally, we report three forms of evaluation for ET<sup>2</sup>: (1) case studies with two real-world datasets, (2) interviews with domain expert users, and (3) a performance evaluation on the progression analysis algorithm and the visualization design.",
                "AuthorNamesDeduped": "Shunan Guo;Zhuochen Jin;David Gotz;Fan Du;Hongyuan Zha;Nan Cao 0001",
                "AuthorNames": "Shunan Guo;Zhuochen Jin;David Gotz;Fan Du;Hongyuan Zha;Nan Cao",
                "AuthorAffiliation": "East China Normal University;iDVX lab, Tongji University;University of North Carolina, Chapel Hill;University of Maryland;East China Normal University;iDVX lab, Tongji University",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2014.2346574;10.1109/tvcg.2009.187;10.1109/tvcg.2014.2346913",
                "AuthorKeywords": "Progression Analysis,Visual Analysis,Event Sequence Data",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 1816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 740,
                "i": [
                    740
                ]
            }
        },
        {
            "name": "John E. Wenskovitch",
            "value": 78,
            "numPapers": 59,
            "cluster": "4",
            "visible": 1,
            "index": 305,
            "x": -174.78511932678433,
            "y": 0.4025691515270116,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Towards a Systematic Combination of Dimension Reduction and Clustering in Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745258",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745258",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Dimension reduction algorithms and clustering algorithms are both frequently used techniques in visual analytics. Both families of algorithms assist analysts in performing related tasks regarding the similarity of observations and finding groups in datasets. Though initially used independently, recent works have incorporated algorithms from each family into the same visualization systems. However, these algorithmic combinations are often ad hoc or disconnected, working independently and in parallel rather than integrating some degree of interdependence. A number of design decisions must be addressed when employing dimension reduction and clustering algorithms concurrently in a visualization system, including the selection of each algorithm, the order in which they are processed, and how to present and interact with the resulting projection. This paper contributes an overview of combining dimension reduction and clustering into a visualization system, discussing the challenges inherent in developing a visualization system that makes use of both families of algorithms.",
                "AuthorNamesDeduped": "John E. Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "John Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2011.186;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/tvcg.2013.124;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346594;10.1109/vast.2009.5332629;10.1109/tvcg.2013.212;10.1109/tvcg.2009.122;10.1109/tvcg.2006.156;10.1109/vast.2011.6102449;10.1109/infvis.2005.1532126;10.1109/tvcg.2013.188;10.1109/vast.2012.6400487;10.1109/vast.2007.4388999;10.1109/tvcg.2014.2346422;10.1109/tvcg.2011.178;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Dimension reduction,clustering,algorithms,visual analytics",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 1887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 849,
                "i": [
                    849
                ]
            }
        },
        {
            "name": "Ian Crandell",
            "value": 58,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 306,
            "x": 128.81949346877605,
            "y": -118.55605468489564,
            "vy": 0,
            "vx": 0,
            "r": 1.0667818077144502,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Towards a Systematic Combination of Dimension Reduction and Clustering in Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745258",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745258",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Dimension reduction algorithms and clustering algorithms are both frequently used techniques in visual analytics. Both families of algorithms assist analysts in performing related tasks regarding the similarity of observations and finding groups in datasets. Though initially used independently, recent works have incorporated algorithms from each family into the same visualization systems. However, these algorithmic combinations are often ad hoc or disconnected, working independently and in parallel rather than integrating some degree of interdependence. A number of design decisions must be addressed when employing dimension reduction and clustering algorithms concurrently in a visualization system, including the selection of each algorithm, the order in which they are processed, and how to present and interact with the resulting projection. This paper contributes an overview of combining dimension reduction and clustering into a visualization system, discussing the challenges inherent in developing a visualization system that makes use of both families of algorithms.",
                "AuthorNamesDeduped": "John E. Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "John Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2011.186;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/tvcg.2013.124;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346594;10.1109/vast.2009.5332629;10.1109/tvcg.2013.212;10.1109/tvcg.2009.122;10.1109/tvcg.2006.156;10.1109/vast.2011.6102449;10.1109/infvis.2005.1532126;10.1109/tvcg.2013.188;10.1109/vast.2012.6400487;10.1109/vast.2007.4388999;10.1109/tvcg.2014.2346422;10.1109/tvcg.2011.178;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Dimension reduction,clustering,algorithms,visual analytics",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 1887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 849,
                "i": [
                    849
                ]
            }
        },
        {
            "name": "Naren Ramakrishnan",
            "value": 130,
            "numPapers": 64,
            "cluster": "4",
            "visible": 1,
            "index": 307,
            "x": -14.928314557165816,
            "y": 174.72019180530427,
            "vy": 0,
            "vx": 0,
            "r": 1.1496833621185953,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Towards a Systematic Combination of Dimension Reduction and Clustering in Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745258",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745258",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Dimension reduction algorithms and clustering algorithms are both frequently used techniques in visual analytics. Both families of algorithms assist analysts in performing related tasks regarding the similarity of observations and finding groups in datasets. Though initially used independently, recent works have incorporated algorithms from each family into the same visualization systems. However, these algorithmic combinations are often ad hoc or disconnected, working independently and in parallel rather than integrating some degree of interdependence. A number of design decisions must be addressed when employing dimension reduction and clustering algorithms concurrently in a visualization system, including the selection of each algorithm, the order in which they are processed, and how to present and interact with the resulting projection. This paper contributes an overview of combining dimension reduction and clustering into a visualization system, discussing the challenges inherent in developing a visualization system that makes use of both families of algorithms.",
                "AuthorNamesDeduped": "John E. Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "John Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2011.186;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/tvcg.2013.124;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346594;10.1109/vast.2009.5332629;10.1109/tvcg.2013.212;10.1109/tvcg.2009.122;10.1109/tvcg.2006.156;10.1109/vast.2011.6102449;10.1109/infvis.2005.1532126;10.1109/tvcg.2013.188;10.1109/vast.2012.6400487;10.1109/vast.2007.4388999;10.1109/tvcg.2014.2346422;10.1109/tvcg.2011.178;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Dimension reduction,clustering,algorithms,visual analytics",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 1887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 849,
                "i": [
                    849
                ]
            }
        },
        {
            "name": "Leanna House",
            "value": 264,
            "numPapers": 55,
            "cluster": "4",
            "visible": 1,
            "index": 308,
            "x": -107.18798467799874,
            "y": -139.14286162311424,
            "vy": 0,
            "vx": 0,
            "r": 1.3039723661485318,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Towards a Systematic Combination of Dimension Reduction and Clustering in Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745258",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745258",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Dimension reduction algorithms and clustering algorithms are both frequently used techniques in visual analytics. Both families of algorithms assist analysts in performing related tasks regarding the similarity of observations and finding groups in datasets. Though initially used independently, recent works have incorporated algorithms from each family into the same visualization systems. However, these algorithmic combinations are often ad hoc or disconnected, working independently and in parallel rather than integrating some degree of interdependence. A number of design decisions must be addressed when employing dimension reduction and clustering algorithms concurrently in a visualization system, including the selection of each algorithm, the order in which they are processed, and how to present and interact with the resulting projection. This paper contributes an overview of combining dimension reduction and clustering into a visualization system, discussing the challenges inherent in developing a visualization system that makes use of both families of algorithms.",
                "AuthorNamesDeduped": "John E. Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "John Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2011.186;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/tvcg.2013.124;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346594;10.1109/vast.2009.5332629;10.1109/tvcg.2013.212;10.1109/tvcg.2009.122;10.1109/tvcg.2006.156;10.1109/vast.2011.6102449;10.1109/infvis.2005.1532126;10.1109/tvcg.2013.188;10.1109/vast.2012.6400487;10.1109/vast.2007.4388999;10.1109/tvcg.2014.2346422;10.1109/tvcg.2011.178;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Dimension reduction,clustering,algorithms,visual analytics",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 1887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 849,
                "i": [
                    849
                ]
            }
        },
        {
            "name": "Scotland Leman",
            "value": 262,
            "numPapers": 45,
            "cluster": "4",
            "visible": 1,
            "index": 309,
            "x": 173.30694197261832,
            "y": 30.24407155294267,
            "vy": 0,
            "vx": 0,
            "r": 1.3016695451928613,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Towards a Systematic Combination of Dimension Reduction and Clustering in Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745258",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745258",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Dimension reduction algorithms and clustering algorithms are both frequently used techniques in visual analytics. Both families of algorithms assist analysts in performing related tasks regarding the similarity of observations and finding groups in datasets. Though initially used independently, recent works have incorporated algorithms from each family into the same visualization systems. However, these algorithmic combinations are often ad hoc or disconnected, working independently and in parallel rather than integrating some degree of interdependence. A number of design decisions must be addressed when employing dimension reduction and clustering algorithms concurrently in a visualization system, including the selection of each algorithm, the order in which they are processed, and how to present and interact with the resulting projection. This paper contributes an overview of combining dimension reduction and clustering into a visualization system, discussing the challenges inherent in developing a visualization system that makes use of both families of algorithms.",
                "AuthorNamesDeduped": "John E. Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "John Wenskovitch;Ian Crandell;Naren Ramakrishnan;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science;Virginia Tech Department of Statistics;Virginia Tech Department of Statistics;Virginia Tech Department of Computer Science",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2011.186;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/tvcg.2013.124;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346594;10.1109/vast.2009.5332629;10.1109/tvcg.2013.212;10.1109/tvcg.2009.122;10.1109/tvcg.2006.156;10.1109/vast.2011.6102449;10.1109/infvis.2005.1532126;10.1109/tvcg.2013.188;10.1109/vast.2012.6400487;10.1109/vast.2007.4388999;10.1109/tvcg.2014.2346422;10.1109/tvcg.2011.178;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Dimension reduction,clustering,algorithms,visual analytics",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 94,
                "DownloadsXplore": 1887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 849,
                "i": [
                    849
                ]
            }
        },
        {
            "name": "Chris North 0001",
            "value": 592,
            "numPapers": 133,
            "cluster": "4",
            "visible": 1,
            "index": 310,
            "x": -148.45998094563905,
            "y": 94.91909216601513,
            "vy": 0,
            "vx": 0,
            "r": 1.6816350028785263,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering",
                "DOI": "10.1109/tvcg.2012.260",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.260",
                "FirstPage": 2879,
                "LastPage": 2888,
                "PaperType": "J",
                "Abstract": "Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.",
                "AuthorNamesDeduped": "Alex Endert;Patrick Fiaux;Chris North 0001",
                "AuthorNames": "Alex Endert;Patrick Fiaux;Chris North",
                "AuthorAffiliation": "Virginia Polytechnic Institute and State University, USA;Virginia Polytechnic Institute and State University, USA;Virginia Polytechnic Institute and State University, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/vast.2012.6400559;10.1109/vast.2011.6102449;10.1109/vast.2011.6102438;10.1109/vast.2007.4389006",
                "AuthorKeywords": "User Interaction, visualization, sensemaking, analytic reasoning, visual analytics",
                "AminerCitationCount": 163,
                "CitationCountCrossRef": 99,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1333,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1489,
                "i": [
                    1489
                ]
            }
        },
        {
            "name": "Carlos D. Correa",
            "value": 417,
            "numPapers": 62,
            "cluster": "6",
            "visible": 1,
            "index": 311,
            "x": 45.425817338817644,
            "y": -170.54763299178435,
            "vy": 0,
            "vx": 0,
            "r": 1.4801381692573403,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Feature Aligned Volume Manipulation for Illustration and Visualization",
                "DOI": "10.1109/tvcg.2006.144",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.144",
                "FirstPage": 1069,
                "LastPage": 1076,
                "PaperType": "J",
                "Abstract": "In this paper we describe a GPU-based technique for creating illustrative visualization through interactive manipulation of volumetric models. It is partly inspired by medical illustrations, where it is common to depict cuts and deformation in order to provide a better understanding of anatomical and biological structures or surgical processes, and partly motivated by the need for a real-time solution that supports the specification and visualization of such illustrative manipulation. We propose two new feature aligned techniques, namely surface alignment and segment alignment, and compare them with the axis-aligned techniques which were reported in previous work on volume manipulation. We also present a mechanism for defining features using texture volumes, and methods for computing correct normals for the deformed volume in respect to different alignments. We describe a GPU-based implementation to achieve real-time performance of the techniques and a collection of manipulation operators including peelers, retractors, pliers and dilators which are adaptations of the metaphors and tools used in surgical procedures and medical illustrations. Our approach is directly applicable in medical and biological illustration, and we demonstrate how it works as an interactive tool for focus+context visualization, as well as a generic technique for volume graphics",
                "AuthorNamesDeduped": "Carlos D. Correa;Deborah Silver;Min Chen 0001",
                "AuthorNames": "Carlos Correa;Deborah Silver;Min Chen",
                "AuthorAffiliation": "Department of Electrical and Computer Engineering, State University of New Jersey, Rutgers, USA;Department of Electrical and Computer Engineering, State University of New Jersey, Rutgers, USA;Department of Computer Science, University of Wales, Swansea, UK",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/visual.2000.885694",
                "AuthorKeywords": "Illustrative visualization, Illustrative manipulation, GPU computing, volume rendering, volume deformation, computerassisted medical illustration",
                "AminerCitationCount": 125,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2280,
                "i": [
                    2280
                ]
            }
        },
        {
            "name": "Hanqi Guo 0001",
            "value": 152,
            "numPapers": 113,
            "cluster": "6",
            "visible": 1,
            "index": 312,
            "x": 81.83873372379072,
            "y": 156.6921238048884,
            "vy": 0,
            "vx": 0,
            "r": 1.1750143926309728,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Adaptively Placed Multi-Grid Scene Representation Networks for Large-Scale Data Visualization",
                "DOI": "10.1109/tvcg.2023.3327194",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327194",
                "FirstPage": 965,
                "LastPage": 974,
                "PaperType": "J",
                "Abstract": "Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring expensive octree refining, pruning, and traversal like previous adaptive models. In our domain decomposition approach for representing large-scale data, we train an set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding overhead necessary for an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for realtime neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.",
                "AuthorNamesDeduped": "Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo 0001;Tom Peterka",
                "AuthorNames": "Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo;Tom Peterka",
                "AuthorAffiliation": "The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA",
                "InternalReferences": "10.1109/tvcg.2012.274",
                "AuthorKeywords": "Scene representation network,deep learning,scientific visualization,volume rendering",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 20,
                "i": [
                    20
                ]
            }
        },
        {
            "name": "Gordon L. Kindlmann",
            "value": 733,
            "numPapers": 117,
            "cluster": "11",
            "visible": 1,
            "index": 313,
            "x": -166.45503465454382,
            "y": -60.354962001103196,
            "vy": 0,
            "vx": 0,
            "r": 1.8439838802533104,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis",
                "DOI": "10.1109/tvcg.2015.2467449",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467449",
                "FirstPage": 867,
                "LastPage": 876,
                "PaperType": "J",
                "Abstract": "Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.",
                "AuthorNamesDeduped": "Gordon L. Kindlmann;Charisee Chiw;Nicholas Seltzer;Lamont Samuels;John H. Reppy",
                "AuthorNames": "Gordon Kindlmann;Charisee Chiw;Nicholas Seltzer;Lamont Samuels;John Reppy",
                "AuthorAffiliation": "Department of Computer Science, University of Chicago;Department of Computer Science, University of Chicago;Department of Computer Science, University of Chicago;Department of Computer Science, University of Chicago;Department of Computer Science, University of Chicago",
                "InternalReferences": "0.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/visual.2005.1532856;10.1109/tvcg.2014.2346322;10.1109/tvcg.2012.240;10.1109/visual.2003.1250414;10.1109/visual.1999.809896;10.1109/tvcg.2007.70534;10.1109/tvcg.2014.2346318;10.1109/visual.1998.745290;10.1109/tvcg.2008.148;10.1109/tvcg.2008.163",
                "AuthorKeywords": "Domain specific language, portable parallel programming, scientific visualization, tensor fields",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 741,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1055,
                "i": [
                    1055
                ]
            }
        },
        {
            "name": "Ross T. Whitaker",
            "value": 419,
            "numPapers": 40,
            "cluster": "11",
            "visible": 1,
            "index": 314,
            "x": 163.76852380462486,
            "y": -68.04315256404583,
            "vy": 0,
            "vx": 0,
            "r": 1.482440990213011,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Curve Boxplot: Generalization of Boxplot for Ensembles of Curves",
                "DOI": "10.1109/tvcg.2014.2346455",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346455",
                "FirstPage": 2654,
                "LastPage": 2663,
                "PaperType": "J",
                "Abstract": "In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.",
                "AuthorNamesDeduped": "Mahsa Mirzargar;Ross T. Whitaker;Robert M. Kirby",
                "AuthorNames": "Mahsa Mirzargar;Ross T. Whitaker;Robert M. Kirby",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT;Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT;Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT",
                "InternalReferences": "0.1109/tvcg.2013.143;10.1109/visual.2002.1183769;10.1109/visual.1996.568116;10.1109/visual.1996.568105;10.1109/tvcg.2013.141;10.1109/tvcg.2010.212;10.1109/tvcg.2013.126;10.1109/tvcg.2010.181",
                "AuthorKeywords": "Uncertainty visualization, boxplots, ensemble visualization, order statistics, data depth, nonparametric statistic, functional data, parametric curves",
                "AminerCitationCount": 159,
                "CitationCountCrossRef": 125,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2337,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1209,
                "i": [
                    1209
                ]
            }
        },
        {
            "name": "Tolga Tasdizen",
            "value": 145,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 315,
            "x": -74.91414062426155,
            "y": 161.0523875468115,
            "vy": 0,
            "vx": 0,
            "r": 1.1669545192861255,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Curvature-based transfer functions for direct volume rendering: methods and applications",
                "DOI": "10.1109/visual.2003.1250414",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250414",
                "FirstPage": 513,
                "LastPage": 520,
                "PaperType": "C",
                "Abstract": "Direct volume rendering of scalar fields uses a transfer function to map locally measured data properties to opacities and colors. The domain of the transfer function is typically the one-dimensional space of scalar data values. This paper advances the use of curvature information in multi-dimensional transfer functions, with a methodology for computing high-quality curvature measurements. The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field. We give concrete guidelines for implementing the methodology, and illustrate the importance of choosing accurate filters for computing derivatives with convolution. Curvature-based transfer functions are shown to extend the expressivity and utility of volume rendering through contributions in three different application areas: nonphotorealistic volume rendering, surface smoothing via anisotropic diffusion, and visualization of isosurface uncertainty.",
                "AuthorNamesDeduped": "Gordon L. Kindlmann;Ross T. Whitaker;Tolga Tasdizen;Torsten Möller",
                "AuthorNames": "G. Kindlmann;R. Whitaker;T. Tasdizen;T. Moller",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Graphics, Usability, and Visualization (GrUVi) Laboratory, Simon Fraser University, Canada",
                "InternalReferences": "0.1109/visual.2000.885696;10.1109/visual.2002.1183766;10.1109/visual.1995.480795;10.1109/visual.2000.885694;10.1109/visual.1994.346331;10.1109/visual.2002.1183777",
                "AuthorKeywords": "volume rendering, implicit surface curvature, convolution-based differentiation, non-photorealistic rendering, surface processing, uncertainty visualization, flowline curvature",
                "AminerCitationCount": 583,
                "CitationCountCrossRef": 162,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1270,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2658,
                "i": [
                    2658
                ]
            }
        },
        {
            "name": "Charles D. Hansen",
            "value": 730,
            "numPapers": 90,
            "cluster": "6",
            "visible": 1,
            "index": 316,
            "x": -53.63476693472609,
            "y": -169.62697832555293,
            "vy": 0,
            "vx": 0,
            "r": 1.8405296488198044,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Gaussian transfer functions for multi-field volume visualization",
                "DOI": "10.1109/visual.2003.1250412",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250412",
                "FirstPage": 497,
                "LastPage": 504,
                "PaperType": "C",
                "Abstract": "Volume rendering is a flexible technique for visualizing dense 3D volumetric datasets. A central element of volume rendering is the conversion between data values and observable quantities such as color and opacity. This process is usually realized through the use of transfer functions that are precomputed and stored in lookup tables. For multidimensional transfer functions applied to multivariate data, these lookup tables become prohibitively large. We propose the direct evaluation of a particular type of transfer functions based on a sum of Gaussians. Because of their simple form (in terms of number of parameters), these functions and their analytic integrals along line segments can be evaluated efficiently on current graphics hardware, obviating the need for precomputed lookup tables. We have adopted these transfer functions because they are well suited for classification based on a unique combination of multiple data values that localize features in the transfer function domain. We apply this technique to the visualization of several multivariate datasets (CT, cryosection) that are difficult to classify and render accurately at interactive rates using traditional approaches.",
                "AuthorNamesDeduped": "Joe Kniss;Simon Premoze;Milan Ikits;Aaron E. Lefohn;Charles D. Hansen;Emil Praun",
                "AuthorNames": "J. Kniss;S. premoze;M. Ikits;A. Lefohn;C. Hansen;E. Praun",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;School of Computing, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;School of Computing, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1999.809889;10.1109/visual.2000.885683;10.1109/visual.2001.964521",
                "AuthorKeywords": "Volume Rendering, Transfer Functions, Multi-field visualization",
                "AminerCitationCount": 167,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 558,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2673,
                "i": [
                    2673
                ]
            }
        },
        {
            "name": "Minsuk Kahng",
            "value": 355,
            "numPapers": 62,
            "cluster": "1",
            "visible": 1,
            "index": 317,
            "x": 154.37328468471293,
            "y": 88.98813952236883,
            "vy": 0,
            "vx": 0,
            "r": 1.4087507196315485,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "DendroMap: Visual Exploration of Large-Scale Image Datasets for Machine Learning with Treemaps",
                "DOI": "10.1109/tvcg.2022.3209425",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209425",
                "FirstPage": 320,
                "LastPage": 330,
                "PaperType": "J",
                "Abstract": "In this paper, we present DendroMap, a novel approach to interactively exploring large-scale image datasets for machine learning (ML). ML practitioners often explore image datasets by generating a grid of images or projecting high-dimensional representations of images into 2-D using dimensionality reduction techniques (e.g., t-SNE). However, neither approach effectively scales to large datasets because images are ineffectively organized and interactions are insufficiently supported. To address these challenges, we develop DendroMap by adapting Treemaps, a well-known visualization technique. DendroMap effectively organizes images by extracting hierarchical cluster structures from high-dimensional representations of images. It enables users to make sense of the overall distributions of datasets and interactively zoom into specific areas of interests at multiple levels of abstraction. Our case studies with widely-used image datasets for deep learning demonstrate that users can discover insights about datasets and trained models by examining the diversity of images, identifying underperforming subgroups, and analyzing classification errors. We conducted a user study that evaluates the effectiveness of DendroMap in grouping and searching tasks by comparing it with a gridified version of t-SNE and found that participants preferred DendroMap. DendroMap is available at https://div-lab.github.io/dendromap/.",
                "AuthorNamesDeduped": "Donald Bertucci;Md Montaser Hamid;Yashwanthi Anand;Anita Ruangrotsakun;Delyar Tabatabai;Melissa Perez;Minsuk Kahng",
                "AuthorNames": "Donald Bertucci;Md Montaser Hamid;Yashwanthi Anand;Anita Ruangrotsakun;Delyar Tabatabai;Melissa Perez;Minsuk Kahng",
                "AuthorAffiliation": "Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast47406.2019.8986948;10.1109/tvcg.2020.3030342;10.1109/tvcg.2013.212;10.1109/tvcg.2013.162;10.1109/tvcg.2014.2346276;10.1109/tvcg.2021.3114855;10.1109/tvcg.2019.2934659;10.1109/tvcg.2017.2744718;10.1109/tvcg.2016.2598445;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2019.2934619;10.1109/vast47406.2019.8986943;10.1109/tvcg.2007.70515;10.1109/vast.2014.7042476;10.1109/tvcg.2020.3030383;10.1109/tvcg.2021.3114837",
                "AuthorKeywords": "Visualization for machine learning,image data,treemaps,visual analytics,data-centric AI,error analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 728,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 153,
                "i": [
                    153
                ]
            }
        },
        {
            "name": "Luis Gustavo Nonato",
            "value": 211,
            "numPapers": 88,
            "cluster": "11",
            "visible": 1,
            "index": 318,
            "x": -174.21438726826213,
            "y": 38.72140582086344,
            "vy": 0,
            "vx": 0,
            "r": 1.2429476108232584,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Local Affine Multidimensional Projection",
                "DOI": "10.1109/tvcg.2011.220",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.220",
                "FirstPage": 2563,
                "LastPage": 2571,
                "PaperType": "J",
                "Abstract": "Multidimensional projection techniques have experienced many improvements lately, mainly regarding computational times and accuracy. However, existing methods do not yet provide flexible enough mechanisms for visualization-oriented fully interactive applications. This work presents a new multidimensional projection technique designed to be more flexible and versatile than other methods. This novel approach, called Local Affine Multidimensional Projection (LAMP), relies on orthogonal mapping theory to build accurate local transformations that can be dynamically modified according to user knowledge. The accuracy, flexibility and computational efficiency of LAMP is confirmed by a comprehensive set of comparisons. LAMP's versatility is exploited in an application which seeks to correlate data that, in principle, has no connection as well as in visual exploration of textual documents.",
                "AuthorNamesDeduped": "Paulo Joia;Danilo Barbosa Coimbra;José Alberto Cuminato;Fernando Vieira Paulovich;Luis Gustavo Nonato",
                "AuthorNames": "Paulo Joia;Danilo Coimbra;Jose A. Cuminato;Fernando V. Paulovich;Luis G. Nonato",
                "AuthorAffiliation": "Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil",
                "InternalReferences": "0.1109/visual.1996.567787;10.1109/tvcg.2009.140;10.1109/tvcg.2007.70580;10.1109/infvis.2002.1173159;10.1109/tvcg.2010.207;10.1109/tvcg.2010.170;10.1109/infvis.2002.1173161",
                "AuthorKeywords": "Multidimensional Projection, High Dimensional Data, Visual Data Mining",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 174,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1429,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1539,
                "i": [
                    1539
                ]
            }
        },
        {
            "name": "Weikai Yang",
            "value": 37,
            "numPapers": 59,
            "cluster": "1",
            "visible": 1,
            "index": 319,
            "x": 102.46481095742575,
            "y": -146.4614710955035,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Cluster-Aware Grid Layout",
                "DOI": "10.1109/tvcg.2023.3326934",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326934",
                "FirstPage": 240,
                "LastPage": 250,
                "PaperType": "J",
                "Abstract": "Grid visualizations are widely used in many applications to visually explain a set of data and their proximity relationships. However, existing layout methods face difficulties when dealing with the inherent cluster structures within the data. To address this issue, we propose a cluster-aware grid layout method that aims to better preserve cluster structures by simultaneously considering proximity, compactness, and convexity in the optimization process. Our method utilizes a hybrid optimization strategy that consists of two phases. The global phase aims to balance proximity and compactness within each cluster, while the local phase ensures the convexity of cluster shapes. We evaluate the proposed grid layout method through a series of quantitative experiments and two use cases, demonstrating its effectiveness in preserving cluster structures and facilitating analysis tasks.",
                "AuthorNamesDeduped": "Yuxing Zhou;Weikai Yang;Jiashu Chen;Changjian Chen;Zhiyang Shen;Xiaonan Luo;Lingyun Yu 0001;Shixia Liu",
                "AuthorNames": "Yuxing Zhou;Weikai Yang;Jiashu Chen;Changjian Chen;Zhiyang Shen;Xiaonan Luo;Lingyun Yu;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Kuaishou Technology, China;School of Software, BNRist, Tsinghua University, China;Guilin University of Electronic Technology, China;Xi'an Jiaotong-Liverpool University, China;School of Software, BNRist, Tsinghua University, China",
                "InternalReferences": "10.1109/tvcg.2022.3209425;10.1109/tvcg.2019.2934280;10.1109/tvcg.2016.2598447;10.1109/tvcg.2022.3209384;10.1109/tvcg.2009.152;10.1109/tvcg.2021.3114834;10.1109/tvcg.2016.2598831;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865151;10.1109/tvcg.2016.2598542;10.1109/tvcg.2008.158;10.1109/tvcg.2021.3114841;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2016.2598796;10.1109/tvcg.2022.3209423;10.1109/tvcg.2015.2467251;10.1109/tvcg.2022.3209404;10.1109/tvcg.2020.3030410",
                "AuthorKeywords": "Grid layout,similarity,convexity,compactness,optimization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 305,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 39,
                "i": [
                    39
                ]
            }
        },
        {
            "name": "Mengchen Liu",
            "value": 928,
            "numPapers": 125,
            "cluster": "1",
            "visible": 1,
            "index": 320,
            "x": 23.415497686465947,
            "y": 177.48722339395337,
            "vy": 0,
            "vx": 0,
            "r": 2.068508923431203,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Towards Better Analysis of Deep Convolutional Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598831",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598831",
                "FirstPage": 91,
                "LastPage": 100,
                "PaperType": "J",
                "Abstract": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
                "AuthorNamesDeduped": "Mengchen Liu;Jiaxin Shi;Zhen Li 0044;Chongxuan Li;Jun Zhu 0001;Shixia Liu",
                "AuthorNames": "Mengchen Liu;Jiaxin Shi;Zhen Li;Chongxuan Li;Jun Zhu;Shixia Liu",
                "AuthorAffiliation": "School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;School of Software and TNList, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467554;10.1109/tvcg.2015.2467813;10.1109/tvcg.2010.132;10.1109/tvcg.2008.135;10.1109/tvcg.2014.2346919;10.1109/tvcg.2011.239;10.1109/visual.1991.175815;10.1109/visual.2005.1532820;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433",
                "AuthorKeywords": "Deep convolutional neural networks;rectangle packing;matrix reordering;edge bundling;biclustering",
                "AminerCitationCount": 460,
                "CitationCountCrossRef": 329,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 7766,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 955,
                "i": [
                    955
                ]
            }
        },
        {
            "name": "Jiaxin Shi",
            "value": 512,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 321,
            "x": -137.37056246925079,
            "y": -115.2359690673084,
            "vy": 0,
            "vx": 0,
            "r": 1.5895221646516984,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Towards Better Analysis of Deep Convolutional Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598831",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598831",
                "FirstPage": 91,
                "LastPage": 100,
                "PaperType": "J",
                "Abstract": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
                "AuthorNamesDeduped": "Mengchen Liu;Jiaxin Shi;Zhen Li 0044;Chongxuan Li;Jun Zhu 0001;Shixia Liu",
                "AuthorNames": "Mengchen Liu;Jiaxin Shi;Zhen Li;Chongxuan Li;Jun Zhu;Shixia Liu",
                "AuthorAffiliation": "School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;School of Software and TNList, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467554;10.1109/tvcg.2015.2467813;10.1109/tvcg.2010.132;10.1109/tvcg.2008.135;10.1109/tvcg.2014.2346919;10.1109/tvcg.2011.239;10.1109/visual.1991.175815;10.1109/visual.2005.1532820;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433",
                "AuthorKeywords": "Deep convolutional neural networks;rectangle packing;matrix reordering;edge bundling;biclustering",
                "AminerCitationCount": 460,
                "CitationCountCrossRef": 329,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 7766,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 955,
                "i": [
                    955
                ]
            }
        },
        {
            "name": "Zhen Li 0044",
            "value": 537,
            "numPapers": 53,
            "cluster": "1",
            "visible": 1,
            "index": 322,
            "x": 179.411930176343,
            "y": -7.833218393421118,
            "vy": 0,
            "vx": 0,
            "r": 1.618307426597582,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Towards Better Analysis of Deep Convolutional Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598831",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598831",
                "FirstPage": 91,
                "LastPage": 100,
                "PaperType": "J",
                "Abstract": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
                "AuthorNamesDeduped": "Mengchen Liu;Jiaxin Shi;Zhen Li 0044;Chongxuan Li;Jun Zhu 0001;Shixia Liu",
                "AuthorNames": "Mengchen Liu;Jiaxin Shi;Zhen Li;Chongxuan Li;Jun Zhu;Shixia Liu",
                "AuthorAffiliation": "School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;School of Software and TNList, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467554;10.1109/tvcg.2015.2467813;10.1109/tvcg.2010.132;10.1109/tvcg.2008.135;10.1109/tvcg.2014.2346919;10.1109/tvcg.2011.239;10.1109/visual.1991.175815;10.1109/visual.2005.1532820;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433",
                "AuthorKeywords": "Deep convolutional neural networks;rectangle packing;matrix reordering;edge bundling;biclustering",
                "AminerCitationCount": 460,
                "CitationCountCrossRef": 329,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 7766,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 955,
                "i": [
                    955
                ]
            }
        },
        {
            "name": "Jun Zhu 0001",
            "value": 661,
            "numPapers": 52,
            "cluster": "1",
            "visible": 1,
            "index": 323,
            "x": -127.19825983665531,
            "y": 127.16368465299645,
            "vy": 0,
            "vx": 0,
            "r": 1.7610823258491652,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Towards Better Analysis of Deep Convolutional Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598831",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598831",
                "FirstPage": 91,
                "LastPage": 100,
                "PaperType": "J",
                "Abstract": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
                "AuthorNamesDeduped": "Mengchen Liu;Jiaxin Shi;Zhen Li 0044;Chongxuan Li;Jun Zhu 0001;Shixia Liu",
                "AuthorNames": "Mengchen Liu;Jiaxin Shi;Zhen Li;Chongxuan Li;Jun Zhu;Shixia Liu",
                "AuthorAffiliation": "School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;School of Software and TNList, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467554;10.1109/tvcg.2015.2467813;10.1109/tvcg.2010.132;10.1109/tvcg.2008.135;10.1109/tvcg.2014.2346919;10.1109/tvcg.2011.239;10.1109/visual.1991.175815;10.1109/visual.2005.1532820;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433",
                "AuthorKeywords": "Deep convolutional neural networks;rectangle packing;matrix reordering;edge bundling;biclustering",
                "AminerCitationCount": 460,
                "CitationCountCrossRef": 329,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 7766,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 955,
                "i": [
                    955
                ]
            }
        },
        {
            "name": "Chongxuan Li",
            "value": 365,
            "numPapers": 10,
            "cluster": "1",
            "visible": 1,
            "index": 324,
            "x": 7.906395210799137,
            "y": -179.96524362990385,
            "vy": 0,
            "vx": 0,
            "r": 1.420264824409902,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Towards Better Analysis of Deep Convolutional Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598831",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598831",
                "FirstPage": 91,
                "LastPage": 100,
                "PaperType": "J",
                "Abstract": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
                "AuthorNamesDeduped": "Mengchen Liu;Jiaxin Shi;Zhen Li 0044;Chongxuan Li;Jun Zhu 0001;Shixia Liu",
                "AuthorNames": "Mengchen Liu;Jiaxin Shi;Zhen Li;Chongxuan Li;Jun Zhu;Shixia Liu",
                "AuthorAffiliation": "School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;Dept. of Comp. Sci. & Tech., CBICR Center;School of Software and TNList, Tsinghua University;School of Software and TNList, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467554;10.1109/tvcg.2015.2467813;10.1109/tvcg.2010.132;10.1109/tvcg.2008.135;10.1109/tvcg.2014.2346919;10.1109/tvcg.2011.239;10.1109/visual.1991.175815;10.1109/visual.2005.1532820;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433",
                "AuthorKeywords": "Deep convolutional neural networks;rectangle packing;matrix reordering;edge bundling;biclustering",
                "AminerCitationCount": 460,
                "CitationCountCrossRef": 329,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 7766,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 955,
                "i": [
                    955
                ]
            }
        },
        {
            "name": "Jing Wu 0004",
            "value": 168,
            "numPapers": 49,
            "cluster": "1",
            "visible": 1,
            "index": 325,
            "x": 115.91303639408275,
            "y": 138.25399811182336,
            "vy": 0,
            "vx": 0,
            "r": 1.1934369602763386,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive Visual Cluster Analysis by Contrastive Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2022.3209423",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209423",
                "FirstPage": 734,
                "LastPage": 744,
                "PaperType": "J",
                "Abstract": "We propose a contrastive dimensionality reduction approach (CDR) for interactive visual cluster analysis. Although dimensionality reduction of high-dimensional data is widely used in visual cluster analysis in conjunction with scatterplots, there are several limitations on effective visual cluster analysis. First, it is non-trivial for an embedding to present clear visual cluster separation when keeping neighborhood structures. Second, as cluster analysis is a subjective task, user steering is required. However, it is also non-trivial to enable interactions in dimensionality reduction. To tackle these problems, we introduce contrastive learning into dimensionality reduction for high-quality embedding. We then redefine the gradient of the loss function to the negative pairs to enhance the visual cluster separation of embedding results. Based on the contrastive learning scheme, we employ link-based interactions to steer embeddings. After that, we implement a prototype visual interface that integrates the proposed algorithms and a set of visualizations. Quantitative experiments demonstrate that CDR outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation. The ablation experiment demonstrates the effectiveness of gradient redefinition. The user study verifies that CDR outperforms t-SNE and UMAP in the task of cluster identification. We also showcase two use cases on real-world datasets to present the effectiveness of link-based interactions.",
                "AuthorNamesDeduped": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu 0004;Yang Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu;Yang Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;Cardiff University, UK;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2018.2864477;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467615;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2010.138;10.1109/tvcg.2012.207;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2745258;10.1109/vast50239.2020.00015;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Dimensionality reduction,visual cluster analysis,contrastive learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1384,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 141,
                "i": [
                    141
                ]
            }
        },
        {
            "name": "Yuxin Ma",
            "value": 162,
            "numPapers": 119,
            "cluster": "1",
            "visible": 1,
            "index": 326,
            "x": -179.1344351620878,
            "y": -23.68235923973291,
            "vy": 0,
            "vx": 0,
            "r": 1.1865284974093264,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics",
                "DOI": "10.1109/tvcg.2019.2934631",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934631",
                "FirstPage": 1075,
                "LastPage": 1085,
                "PaperType": "J",
                "Abstract": "Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.",
                "AuthorNamesDeduped": "Yuxin Ma;Tiankai Xie;Jundong Li;Ross Maciejewski",
                "AuthorNames": "Yuxin Ma;Tiankai Xie;Jundong Li;Ross Maciejewski",
                "AuthorAffiliation": "School of Computing, Informatics & Decision Systems Engineering, Arizona State University;School of Computing, Informatics & Decision Systems Engineering, Arizona State University;Department of Electrical and Computer Engineering, University of Virginia;School of Computing, Informatics & Decision Systems Engineering, Arizona State University",
                "InternalReferences": "0.1109/tvcg.2014.2346660;10.1109/tvcg.2014.2346594;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/vast.2017.8585720;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2014.2346578;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2014.2346574;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/vast.2011.6102453;10.1109/tvcg.2018.2864504;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Adversarial machine learning,data poisoning,visual analytics",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 2113,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 606,
                "i": [
                    606
                ]
            }
        },
        {
            "name": "Josua Krause",
            "value": 197,
            "numPapers": 25,
            "cluster": "1",
            "visible": 1,
            "index": 327,
            "x": 148.31196488514695,
            "y": -103.69937835834384,
            "vy": 0,
            "vx": 0,
            "r": 1.2268278641335637,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "INFUSE: Interactive Feature Selection for Predictive Modeling of High Dimensional Data",
                "DOI": "10.1109/tvcg.2014.2346482",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346482",
                "FirstPage": 1614,
                "LastPage": 1623,
                "PaperType": "J",
                "Abstract": "Predictive modeling techniques are increasingly being used by data scientists to understand the probability of predicted outcomes. However, for data that is high-dimensional, a critical step in predictive modeling is determining which features should be included in the models. Feature selection algorithms are often used to remove non-informative features from models. However, there are many different classes of feature selection algorithms. Deciding which one to use is problematic as the algorithmic output is often not amenable to user interpretation. This limits the ability for users to utilize their domain expertise during the modeling process. To improve on this limitation, we developed INFUSE, a novel visual analytics system designed to help analysts understand how predictive features are being ranked across feature selection algorithms, cross-validation folds, and classifiers. We demonstrate how our system can lead to important insights in a case study involving clinical researchers predicting patient outcomes from electronic medical records.",
                "AuthorNamesDeduped": "Josua Krause;Adam Perer;Enrico Bertini",
                "AuthorNames": "Josua Krause;Adam Perer;Enrico Bertini",
                "AuthorAffiliation": "NYU Polytechnic School of Engineering;IBM T.J. Watson Research Center;NYU Polytechnic School of Engineering",
                "InternalReferences": "0.1109/infvis.2004.71;10.1109/vast.2009.5332586;10.1109/infvis.2005.1532142;10.1109/tvcg.2011.229;10.1109/vast.2011.6102448;10.1109/infvis.2003.1249015;10.1109/tvcg.2011.178;10.1109/vast.2011.6102453;10.1109/tvcg.2013.125;10.1109/tvcg.2009.153;10.1109/vast.2010.5652443",
                "AuthorKeywords": "Predictive modeling, feature selection, classification, visual analytics, high-dimensional data",
                "AminerCitationCount": 190,
                "CitationCountCrossRef": 123,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 2583,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1247,
                "i": [
                    1247
                ]
            }
        },
        {
            "name": "Yao Ming",
            "value": 310,
            "numPapers": 47,
            "cluster": "1",
            "visible": 1,
            "index": 328,
            "x": -39.37267719915535,
            "y": 176.91747310588377,
            "vy": 0,
            "vx": 0,
            "r": 1.356937248128958,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models",
                "DOI": "10.1109/tvcg.2020.3030342",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030342",
                "FirstPage": 1438,
                "LastPage": 1447,
                "PaperType": "J",
                "Abstract": "With machine learning models being increasingly applied to various decision-making scenarios, people have spent growing efforts to make machine learning models more transparent and explainable. Among various explanation techniques, counterfactual explanations have the advantages of being human-friendly and actionable-a counterfactual explanation tells the user how to gain the desired prediction with minimal changes to the input. Besides, counterfactual explanations can also serve as efficient probes to the models' decisions. In this work, we exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models. We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets, supporting users ranging from decision-subjects to model developers. DECE supports exploratory analysis of model decisions by combining the strengths of counterfactual explanations at instance- and subgroup-levels. We also introduce a set of interactions that enable users to customize the generation of counterfactual explanations to find more actionable ones that can suit their needs. Through three use cases and an expert interview, we demonstrate the effectiveness of DECE in supporting decision exploration tasks and instance explanations.",
                "AuthorNamesDeduped": "Furui Cheng;Yao Ming;Huamin Qu",
                "AuthorNames": "Furui Cheng;Yao Ming;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Hong Kong University of Science and Technology and Bloomberg L.P.;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2017.2744683;10.1109/tvcg.2019.2934659;10.1109/tvcg.2017.2744718;10.1109/vast.2017.8585720;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2019.2934629;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2019.2934619;10.1109/tvcg.2018.2864812",
                "AuthorKeywords": "Tabular Data,Explainable Machine Learning,Counterfactual Explanation,Decision Making",
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 2113,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 462,
                "i": [
                    462
                ]
            }
        },
        {
            "name": "Thomas Mühlbacher",
            "value": 225,
            "numPapers": 52,
            "cluster": "6",
            "visible": 1,
            "index": 329,
            "x": -90.61145204693482,
            "y": -157.28815835257916,
            "vy": 0,
            "vx": 0,
            "r": 1.2590673575129534,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees",
                "DOI": "10.1109/tvcg.2017.2745158",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745158",
                "FirstPage": 174,
                "LastPage": 183,
                "PaperType": "J",
                "Abstract": "Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.",
                "AuthorNamesDeduped": "Thomas Mühlbacher;Lorenz Linhardt;Torsten Möller;Harald Piringer",
                "AuthorNames": "Thomas Mühlbacher;Lorenz Linhardt;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;ETH Zurich;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/vast.2011.6102457;10.1109/tvcg.2010.190;10.1109/tvcg.2008.145;10.1109/tvcg.2014.2346578;10.1109/tvcg.2016.2598589;10.1109/tvcg.2009.110;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.130;10.1109/tvcg.2011.248;10.1109/vast.2011.6102453",
                "AuthorKeywords": "Model selection,classification trees,visual parameter search,sensitivity analysis,Pareto optimality",
                "AminerCitationCount": 50,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 1068,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 863,
                "i": [
                    863
                ]
            }
        },
        {
            "name": "Eugene Zhang",
            "value": 205,
            "numPapers": 93,
            "cluster": "11",
            "visible": 1,
            "index": 330,
            "x": 173.3231012193044,
            "y": 54.855287655090876,
            "vy": 0,
            "vx": 0,
            "r": 1.2360391479562465,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Scalable Hypergraph Visualization",
                "DOI": "10.1109/tvcg.2023.3326599",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326599",
                "FirstPage": 595,
                "LastPage": 605,
                "PaperType": "J",
                "Abstract": "Hypergraph visualization has many applications in network data analysis. Recently, a polygon-based representation for hypergraphs has been proposed with demonstrated benefits. However, the polygon-based layout often suffers from excessive self-intersections when the input dataset is relatively large. In this paper, we propose a framework in which the hypergraph is iteratively simplified through a set of atomic operations. Then, the layout of the simplest hypergraph is optimized and used as the foundation for a reverse process that brings the simplest hypergraph back to the original one, but with an improved layout. At the core of our approach is the set of atomic simplification operations and an operation priority measure to guide the simplification process. In addition, we introduce necessary definitions and conditions for hypergraph planarity within the polygon representation. We extend our approach to handle simultaneous simplification and layout optimization for both the hypergraph and its dual. We demonstrate the utility of our approach with datasets from a number of real-world applications.",
                "AuthorNamesDeduped": "Peter Oliver;Eugene Zhang;Yue Zhang 0009",
                "AuthorNames": "Peter Oliver;Eugene Zhang;Yue Zhang",
                "AuthorAffiliation": "School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA",
                "InternalReferences": "10.1109/tvcg.2013.184;10.1109/tvcg.2012.252;10.1109/tvcg.2020.3030475;10.1109/tvcg.2014.2346248;10.1109/tvcg.2021.3114759;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467992;10.1109/vast.2007.4389006",
                "AuthorKeywords": "Hypergraph visualization,scalable visualization,polygon layout,hypergraph embedding,primal-dual visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 243,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 44,
                "i": [
                    44
                ]
            }
        },
        {
            "name": "Yue Zhang 0009",
            "value": 47,
            "numPapers": 65,
            "cluster": "11",
            "visible": 1,
            "index": 331,
            "x": -165.1064925804238,
            "y": 76.7453328078683,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Scalable Hypergraph Visualization",
                "DOI": "10.1109/tvcg.2023.3326599",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326599",
                "FirstPage": 595,
                "LastPage": 605,
                "PaperType": "J",
                "Abstract": "Hypergraph visualization has many applications in network data analysis. Recently, a polygon-based representation for hypergraphs has been proposed with demonstrated benefits. However, the polygon-based layout often suffers from excessive self-intersections when the input dataset is relatively large. In this paper, we propose a framework in which the hypergraph is iteratively simplified through a set of atomic operations. Then, the layout of the simplest hypergraph is optimized and used as the foundation for a reverse process that brings the simplest hypergraph back to the original one, but with an improved layout. At the core of our approach is the set of atomic simplification operations and an operation priority measure to guide the simplification process. In addition, we introduce necessary definitions and conditions for hypergraph planarity within the polygon representation. We extend our approach to handle simultaneous simplification and layout optimization for both the hypergraph and its dual. We demonstrate the utility of our approach with datasets from a number of real-world applications.",
                "AuthorNamesDeduped": "Peter Oliver;Eugene Zhang;Yue Zhang 0009",
                "AuthorNames": "Peter Oliver;Eugene Zhang;Yue Zhang",
                "AuthorAffiliation": "School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA",
                "InternalReferences": "10.1109/tvcg.2013.184;10.1109/tvcg.2012.252;10.1109/tvcg.2020.3030475;10.1109/tvcg.2014.2346248;10.1109/tvcg.2021.3114759;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467992;10.1109/vast.2007.4389006",
                "AuthorKeywords": "Hypergraph visualization,scalable visualization,polygon layout,hypergraph embedding,primal-dual visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 243,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 44,
                "i": [
                    44
                ]
            }
        },
        {
            "name": "Botong Qu",
            "value": 24,
            "numPapers": 30,
            "cluster": "11",
            "visible": 1,
            "index": 332,
            "x": 70.0090177489902,
            "y": -168.3708330852514,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Automatic Polygon Layout for Primal-Dual Visualization of Hypergraphs",
                "DOI": "10.1109/tvcg.2021.3114759",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114759",
                "FirstPage": 633,
                "LastPage": 642,
                "PaperType": "J",
                "Abstract": "N-ary relationships, which relate $N$ entities where $N$ is not necessarily two, can be visually represented as polygons whose vertices are the entities of the relationships. Manually generating a high-quality layout using this representation is labor-intensive. In this paper, we provide an automatic polygon layout generation algorithm for the visualization of N-ary relationships. At the core of our algorithm is a set of objective functions motivated by a number of design principles that we have identified. These objective functions are then used in an optimization framework that we develop to achieve high-quality layouts. Recognizing the duality between entities and relationships in the data, we provide a second visualization in which the roles of entities and relationships in the original data are reversed. This can lead to additional insight about the data. Furthermore, we enhance our framework for a joint optimization on the primal layout (original data) and the dual layout (where the roles of entities and relationships are reversed). This allows users to inspect their data using two complementary views. We apply our visualization approach to a number of datasets that include co-authorship data and social contact pattern data.",
                "AuthorNamesDeduped": "Botong Qu;Eugene Zhang;Yue Zhang 0009",
                "AuthorNames": "Botong Qu;Eugene Zhang;Yue Zhang",
                "AuthorAffiliation": "School of Electrical Engineering and Computer Science, Oregon State University, United States;School of Electrical Engineering and Computer Science, Oregon State University, United States;School of Electrical Engineering and Computer Science, Oregon State University, United States",
                "InternalReferences": "0.1109/tvcg.2013.184;10.1109/tvcg.2012.252;10.1109/tvcg.2020.3030475;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249;10.1109/vast.2007.4389006;10.1109/tvcg.2015.2467813;10.1109/infvis.1999.801860;10.1109/tvcg.2013.232;10.1109/tvcg.2017.2744458",
                "AuthorKeywords": "Hypergraph visualization,N-ary relationships,optimization,polygon layout,duality,primal-dual visualization",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 570,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 344,
                "i": [
                    344
                ]
            }
        },
        {
            "name": "Eli Holder",
            "value": 9,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 333,
            "x": 62.20372168020766,
            "y": 171.6994379988801,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Polarizing Political Polls: How Visualization Design Choices Can Shape Public Opinion and Increase Political Polarization",
                "DOI": "10.1109/tvcg.2023.3326512",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326512",
                "FirstPage": 1446,
                "LastPage": 1456,
                "PaperType": "J",
                "Abstract": "While we typically focus on data visualization as a tool for facilitating cognitive tasks (e.g. learning facts, making decisions), we know relatively little about their second-order impacts on our opinions, attitudes, and values. For example, could design or framing choices interact with viewers' social cognitive biases in ways that promote political polarization? When reporting on U.S. attitudes toward public policies, it is popular to highlight the gap between Democrats and Republicans (e.g. with blue vs red connected dot plots). But these charts may encourage social-normative conformity, influencing viewers' attitudes to match the divided opinions shown in the visualization. We conducted three experiments examining visualization framing in the context of social conformity and polarization. Crowdworkers viewed charts showing simulated polling results for public policy proposals. We varied framing (aggregating data as non-partisan “All US Adults,” or partisan “Democrat” / “Republican”) and the visualized groups' support levels. Participants then reported their own support for each policy. We found that participants' attitudes biased significantly toward the group attitudes shown in the stimuli and this can increase inter-party attitude divergence. These results demonstrate that data visualizations can induce social conformity and accelerate political polarization. Choosing to visualize partisan divisions can divide us further.",
                "AuthorNamesDeduped": "Eli Holder;Cindy Xiong Bearfield",
                "AuthorNames": "Eli Holder;Cindy Xiong Bearfield",
                "AuthorAffiliation": "3iap, USA;Georgia Tech, Georgia",
                "InternalReferences": "10.1109/tvcg.2015.2467732;10.1109/tvcg.2014.2346298;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209377;10.1109/tvcg.2011.255;10.1109/tvcg.2020.3030335;10.1109/tvcg.2020.3029412;10.1109/tvcg.2017.2745240;10.1109/tvcg.2022.3209500;10.1109/tvcg.2010.179;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209405",
                "AuthorKeywords": "Political Polarization,Public Opinion,Social Categorization,Survey Data,Social Influence,Attitude Change",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 88,
                "DownloadsXplore": 237,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 45,
                "i": [
                    45
                ]
            }
        },
        {
            "name": "Cindy Xiong",
            "value": 163,
            "numPapers": 85,
            "cluster": "5",
            "visible": 1,
            "index": 334,
            "x": -162.0908620036514,
            "y": -84.71453508644925,
            "vy": 0,
            "vx": 0,
            "r": 1.1876799078871618,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Comparison Conundrum and the Chamber of Visualizations: An Exploration of How Language Influences Visual Design",
                "DOI": "10.1109/tvcg.2022.3209456",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209456",
                "FirstPage": 1211,
                "LastPage": 1221,
                "PaperType": "J",
                "Abstract": "The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly). We conducted a user study with sixteen data visualization experts and non-experts to investigate how they designed visualizations for comparisons in our design space. Based on the rich set of visualization techniques observed, we extracted key design features from the visualizations and synthesized them into a subset of sixteen representative visualization designs. We then conducted a follow-up study to validate user preferences for the sixteen representative visualizations corresponding to utterances in our design space. Findings from these studies suggest guidelines and future directions for designing natural language interfaces and recommendation tools to better support natural language comparisons in visual analytics.",
                "AuthorNamesDeduped": "Aimen Gaba;Vidya Setlur;Arjun Srinivasan;Jane Hoffswell;Cindy Xiong",
                "AuthorNames": "Aimen Gaba;Vidya Setlur;Arjun Srinivasan;Jane Hoffswell;Cindy Xiong",
                "AuthorAffiliation": "UMass Amherst, USA;Tableau Research, USA;Tableau Research, USA;Adobe Research, USA;UMass Amherst, USA",
                "InternalReferences": "0.1109/tvcg.2017.2744199;10.1109/tvcg.2013.183;10.1109/tvcg.2007.70556;10.1109/tvcg.2019.2934786;10.1109/tvcg.2011.194;10.1109/tvcg.2019.2934801;10.1109/tvcg.2016.2599030;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2021.3114814;10.1109/tvcg.2016.2598920",
                "AuthorKeywords": "Comparative constructions,cardinality,explicit and implicit comparisons,natural language,intent,visual analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 408,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 183,
                "i": [
                    183
                ]
            }
        },
        {
            "name": "Daniele Chiappalupi",
            "value": 8,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 335,
            "x": 177.00859836979384,
            "y": -47.09518131572503,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "The Quest for Omnioculars: Embedded Visualization for Augmenting Basketball Game Viewing Experiences",
                "DOI": "10.1109/tvcg.2022.3209353",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209353",
                "FirstPage": 962,
                "LastPage": 971,
                "PaperType": "J",
                "Abstract": "Sports game data is becoming increasingly complex, often consisting of multivariate data such as player performance stats, historical team records, and athletes' positional tracking information. While numerous visual analytics systems have been developed for sports analysts to derive insights, few tools target fans to improve their understanding and engagement of sports data during live games. By presenting extra data in the actual game views, embedded visualization has the potential to enhance fans' game-viewing experience. However, little is known about how to design such kinds of visualizations embedded into live games. In this work, we present a user-centered design study of developing interactive embedded visualizations for basketball fans to improve their live game-watching experiences. We first conducted a formative study to characterize basketball fans' in-game analysis behaviors and tasks. Based on our findings, we propose a design framework to inform the design of embedded visualizations based on specific data-seeking contexts. Following the design framework, we present five novel embedded visualization designs targeting five representative contexts identified by the fans, including shooting, offense, defense, player evaluation, and team comparison. We then developed Omnioculars, an interactive basketball game-viewing prototype that features the proposed embedded visualizations for fans' in-game data analysis. We evaluated Omnioculars in a simulated basketball game with basketball fans. The study results suggest that our design supports personalized in-game data analysis and enhances game understanding and engagement.",
                "AuthorNamesDeduped": "Tica Lin;Zhutian Chen;Yalong Yang 0001;Daniele Chiappalupi;Johanna Beyer;Hanspeter Pfister",
                "AuthorNames": "Tica Lin;Zhutian Chen;Yalong Yang;Daniele Chiappalupi;Johanna Beyer;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;Department of Computer Science, Virginia Tech, Blacksburg, VA, USA;Department of Computer Science, ETH Zürich, Switzerland;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2013.192;10.1109/tvcg.2017.2745181;10.1109/tvcg.2016.2598608;10.1109/tvcg.2020.3030392",
                "AuthorKeywords": "Sports Analytics,Embedded Visualization,Data Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 989,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 142,
                "i": [
                    142
                ]
            }
        },
        {
            "name": "Steffen Koch 0001",
            "value": 428,
            "numPapers": 146,
            "cluster": "1",
            "visible": 1,
            "index": 336,
            "x": -98.85529050070939,
            "y": 154.5238866325215,
            "vy": 0,
            "vx": 0,
            "r": 1.492803684513529,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Hariharan Subramonyam",
            "value": 8,
            "numPapers": 69,
            "cluster": "1",
            "visible": 1,
            "index": 337,
            "x": -31.533321691185805,
            "y": -180.985219349869,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research",
                "DOI": "10.1109/tvcg.2023.3326591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326591",
                "FirstPage": 672,
                "LastPage": 682,
                "PaperType": "J",
                "Abstract": "Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge to support human-in-the-loop tasks, design interactive visualizations to make ML components interpretable and elicit knowledge, and evaluate the effectiveness of human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers motivate that VIS4ML systems are applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches to close the gap between aspirations and research claims and suggest documentation practices to report generality constraints that better acknowledge the exploratory nature of VIS4ML research.",
                "AuthorNamesDeduped": "Hariharan Subramonyam;Jessica Hullman",
                "AuthorNames": "Hariharan Subramonyam;Jessica Hullman",
                "AuthorAffiliation": "Stanford University, USA;Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2017.2744683;10.1109/tvcg.2019.2934261;10.1109/tvcg.2020.3030342;10.1109/tvcg.2019.2934654;10.1109/tvcg.2018.2864769;10.1109/vast.2017.8585498;10.1109/tvcg.2019.2934659;10.1109/tvcg.2022.3209384;10.1109/tvcg.2013.126;10.1109/tvcg.2021.3114793;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2019.2934267;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114858;10.1109/tvcg.2017.2744358;10.1109/tvcg.2018.2864838;10.1109/tvcg.2014.2346481;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2022.3209361;10.1109/vast.2011.6102453;10.1109/tvcg.2018.2864504;10.1109/tvcg.2022.3209347;10.1109/tvcg.2020.3030418;10.1109/tvcg.2019.2934619;10.1109/tvcg.2017.2744878;10.1109/vast47406.2019.8986943;10.1109/tvcg.2022.3209465;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "VIS4ML,Visualization,Machine learning,Human-in-the-loop,Human Knowledge,Generalizability,Survey",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 227,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 47,
                "i": [
                    47
                ]
            }
        },
        {
            "name": "Klaus Mueller 0001",
            "value": 679,
            "numPapers": 166,
            "cluster": "6",
            "visible": 1,
            "index": 338,
            "x": 145.72085330081666,
            "y": 112.31844422570084,
            "vy": 0,
            "vx": 0,
            "r": 1.7818077144502014,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias",
                "DOI": "10.1109/tvcg.2022.3209484",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209484",
                "FirstPage": 473,
                "LastPage": 482,
                "PaperType": "J",
                "Abstract": "With the rise of AI, algorithms have become better at learning underlying patterns from the training data including ingrained social biases based on gender, race, etc. Deployment of such algorithms to domains such as hiring, healthcare, law enforcement, etc. has raised serious concerns about fairness, accountability, trust and interpretability in machine learning algorithms. To alleviate this problem, we propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases from tabular datasets. It uses a graphical causal model to represent causal relationships among different features in the dataset and as a medium to inject domain knowledge. A user can detect the presence of bias against a group, say females, or a subgroup, say black females, by identifying unfair causal relationships in the causal network and using an array of fairness metrics. Thereafter, the user can mitigate bias by refining the causal model and acting on the unfair causal edges. For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset based on the current causal model while ensuring a minimal change from the original dataset. Users can visually assess the impact of their interactions on different fairness metrics, utility metrics, data distortion, and the underlying data distribution. Once satisfied, they can download the debiased dataset and use it for any downstream application for fairer predictions. We evaluate D-BIAS by conducting experiments on 3 datasets and also a formal user study. We found that D-BIAS helps reduce bias significantly compared to the baseline debiasing approach across different fairness metrics while incurring little data distortion and a small loss in utility. Moreover, our human-in-the-loop based approach significantly outperforms an automated approach on trust, interpretability and accountability.",
                "AuthorNamesDeduped": "Bhavya Ghai;Klaus Mueller 0001",
                "AuthorNames": "Bhavya Ghai;Klaus Mueller",
                "AuthorAffiliation": "Computer Science department, Stony Brook University, USA;Computer Science department, Stony Brook University, USA",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/vast.2017.8585647;10.1109/tvcg.2021.3114850;10.1109/tvcg.2020.3028957;10.1109/vast47406.2019.8986948",
                "AuthorKeywords": "Algorithmic Fairness,Causality,Debiasing,Human-in-the-loop,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 901,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 144,
                "i": [
                    144
                ]
            }
        },
        {
            "name": "Duen Horng Chau",
            "value": 99,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 339,
            "x": -183.5906242857434,
            "y": 15.635941748772268,
            "vy": 0,
            "vx": 0,
            "r": 1.1139896373056994,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning",
                "DOI": "10.1109/vast47406.2019.8986948",
                "Link": "http://dx.doi.org/10.1109/VAST47406.2019.8986948",
                "FirstPage": 46,
                "LastPage": 56,
                "PaperType": "C",
                "Abstract": "The growing capability and accessibility of machine learning has led to its application to many real-world domains and data about people. Despite the benefits algorithmic systems may bring, models can reflect, inject, or exacerbate implicit and explicit societal biases into their outputs, disadvantaging certain demographic subgroups. Discovering which biases a machine learning model has introduced is a great challenge, due to the numerous definitions of fairness and the large number of potentially impacted subgroups. We present FAIRVIS, a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models. Through FAIRVIS, users can apply domain knowledge to generate and investigate known subgroups, and explore suggested and similar subgroups. FAIRVIS's coordinated views enable users to explore a high-level overview of subgroup performance and subsequently drill down into detailed investigation of specific subgroups. We show how FAIRVIS helps to discover biases in two real datasets used in predicting income and recidivism. As a visual analytics system devoted to discovering bias in machine learning, FAIRVIS demonstrates how interactive visualization may help data scientists and the general public understand and create more equitable algorithmic systems.",
                "AuthorNamesDeduped": "Ángel Alexander Cabrera;Will Epperson;Fred Hohman;Minsuk Kahng;Jamie Morgenstern;Duen Horng Chau",
                "AuthorNames": "Ángel Alexander Cabrera;Will Epperson;Fred Hohman;Minsuk Kahng;Jamie Morgenstern;Duen Horng Chau",
                "AuthorAffiliation": "Georgia Institute of Technology;Georgia Institute of Technology;Georgia Institute of Technology;Georgia Institute of Technology;Georgia Institute of Technology;Georgia Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2017.2744718;10.1109/vast.2017.8585720;10.1109/tvcg.2016.2598828;10.1109/tvcg.2018.2865044",
                "AuthorKeywords": "Machine learning fairness,visual analytics,intersectional bias,subgroup discovery",
                "AminerCitationCount": 107,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 1835,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 598,
                "i": [
                    598
                ]
            }
        },
        {
            "name": "Arnaud Prouzeau",
            "value": 24,
            "numPapers": 18,
            "cluster": "2",
            "visible": 1,
            "index": 340,
            "x": 124.99576810714456,
            "y": -135.74261657749545,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics",
                "DOI": "10.1109/tvcg.2020.3030334",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030334",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative’ scenarios will require engaging features to draw users' attention, with intuitive, 'walk-up and use’ interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.",
                "AuthorNamesDeduped": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorNames": "Barrett Ens;Sarah Goodwin;Arnaud Prouzeau;Fraser Anderson;Florence Y. Wang;Samuel Gratzl;Zac Lucarelli;Brendan Moyle;Jim Smiley;Tim Dwyer",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2599107;10.1109/vast.2007.4389011;10.1109/vast.2010.5652880;10.1109/tvcg.2018.2865241;10.1109/vast.2007.4389006;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934538;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Data visualisation,tangible and embedded interaction,augmented reality,immersive analytics",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1968,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 460,
                "i": [
                    460
                ]
            }
        },
        {
            "name": "Yvonne Jansen",
            "value": 240,
            "numPapers": 44,
            "cluster": "5",
            "visible": 1,
            "index": 341,
            "x": -0.47586651863664853,
            "y": 184.7965734288827,
            "vy": 0,
            "vx": 0,
            "r": 1.2763385146804835,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Embedded Data Representations",
                "DOI": "10.1109/tvcg.2016.2598608",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598608",
                "FirstPage": 461,
                "LastPage": 470,
                "PaperType": "J",
                "Abstract": "We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easier to display data in-context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the tradeoffs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.",
                "AuthorNamesDeduped": "Wesley Willett;Yvonne Jansen;Pierre Dragicevic",
                "AuthorNames": "Wesley Willett;Yvonne Jansen;Pierre Dragicevic",
                "AuthorAffiliation": "University of Calgary;University of Copenhagen;Inria",
                "InternalReferences": "0.1109/tvcg.2013.134;10.1109/infvis.1998.729560",
                "AuthorKeywords": "augmented reality;Information visualization;data physicalization;ambient displays;ubiquitous computing",
                "AminerCitationCount": 192,
                "CitationCountCrossRef": 160,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 3740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 889,
                "i": [
                    889
                ]
            }
        },
        {
            "name": "Chen Chen 0080",
            "value": 0,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 342,
            "x": -124.65952033407589,
            "y": -136.78451663137213,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Mystique: Deconstructing SVG Charts for Layout Reuse",
                "DOI": "10.1109/tvcg.2023.3327354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327354",
                "FirstPage": 447,
                "LastPage": 457,
                "PaperType": "J",
                "Abstract": "To facilitate the reuse of existing charts, previous research has examined how to obtain a semantic understanding of a chart by deconstructing its visual representation into reusable components, such as encodings. However, existing deconstruction approaches primarily focus on chart styles, handling only basic layouts. In this paper, we investigate how to deconstruct chart layouts, focusing on rectangle-based ones, as they cover not only 17 chart types but also advanced layouts (e.g., small multiples, nested layouts). We develop an interactive tool, called Mystique, adopting a mixed-initiative approach to extract the axes and legend, and deconstruct a chart's layout into four semantic components: mark groups, spatial relationships, data encodings, and graphical constraints. Mystique employs a wizard interface that guides chart authors through a series of steps to specify how the deconstructed components map to their own data. On 150 rectangle-based SVG charts, Mystique achieves above 85% accuracy for axis and legend extraction and 96% accuracy for layout deconstruction. In a chart reproduction study, participants could easily reuse existing charts on new datasets. We discuss the current limitations of Mystique and future research directions.",
                "AuthorNamesDeduped": "Chen Chen 0080;Bongshin Lee;Yunhai Wang;Yunjeong Chang;Zhicheng Liu 0001",
                "AuthorNames": "Chen Chen;Bongshin Lee;Yunhai Wang;Yunjeong Chang;Zhicheng Liu",
                "AuthorAffiliation": "University of Maryland, College Park, Maryland, United States;Microsoft Research, Redmond, Washington, United States;Shandong University, Qingdao, China;University of Maryland, College Park, Maryland, United States;University of Maryland, College Park, Maryland, United States",
                "InternalReferences": "10.1109/tvcg.2022.3209490;10.1109/tvcg.2011.185;10.1109/tvcg.2019.2934810;10.1109/tvcg.2021.3114856;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/infvis.2001.963283;10.1109/tvcg.2019.2934538;10.1109/tvcg.2008.165;10.1109/tvcg.2021.3114877",
                "AuthorKeywords": "Chart layout,Reuse,Reverse-engineering,Deconstruction",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 183,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 51,
                "i": [
                    51
                ]
            }
        },
        {
            "name": "Yunjeong Chang",
            "value": 0,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 343,
            "x": 184.585543343005,
            "y": 16.67864469216869,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Mystique: Deconstructing SVG Charts for Layout Reuse",
                "DOI": "10.1109/tvcg.2023.3327354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327354",
                "FirstPage": 447,
                "LastPage": 457,
                "PaperType": "J",
                "Abstract": "To facilitate the reuse of existing charts, previous research has examined how to obtain a semantic understanding of a chart by deconstructing its visual representation into reusable components, such as encodings. However, existing deconstruction approaches primarily focus on chart styles, handling only basic layouts. In this paper, we investigate how to deconstruct chart layouts, focusing on rectangle-based ones, as they cover not only 17 chart types but also advanced layouts (e.g., small multiples, nested layouts). We develop an interactive tool, called Mystique, adopting a mixed-initiative approach to extract the axes and legend, and deconstruct a chart's layout into four semantic components: mark groups, spatial relationships, data encodings, and graphical constraints. Mystique employs a wizard interface that guides chart authors through a series of steps to specify how the deconstructed components map to their own data. On 150 rectangle-based SVG charts, Mystique achieves above 85% accuracy for axis and legend extraction and 96% accuracy for layout deconstruction. In a chart reproduction study, participants could easily reuse existing charts on new datasets. We discuss the current limitations of Mystique and future research directions.",
                "AuthorNamesDeduped": "Chen Chen 0080;Bongshin Lee;Yunhai Wang;Yunjeong Chang;Zhicheng Liu 0001",
                "AuthorNames": "Chen Chen;Bongshin Lee;Yunhai Wang;Yunjeong Chang;Zhicheng Liu",
                "AuthorAffiliation": "University of Maryland, College Park, Maryland, United States;Microsoft Research, Redmond, Washington, United States;Shandong University, Qingdao, China;University of Maryland, College Park, Maryland, United States;University of Maryland, College Park, Maryland, United States",
                "InternalReferences": "10.1109/tvcg.2022.3209490;10.1109/tvcg.2011.185;10.1109/tvcg.2019.2934810;10.1109/tvcg.2021.3114856;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/infvis.2001.963283;10.1109/tvcg.2019.2934538;10.1109/tvcg.2008.165;10.1109/tvcg.2021.3114877",
                "AuthorKeywords": "Chart layout,Reuse,Reverse-engineering,Deconstruction",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 183,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 51,
                "i": [
                    51
                ]
            }
        },
        {
            "name": "Jagoda Walny",
            "value": 126,
            "numPapers": 57,
            "cluster": "5",
            "visible": 1,
            "index": 344,
            "x": -147.58825976149964,
            "y": 112.55090217573603,
            "vy": 0,
            "vx": 0,
            "r": 1.145077720207254,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Understanding Pen and Touch Interaction for Data Exploration on Interactive Whiteboards",
                "DOI": "10.1109/tvcg.2012.275",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.275",
                "FirstPage": 2779,
                "LastPage": 2788,
                "PaperType": "J",
                "Abstract": "Current interfaces for common information visualizations such as bar graphs, line graphs, and scatterplots usually make use of the WIMP (Windows, Icons, Menus and a Pointer) interface paradigm with its frequently discussed problems of multiple levels of indirection via cascading menus, dialog boxes, and control panels. Recent advances in interface capabilities such as the availability of pen and touch interaction challenge us to re-think this and investigate more direct access to both the visualizations and the data they portray. We conducted a Wizard of Oz study to explore applying pen and touch interaction to the creation of information visualization interfaces on interactive whiteboards without implementing a plethora of recognizers. Our wizard acted as a robust and flexible pen and touch recognizer, giving participants maximum freedom in how they interacted with the system. Based on our qualitative analysis of the interactions our participants used, we discuss our insights about pen and touch interactions in the context of learnability and the interplay between pen and touch gestures. We conclude with suggestions for designing pen and touch enabled interactive visualization interfaces.",
                "AuthorNamesDeduped": "Jagoda Walny;Bongshin Lee;Paul Johns;Nathalie Henry Riche;Sheelagh Carpendale",
                "AuthorNames": "Jagoda Walny;Bongshin Lee;Paul Johns;Nathalie Henry Riche;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Calgary, Canada;Microsoft Research, USA;Microsoft Research, USA;Microsoft Research, USA;Consultant for Microsoft Research, University of Calgary, Canada",
                "InternalReferences": "0.1109/tvcg.2012.262;10.1109/tvcg.2009.174;10.1109/tvcg.2011.251;10.1109/tvcg.2007.70568;10.1109/tvcg.2010.164",
                "AuthorKeywords": "Pen and touch, interaction, Wizard of Oz, whiteboard, data exploration",
                "AminerCitationCount": 104,
                "CitationCountCrossRef": 58,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1404,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1413,
                "i": [
                    1413
                ]
            }
        },
        {
            "name": "Wesley Willett",
            "value": 345,
            "numPapers": 48,
            "cluster": "5",
            "visible": 1,
            "index": 345,
            "x": 32.84751814002698,
            "y": -182.95092388955186,
            "vy": 0,
            "vx": 0,
            "r": 1.3972366148531952,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Embedded Data Representations",
                "DOI": "10.1109/tvcg.2016.2598608",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598608",
                "FirstPage": 461,
                "LastPage": 470,
                "PaperType": "J",
                "Abstract": "We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easier to display data in-context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the tradeoffs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.",
                "AuthorNamesDeduped": "Wesley Willett;Yvonne Jansen;Pierre Dragicevic",
                "AuthorNames": "Wesley Willett;Yvonne Jansen;Pierre Dragicevic",
                "AuthorAffiliation": "University of Calgary;University of Copenhagen;Inria",
                "InternalReferences": "0.1109/tvcg.2013.134;10.1109/infvis.1998.729560",
                "AuthorKeywords": "augmented reality;Information visualization;data physicalization;ambient displays;ubiquitous computing",
                "AminerCitationCount": 192,
                "CitationCountCrossRef": 160,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 3740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 889,
                "i": [
                    889
                ]
            }
        },
        {
            "name": "James P. Ahrens",
            "value": 131,
            "numPapers": 44,
            "cluster": "6",
            "visible": 1,
            "index": 346,
            "x": 99.50452473434486,
            "y": 157.31767083640716,
            "vy": 0,
            "vx": 0,
            "r": 1.1508347725964307,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "MolSieve: A Progressive Visual Analytics System for Molecular Dynamics Simulations",
                "DOI": "10.1109/tvcg.2023.3326584",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326584",
                "FirstPage": 727,
                "LastPage": 737,
                "PaperType": "J",
                "Abstract": "Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge physio-chemical research. They provide critical insights into how a physical system evolves over time given a model of interatomic interactions. Understanding a system's evolution is key to selecting the best candidates for new drugs, materials for manufacturing, and countless other practical applications. With today's technology, these simulations can encompass millions of unit transitions between discrete molecular structures, spanning up to several milliseconds of real time. Attempting to perform a brute-force analysis with data-sets of this size is not only computationally impractical, but would not shed light on the physically-relevant features of the data. Moreover, there is a need to analyze simulation ensembles in order to compare similar processes in differing environments. These problems call for an approach that is analytically transparent, computationally efficient, and flexible enough to handle the variety found in materials-based research. In order to address these problems, we introduce MolSieve, a progressive visual analytics system that enables the comparison of multiple long-duration simulations. Using MolSieve, analysts are able to quickly identify and compare regions of interest within immense simulations through its combination of control charts, data-reduction techniques, and highly informative visual components. A simple programming interface is provided which allows experts to fit MolSieve to their needs. To demonstrate the efficacy of our approach, we present two case studies of MolSieve and report on findings from domain collaborators.",
                "AuthorNamesDeduped": "Rostyslav Hnatyshyn;Jieqiong Zhao;Danny Perez;James P. Ahrens;Ross Maciejewski",
                "AuthorNames": "Rostyslav Hnatyshyn;Jieqiong Zhao;Danny Perez;James Ahrens;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University, USA;Arizona State University, USA;Los Alamos National Laboratory, USA;Los Alamos National Laboratory, USA;Arizona State University, USA",
                "InternalReferences": "10.1109/tvcg.2018.2864851;10.1109/tvcg.2010.193;10.1109/tvcg.2012.265;10.1109/tvcg.2022.3209411;10.1109/tvcg.2018.2864504;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Molecular dynamics,time-series analysis,visual analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 183,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 52,
                "i": [
                    52
                ]
            }
        },
        {
            "name": "Junpeng Wang",
            "value": 223,
            "numPapers": 44,
            "cluster": "1",
            "visible": 1,
            "index": 347,
            "x": -179.89713081313877,
            "y": -48.85716247594036,
            "vy": 0,
            "vx": 0,
            "r": 1.2567645365572826,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations",
                "DOI": "10.1109/tvcg.2019.2934312",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934312",
                "FirstPage": 23,
                "LastPage": 33,
                "PaperType": "J",
                "Abstract": "We propose InSituNet, a deep learning based surrogate model to support parameter space exploration for ensemble simulations that are visualized in situ. In situ visualization, generating visualizations at simulation time, is becoming prevalent in handling large-scale simulations because of the I/O and storage constraints. However, in situ visualization approaches limit the flexibility of post-hoc exploration because the raw simulation data are no longer available. Although multiple image-based approaches have been proposed to mitigate this limitation, those approaches lack the ability to explore the simulation parameters. Our approach allows flexible exploration of parameter space for large-scale ensemble simulations by taking advantage of the recent advances in deep learning. Specifically, we design InSituNet as a convolutional regression model to learn the mapping from the simulation and visualization parameters to the visualization results. With the trained model, users can generate new images for different simulation parameters under various visualization settings, which enables in-depth analysis of the underlying ensemble simulations. We demonstrate the effectiveness of InSituNet in combustion, cosmology, and ocean simulations through quantitative and qualitative evaluations.",
                "AuthorNamesDeduped": "Wenbin He;Junpeng Wang;Hanqi Guo 0001;Ko-Chih Wang;Han-Wei Shen;Mukund Raj;Youssef S. G. Nashed;Tom Peterka",
                "AuthorNames": "Wenbin He;Junpeng Wang;Hanqi Guo;Ko-Chih Wang;Han-Wei Shen;Mukund Raj;Youssef S. G. Nashed;Tom Peterka",
                "AuthorAffiliation": "Department of Computer Science and Engineering, The Ohio State University;Department of Computer Science and Engineering, The Ohio State University;Mathematics and Computer Science Division, Argonne National Laboratory;Department of Computer Science and Engineering, The Ohio State University;Department of Computer Science and Engineering, The Ohio State University;Mathematics and Computer Science Division, Argonne National Laboratory;Mathematics and Computer Science Division, Argonne National Laboratory;Mathematics and Computer Science Division, Argonne National Laboratory",
                "InternalReferences": "0.1109/tvcg.2016.2598869;10.1109/scivis.2015.7429487;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/tvcg.2016.2598604;10.1109/tvcg.2009.155;10.1109/tvcg.2018.2865051;10.1109/tvcg.2014.2346755;10.1109/tvcg.2014.2346321;10.1109/vast.2015.7347635;10.1109/tvcg.2010.215;10.1109/tvcg.2011.248;10.1109/tvcg.2016.2598830;10.1109/tvcg.2018.2865026",
                "AuthorKeywords": "In situ visualization,ensemble visualization,parameter space exploration,deep learning,image synthesis",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 1702,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 571,
                "i": [
                    571
                ]
            }
        },
        {
            "name": "Hao Yang 0007",
            "value": 106,
            "numPapers": 10,
            "cluster": "1",
            "visible": 1,
            "index": 348,
            "x": 165.89126310846166,
            "y": -85.61593791040987,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "DQNViz: A Visual Analytics Approach to Understand Deep Q-Networks",
                "DOI": "10.1109/tvcg.2018.2864504",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864504",
                "FirstPage": 288,
                "LastPage": 298,
                "PaperType": "J",
                "Abstract": "Deep Q-Network (DQN), as one type of deep reinforcement learning model, targets to train an intelligent agent that acquires optimal actions while interacting with an environment. The model is well known for its ability to surpass professional human players across many Atari 2600 games. Despite the superhuman performance, in-depth understanding of the model and interpreting the sophisticated behaviors of the DQN agent remain to be challenging tasks, due to the long-time model training process and the large number of experiences dynamically generated by the agent. In this work, we propose DQNViz, a visual analytics system to expose details of the blind training process in four levels, and enable users to dive into the large experience space of the agent for comprehensive analysis. As an initial attempt in visualizing DQN models, our work focuses more on Atari games with a simple action space, most notably the Breakout game. From our visual analytics of the agent's experiences, we extract useful action/reward patterns that help to interpret the model and control the training. Through multiple case studies conducted together with deep learning experts, we demonstrate that DQNViz can effectively help domain experts to understand, diagnose, and potentially improve DQN models.",
                "AuthorNamesDeduped": "Junpeng Wang;Liang Gou;Han-Wei Shen;Hao Yang 0007",
                "AuthorNames": "Junpeng Wang;Liang Gou;Han-Wei Shen;Hao Yang",
                "AuthorAffiliation": "The Ohio State University;Visa Research;The Ohio State University;Visa Research",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2017.2744718;10.1109/tvcg.2011.179;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/vast.2017.8585721;10.1109/tvcg.2013.200;10.1109/tvcg.2017.2744358;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Deep Q-Network (DQN),reinforcement learning,model interpretation,visual analytics",
                "AminerCitationCount": 108,
                "CitationCountCrossRef": 83,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2559,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 732,
                "i": [
                    732
                ]
            }
        },
        {
            "name": "Kai Lawonn",
            "value": 141,
            "numPapers": 84,
            "cluster": "6",
            "visible": 1,
            "index": 349,
            "x": -64.5827787555625,
            "y": 175.43963260395316,
            "vy": 0,
            "vx": 0,
            "r": 1.162348877374784,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Perception of Line Attributes for Visualization",
                "DOI": "10.1109/tvcg.2023.3326523",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326523",
                "FirstPage": 1041,
                "LastPage": 1051,
                "PaperType": "J",
                "Abstract": "Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty—e.g., in weather forecasts—is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.",
                "AuthorNamesDeduped": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorNames": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Tübingen, Germany;University of Jena, Germany;University of Tübingen, Germany;Brandenburg University of Technology, Germany;University of Jena, Germany",
                "InternalReferences": "10.1109/tvcg.2012.220;10.1109/tvcg.2017.2743959;10.1109/tvcg.2015.2467671;10.1109/tvcg.2012.279;10.1109/tvcg.2015.2467591;10.1109/tvcg.2016.2598826;10.1109/tvcg.2023.3326574",
                "AuthorKeywords": "Line Drawings,Line Stylization,Perceptual Evaluation,Uncertainty Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 157,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 54,
                "i": [
                    54
                ]
            }
        },
        {
            "name": "Anna Sterzik",
            "value": 10,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 350,
            "x": -70.98777671506423,
            "y": -173.23606886861694,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Perception of Line Attributes for Visualization",
                "DOI": "10.1109/tvcg.2023.3326523",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326523",
                "FirstPage": 1041,
                "LastPage": 1051,
                "PaperType": "J",
                "Abstract": "Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty—e.g., in weather forecasts—is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.",
                "AuthorNamesDeduped": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorNames": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Tübingen, Germany;University of Jena, Germany;University of Tübingen, Germany;Brandenburg University of Technology, Germany;University of Jena, Germany",
                "InternalReferences": "10.1109/tvcg.2012.220;10.1109/tvcg.2017.2743959;10.1109/tvcg.2015.2467671;10.1109/tvcg.2012.279;10.1109/tvcg.2015.2467591;10.1109/tvcg.2016.2598826;10.1109/tvcg.2023.3326574",
                "AuthorKeywords": "Line Drawings,Line Stylization,Perceptual Evaluation,Uncertainty Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 157,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 54,
                "i": [
                    54
                ]
            }
        },
        {
            "name": "Nils Lichtenberg",
            "value": 8,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 351,
            "x": 169.60489098262087,
            "y": 79.90106979742697,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Perception of Line Attributes for Visualization",
                "DOI": "10.1109/tvcg.2023.3326523",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326523",
                "FirstPage": 1041,
                "LastPage": 1051,
                "PaperType": "J",
                "Abstract": "Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty—e.g., in weather forecasts—is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.",
                "AuthorNamesDeduped": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorNames": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Tübingen, Germany;University of Jena, Germany;University of Tübingen, Germany;Brandenburg University of Technology, Germany;University of Jena, Germany",
                "InternalReferences": "10.1109/tvcg.2012.220;10.1109/tvcg.2017.2743959;10.1109/tvcg.2015.2467671;10.1109/tvcg.2012.279;10.1109/tvcg.2015.2467591;10.1109/tvcg.2016.2598826;10.1109/tvcg.2023.3326574",
                "AuthorKeywords": "Line Drawings,Line Stylization,Perceptual Evaluation,Uncertainty Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 157,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 54,
                "i": [
                    54
                ]
            }
        },
        {
            "name": "Douglas W. Cunningham",
            "value": 10,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 352,
            "x": -179.28825552972933,
            "y": 55.72899989329144,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Perception of Line Attributes for Visualization",
                "DOI": "10.1109/tvcg.2023.3326523",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326523",
                "FirstPage": 1041,
                "LastPage": 1051,
                "PaperType": "J",
                "Abstract": "Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty—e.g., in weather forecasts—is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.",
                "AuthorNamesDeduped": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorNames": "Anna Sterzik;Nils Lichtenberg;Jana Wilms;Michael Krone;Douglas W. Cunningham;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Tübingen, Germany;University of Jena, Germany;University of Tübingen, Germany;Brandenburg University of Technology, Germany;University of Jena, Germany",
                "InternalReferences": "10.1109/tvcg.2012.220;10.1109/tvcg.2017.2743959;10.1109/tvcg.2015.2467671;10.1109/tvcg.2012.279;10.1109/tvcg.2015.2467591;10.1109/tvcg.2016.2598826;10.1109/tvcg.2023.3326574",
                "AuthorKeywords": "Line Drawings,Line Stylization,Perceptual Evaluation,Uncertainty Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 157,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 54,
                "i": [
                    54
                ]
            }
        },
        {
            "name": "Monique Meuschke",
            "value": 30,
            "numPapers": 29,
            "cluster": "6",
            "visible": 1,
            "index": 353,
            "x": 94.69120996869214,
            "y": -162.43021502991695,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Perceptually Uniform Construction of Illustrative Textures",
                "DOI": "10.1109/tvcg.2023.3326574",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326574",
                "FirstPage": 1052,
                "LastPage": 1062,
                "PaperType": "J",
                "Abstract": "Illustrative textures, such as stippling or hatching, were predominantly used as an alternative to conventional Phong rendering. Recently, the potential of encoding information on surfaces or maps using different densities has also been recognized. This has the significant advantage that additional color can be used as another visual channel and the illustrative textures can then be overlaid. Effectively, it is thus possible to display multiple information, such as two different scalar fields on surfaces simultaneously. In previous work, these textures were manually generated and the choice of density was unempirically determined. Here, we first want to determine and understand the perceptual space of illustrative textures. We chose a succession of simplices with increasing dimensions as primitives for our textures: Dots, lines, and triangles. Thus, we explore the texture types of stippling, hatching, and triangles. We create a range of textures by sampling the density space uniformly. Then, we conduct three perceptual studies in which the participants performed pairwise comparisons for each texture type. We use multidimensional scaling (MDS) to analyze the perceptual spaces per category. The perception of stippling and triangles seems relatively similar. Both are adequately described by a 1D manifold in 2D space. The perceptual space of hatching consists of two main clusters: Crosshatched textures, and textures with only one hatching direction. However, the perception of hatching textures with only one hatching direction is similar to the perception of stippling and triangles. Based on our findings, we construct perceptually uniform illustrative textures. Afterwards, we provide concrete application examples for the constructed textures.",
                "AuthorNamesDeduped": "Anna Sterzik;Monique Meuschke;Douglas W. Cunningham;Kai Lawonn",
                "AuthorNames": "Anna Sterzik;Monique Meuschke;Douglas W. Cunningham;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Magdeburg, Germany;Brandenburg University of Technology, Germany;University of Jena, Germany",
                "InternalReferences": "10.1109/tvcg.2006.180;10.1109/visual.1996.568110;10.1109/tvcg.2016.2598795;10.1109/tvcg.2023.3326523",
                "AuthorKeywords": "Illustrative Visualization,Perceptual Evaluation,Hatching,Stippling",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 105,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 131,
                "i": [
                    131
                ]
            }
        },
        {
            "name": "Zehua Zeng",
            "value": 19,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 354,
            "x": 39.95407500125615,
            "y": 183.9936735075258,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Too Many Cooks: Exploring How Graphical Perception Studies Influence Visualization Recommendations in Draco",
                "DOI": "10.1109/tvcg.2023.3326527",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326527",
                "FirstPage": 1063,
                "LastPage": 1073,
                "PaperType": "J",
                "Abstract": "Findings from graphical perception can guide visualization recommendation algorithms in identifying effective visualization designs. However, existing algorithms use knowledge from, at best, a few studies, limiting our understanding of how complementary (or contradictory) graphical perception results influence generated recommendations. In this paper, we present a pipeline of applying a large body of graphical perception results to develop new visualization recommendation algorithms and conduct an exploratory study to investigate how results from graphical perception can alter the behavior of downstream algorithms. Specifically, we model graphical perception results from 30 papers in Draco—a framework to model visualization knowledge—to develop new recommendation algorithms. By analyzing Draco-generated algorithms, we showcase the feasibility of our method to (1) identify gaps in existing graphical perception literature informing recommendation algorithms, (2) cluster papers by their preferred design rules and constraints, and (3) investigate why certain studies can dominate Draco's recommendations, whereas others may have little influence. Given our findings, we discuss the potential for mutually reinforcing advancements in graphical perception and visualization recommendation research.",
                "AuthorNamesDeduped": "Zehua Zeng;Junran Yang;Dominik Moritz;Jeffrey Heer;Leilani Battle",
                "AuthorNames": "Zehua Zeng;Junran Yang;Dominik Moritz;Jeffrey Heer;Leilani Battle",
                "AuthorAffiliation": "University of Maryland, College Park, USA;University of Washington, Seattle, USA;Carnegie Mellon University, United States;University of Washington, Seattle, USA;University of Washington, Seattle, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745086;10.1109/tvcg.2018.2865077;10.1109/tvcg.2019.2934786;10.1109/tvcg.2021.3114863;10.1109/tvcg.2007.70594;10.1109/tvcg.2021.3114684;10.1109/tvcg.2018.2865240;10.1109/tvcg.2018.2864884;10.1109/tvcg.2019.2934807;10.1109/tvcg.2018.2865264;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346320;10.1109/tvcg.2019.2934784;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114814",
                "AuthorKeywords": "Graphical Perception Studies,Visualization Recommendation Algorithms",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 153,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 55,
                "i": [
                    55
                ]
            }
        },
        {
            "name": "Junran Yang",
            "value": 0,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 355,
            "x": -153.96352966353243,
            "y": -108.83580078975194,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Too Many Cooks: Exploring How Graphical Perception Studies Influence Visualization Recommendations in Draco",
                "DOI": "10.1109/tvcg.2023.3326527",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326527",
                "FirstPage": 1063,
                "LastPage": 1073,
                "PaperType": "J",
                "Abstract": "Findings from graphical perception can guide visualization recommendation algorithms in identifying effective visualization designs. However, existing algorithms use knowledge from, at best, a few studies, limiting our understanding of how complementary (or contradictory) graphical perception results influence generated recommendations. In this paper, we present a pipeline of applying a large body of graphical perception results to develop new visualization recommendation algorithms and conduct an exploratory study to investigate how results from graphical perception can alter the behavior of downstream algorithms. Specifically, we model graphical perception results from 30 papers in Draco—a framework to model visualization knowledge—to develop new recommendation algorithms. By analyzing Draco-generated algorithms, we showcase the feasibility of our method to (1) identify gaps in existing graphical perception literature informing recommendation algorithms, (2) cluster papers by their preferred design rules and constraints, and (3) investigate why certain studies can dominate Draco's recommendations, whereas others may have little influence. Given our findings, we discuss the potential for mutually reinforcing advancements in graphical perception and visualization recommendation research.",
                "AuthorNamesDeduped": "Zehua Zeng;Junran Yang;Dominik Moritz;Jeffrey Heer;Leilani Battle",
                "AuthorNames": "Zehua Zeng;Junran Yang;Dominik Moritz;Jeffrey Heer;Leilani Battle",
                "AuthorAffiliation": "University of Maryland, College Park, USA;University of Washington, Seattle, USA;Carnegie Mellon University, United States;University of Washington, Seattle, USA;University of Washington, Seattle, USA",
                "InternalReferences": "10.1109/tvcg.2017.2745086;10.1109/tvcg.2018.2865077;10.1109/tvcg.2019.2934786;10.1109/tvcg.2021.3114863;10.1109/tvcg.2007.70594;10.1109/tvcg.2021.3114684;10.1109/tvcg.2018.2865240;10.1109/tvcg.2018.2864884;10.1109/tvcg.2019.2934807;10.1109/tvcg.2018.2865264;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346320;10.1109/tvcg.2019.2934784;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114814",
                "AuthorKeywords": "Graphical Perception Studies,Visualization Recommendation Algorithms",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 153,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 55,
                "i": [
                    55
                ]
            }
        },
        {
            "name": "Leilani Battle",
            "value": 44,
            "numPapers": 81,
            "cluster": "5",
            "visible": 1,
            "index": 356,
            "x": 187.3083312501534,
            "y": -23.782116059821135,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Understanding how Designers Find and Use Data Visualization Examples",
                "DOI": "10.1109/tvcg.2022.3209490",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209490",
                "FirstPage": 1048,
                "LastPage": 1058,
                "PaperType": "J",
                "Abstract": "Examples are useful for inspiring ideas and facilitating implementation in visualization design. However, there is little understanding of how visualization designers use examples, and how computational tools may support such activities. In this paper, we contribute an exploratory study of current practices in incorporating visualization examples. We conducted semi-structured interviews with 15 university students and 15 professional designers. Our analysis focus on two core design activities: searching for examples and utilizing examples. We characterize observed strategies and tools for performing these activities, as well as major challenges that hinder designers' current workflows. In addition, we identify themes that cut across these two activities: criteria for determining example usefulness, curation practices, and design fixation. Given our findings, we discuss the implications for visualization design and authoring tools and highlight critical areas for future research.",
                "AuthorNamesDeduped": "Hannah K. Bako;Xinyi Liu;Leilani Battle;Zhicheng Liu 0001",
                "AuthorNames": "Hannah K. Bako;Xinyi Liu;Leilani Battle;Zhicheng Liu",
                "AuthorAffiliation": "University of Maryland, USA;University of Maryland, USA;University of Washington, USA;University of Maryland, USA",
                "InternalReferences": "0.1109/tvcg.2018.2865040;10.1109/tvcg.2021.3114760;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114856;10.1109/tvcg.2019.2934431;10.1109/tvcg.2007.70594;10.1109/tvcg.2010.179;10.1109/tvcg.2019.2934538;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Examples,visualization design,idea generation,interview study,qualitative research",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 773,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 158,
                "i": [
                    158
                ]
            }
        },
        {
            "name": "Bilal Alsallakh",
            "value": 186,
            "numPapers": 64,
            "cluster": "1",
            "visible": 1,
            "index": 357,
            "x": -122.22180478600312,
            "y": 144.2630598415691,
            "vy": 0,
            "vx": 0,
            "r": 1.2141623488773747,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Methods for Analyzing Probabilistic Classification Data",
                "DOI": "10.1109/tvcg.2014.2346660",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346660",
                "FirstPage": 1703,
                "LastPage": 1712,
                "PaperType": "J",
                "Abstract": "Multi-class classifiers often compute scores for the classification samples describing probabilities to belong to different classes. In order to improve the performance of such classifiers, machine learning experts need to analyze classification results for a large number of labeled samples to find possible reasons for incorrect classification. Confusion matrices are widely used for this purpose. However, they provide no information about classification scores and features computed for the samples. We propose a set of integrated visual methods for analyzing the performance of probabilistic classifiers. Our methods provide insight into different aspects of the classification results for a large number of samples. One visualization emphasizes at which probabilities these samples were classified and how these probabilities correlate with classification error in terms of false positives and false negatives. Another view emphasizes the features of these samples and ranks them by their separation power between selected true and false classifications. We demonstrate the insight gained using our technique in a benchmarking classification dataset, and show how it enables improving classification performance by interactively defining and evaluating post-classification rules.",
                "AuthorNamesDeduped": "Bilal Alsallakh;Allan Hanbury;Helwig Hauser;Silvia Miksch;Andreas Rauber",
                "AuthorNames": "Bilal Alsallakh;Allan Hanbury;Helwig Hauser;Silvia Miksch;Andreas Rauber",
                "AuthorAffiliation": "Vienna University of Technology;Vienna University of Technology;University of Bergen;Vienna University of Technology;Vienna University of Technology",
                "InternalReferences": "0.1109/visual.2000.885740;10.1109/vast.2010.5652398;10.1109/vast.2009.5332628;10.1109/tvcg.2012.277;10.1109/vast.2012.6400486;10.1109/tvcg.2013.184;10.1109/tvcg.2012.254;10.1109/vast.2011.6102448;10.1109/vast.2011.6102453;10.1109/vast.2012.6400492;10.1109/vast.2010.5652443",
                "AuthorKeywords": "Probabilistic classification, confusion analysis, feature evaluation and selection, visual inspection",
                "AminerCitationCount": 121,
                "CitationCountCrossRef": 82,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 2292,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1253,
                "i": [
                    1253
                ]
            }
        },
        {
            "name": "Yukai Guo",
            "value": 0,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 358,
            "x": -7.3359803093476215,
            "y": -189.19879331777162,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision",
                "DOI": "10.1109/tvcg.2023.3326588",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326588",
                "FirstPage": 76,
                "LastPage": 86,
                "PaperType": "J",
                "Abstract": "Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.",
                "AuthorNamesDeduped": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu 0004;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorNames": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Department of Computer Science and Technology, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Cardiff University, United Kingdom;Department of Computer Science and Technology, Tsinghua University, China;Harvard University, United Kingdom;School of Software, BNRist, Tsinghua University, China",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2022.3209425;10.1109/tvcg.2017.2744683;10.1109/tvcg.2020.3028976;10.1109/tvcg.2020.3030350;10.1109/tvcg.2013.173;10.1109/tvcg.2021.3114855;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2007.70589;10.1109/tvcg.2022.3209489;10.1109/vast50239.2020.00007;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "Model evaluation,computer vision,classification,object detection,instance segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 69,
                "DownloadsXplore": 668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 56,
                "i": [
                    56
                ]
            }
        },
        {
            "name": "Fengyuan Tian",
            "value": 8,
            "numPapers": 36,
            "cluster": "1",
            "visible": 1,
            "index": 359,
            "x": 133.3969325421586,
            "y": 134.74145014932412,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Analysis of Neural Architecture Spaces for Summarizing Design Principles",
                "DOI": "10.1109/tvcg.2022.3209404",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209404",
                "FirstPage": 288,
                "LastPage": 298,
                "PaperType": "J",
                "Abstract": "Recent advances in artificial intelligence largely benefit from better neural network architectures. These architectures are a product of a costly process of trial-and-error. To ease this process, we develop ArchExplorer, a visual analysis method for understanding a neural architecture space and summarizing design principles. The key idea behind our method is to make the architecture space explainable by exploiting structural distances between architectures. We formulate the pairwise distance calculation as solving an all-pairs shortest path problem. To improve efficiency, we decompose this problem into a set of single-source shortest path problems. The time complexity is reduced from O(kn2N) to O(knN). Architectures are hierarchically clustered according to the distances between them. A circle-packing-based architecture visualization has been developed to convey both the global relationships between clusters and local neighborhoods of the architectures in each cluster. Two case studies and a post-analysis are presented to demonstrate the effectiveness of ArchExplorer in summarizing design principles and selecting better-performing architectures.",
                "AuthorNamesDeduped": "Jun Yuan 0003;Mengchen Liu;Fengyuan Tian;Shixia Liu",
                "AuthorNames": "Jun Yuan;Mengchen Liu;Fengyuan Tian;Shixia Liu",
                "AuthorAffiliation": "BNRist, Tsinghua University, China;Microsoft, USA;BNRist, Tsinghua University, China;BNRist, Tsinghua University, China",
                "InternalReferences": "0.1109/tvcg.2019.2934261;10.1109/tvcg.2020.3028976;10.1109/tvcg.2021.3114683;10.1109/tvcg.2015.2466992;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/vast.2017.8585721;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030380;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2018.2864838;10.1109/tvcg.2017.2744158;10.1109/tvcg.2020.3030471;10.1109/tvcg.2020.3030418;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Machine learning,visual analytics,neural architecture search,design principle,knowledge discovery",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 936,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 207,
                "i": [
                    207
                ]
            }
        },
        {
            "name": "Shilong Liu",
            "value": 0,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 360,
            "x": -189.64249799904348,
            "y": -9.258669055689866,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision",
                "DOI": "10.1109/tvcg.2023.3326588",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326588",
                "FirstPage": 76,
                "LastPage": 86,
                "PaperType": "J",
                "Abstract": "Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.",
                "AuthorNamesDeduped": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu 0004;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorNames": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Department of Computer Science and Technology, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Cardiff University, United Kingdom;Department of Computer Science and Technology, Tsinghua University, China;Harvard University, United Kingdom;School of Software, BNRist, Tsinghua University, China",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2022.3209425;10.1109/tvcg.2017.2744683;10.1109/tvcg.2020.3028976;10.1109/tvcg.2020.3030350;10.1109/tvcg.2013.173;10.1109/tvcg.2021.3114855;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2007.70589;10.1109/tvcg.2022.3209489;10.1109/vast50239.2020.00007;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "Model evaluation,computer vision,classification,object detection,instance segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 69,
                "DownloadsXplore": 668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 56,
                "i": [
                    56
                ]
            }
        },
        {
            "name": "Zhaowei Wang",
            "value": 0,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 361,
            "x": 146.293098965339,
            "y": -121.44269922526233,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision",
                "DOI": "10.1109/tvcg.2023.3326588",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326588",
                "FirstPage": 76,
                "LastPage": 86,
                "PaperType": "J",
                "Abstract": "Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.",
                "AuthorNamesDeduped": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu 0004;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorNames": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Department of Computer Science and Technology, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Cardiff University, United Kingdom;Department of Computer Science and Technology, Tsinghua University, China;Harvard University, United Kingdom;School of Software, BNRist, Tsinghua University, China",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2022.3209425;10.1109/tvcg.2017.2744683;10.1109/tvcg.2020.3028976;10.1109/tvcg.2020.3030350;10.1109/tvcg.2013.173;10.1109/tvcg.2021.3114855;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2007.70589;10.1109/tvcg.2022.3209489;10.1109/vast50239.2020.00007;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "Model evaluation,computer vision,classification,object detection,instance segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 69,
                "DownloadsXplore": 668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 56,
                "i": [
                    56
                ]
            }
        },
        {
            "name": "Hang Su",
            "value": 0,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 362,
            "x": -25.874326992896197,
            "y": 188.62799156717088,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision",
                "DOI": "10.1109/tvcg.2023.3326588",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326588",
                "FirstPage": 76,
                "LastPage": 86,
                "PaperType": "J",
                "Abstract": "Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.",
                "AuthorNamesDeduped": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu 0004;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorNames": "Changjian Chen;Yukai Guo;Fengyuan Tian;Shilong Liu;Weikai Yang;Zhaowei Wang;Jing Wu;Hang Su;Hanspeter Pfister;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Department of Computer Science and Technology, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;School of Software, BNRist, Tsinghua University, China;Cardiff University, United Kingdom;Department of Computer Science and Technology, Tsinghua University, China;Harvard University, United Kingdom;School of Software, BNRist, Tsinghua University, China",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2022.3209425;10.1109/tvcg.2017.2744683;10.1109/tvcg.2020.3028976;10.1109/tvcg.2020.3030350;10.1109/tvcg.2013.173;10.1109/tvcg.2021.3114855;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2022.3209485;10.1109/tvcg.2022.3209458;10.1109/tvcg.2007.70589;10.1109/tvcg.2022.3209489;10.1109/vast50239.2020.00007;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "Model evaluation,computer vision,classification,object detection,instance segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 69,
                "DownloadsXplore": 668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 56,
                "i": [
                    56
                ]
            }
        },
        {
            "name": "Kelei Cao",
            "value": 277,
            "numPapers": 57,
            "cluster": "1",
            "visible": 1,
            "index": 363,
            "x": -108.48678212106651,
            "y": -156.78207201404197,
            "vy": 0,
            "vx": 0,
            "r": 1.3189407023603914,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Analyzing the Noise Robustness of Deep Neural Networks",
                "DOI": "10.1109/vast.2018.8802509",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802509",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples. These examples are intentionally designed by making imperceptible perturbations and often mislead a DNN into making an incorrect prediction. This phenomenon means that there is significant risk in applying DNNs to safety-critical applications, such as driverless cars. To address this issue, we present a visual analytics approach to explain the primary cause of the wrong predictions introduced by adversarial examples. The key is to analyze the datapaths of the adversarial examples and compare them with those of the normal examples. A datapath is a group of critical neurons and their connections. To this end, we formulate the datapath extraction as a subset selection problem and approximately solve it based on back-propagation. A multi-level visualization consisting of a segmented DAG (layer level), an Euler diagram (feature map level), and a heat map (neuron level), has been designed to help experts investigate datapaths from the high-level layers to the detailed neuron activations. Two case studies are conducted that demonstrate the promise of our approach in support of explaining the working mechanism of adversarial examples.",
                "AuthorNamesDeduped": "Mengchen Liu;Shixia Liu;Hang Su 0006;Kelei Cao;Jun Zhu 0001",
                "AuthorNames": "Mengchen Liu;Shixia Liu;Hang Su;Kelei Cao;Jun Zhu",
                "AuthorAffiliation": "School of Software, Tsinghua University;School of Software, Tsinghua University;Dept.of Comp.Sci.Tech., Tsinghua University;School of Software, Tsinghua University;Dept.of Comp.Sci.Tech., Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2467618;10.1109/tvcg.2011.186;10.1109/tvcg.2016.2598496;10.1109/tvcg.2017.2744683;10.1109/tvcg.2014.2346431;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2744199;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2011.209;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2010.210;10.1109/tvcg.2017.2744018;10.1109/tvcg.2011.183;10.1109/tvcg.2017.2744158;10.1109/visual.2005.1532820;10.1109/vast.2014.7042494;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2865041;10.1109/vast.2017.8585721",
                "AuthorKeywords": "Deep neural networks,robustness,adversarial examples,back propagation,multi-level visualization.",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 64,
                "DownloadsXplore": 851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 745,
                "i": [
                    745
                ]
            }
        },
        {
            "name": "Saleema Amershi",
            "value": 151,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 364,
            "x": 186.15507691956932,
            "y": 42.38263013392672,
            "vy": 0,
            "vx": 0,
            "r": 1.1738629821531377,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Squares: Supporting Interactive Performance Analysis for Multiclass Classifiers",
                "DOI": "10.1109/tvcg.2016.2598828",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598828",
                "FirstPage": 61,
                "LastPage": 70,
                "PaperType": "J",
                "Abstract": "Performance analysis is critical in applied machine learning because it influences the models practitioners produce. Current performance analysis tools suffer from issues including obscuring important characteristics of model behavior and dissociating performance from data. In this work, we present Squares, a performance visualization for multiclass classification problems. Squares supports estimating common performance metrics while displaying instance-level distribution information necessary for helping practitioners prioritize efforts and access data. Our controlled study shows that practitioners can assess performance significantly faster and more accurately with Squares than a confusion matrix, a common performance analysis tool in machine learning.",
                "AuthorNamesDeduped": "Donghao Ren;Saleema Amershi;Bongshin Lee;Jina Suh;Jason D. Williams",
                "AuthorNames": "Donghao Ren;Saleema Amershi;Bongshin Lee;Jina Suh;Jason D. Williams",
                "AuthorAffiliation": "University of California, Santa Barbara;Microsoft Research;Microsoft Research;Microsoft Research;Microsoft Research",
                "InternalReferences": "0.1109/visual.2000.885740;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/tvcg.2014.2346660;10.1109/vast.2011.6102453;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Performance analysis;classification;usable machine learning",
                "AminerCitationCount": 197,
                "CitationCountCrossRef": 137,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 2915,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 959,
                "i": [
                    959
                ]
            }
        },
        {
            "name": "Jina Suh",
            "value": 135,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 365,
            "x": -166.1214234591095,
            "y": 94.62384830432133,
            "vy": 0,
            "vx": 0,
            "r": 1.155440414507772,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Squares: Supporting Interactive Performance Analysis for Multiclass Classifiers",
                "DOI": "10.1109/tvcg.2016.2598828",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598828",
                "FirstPage": 61,
                "LastPage": 70,
                "PaperType": "J",
                "Abstract": "Performance analysis is critical in applied machine learning because it influences the models practitioners produce. Current performance analysis tools suffer from issues including obscuring important characteristics of model behavior and dissociating performance from data. In this work, we present Squares, a performance visualization for multiclass classification problems. Squares supports estimating common performance metrics while displaying instance-level distribution information necessary for helping practitioners prioritize efforts and access data. Our controlled study shows that practitioners can assess performance significantly faster and more accurately with Squares than a confusion matrix, a common performance analysis tool in machine learning.",
                "AuthorNamesDeduped": "Donghao Ren;Saleema Amershi;Bongshin Lee;Jina Suh;Jason D. Williams",
                "AuthorNames": "Donghao Ren;Saleema Amershi;Bongshin Lee;Jina Suh;Jason D. Williams",
                "AuthorAffiliation": "University of California, Santa Barbara;Microsoft Research;Microsoft Research;Microsoft Research;Microsoft Research",
                "InternalReferences": "0.1109/visual.2000.885740;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/tvcg.2014.2346660;10.1109/vast.2011.6102453;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Performance analysis;classification;usable machine learning",
                "AminerCitationCount": 197,
                "CitationCountCrossRef": 137,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 2915,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 959,
                "i": [
                    959
                ]
            }
        },
        {
            "name": "Jason D. Williams",
            "value": 135,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 366,
            "x": 58.65535210892813,
            "y": -182.23487500744102,
            "vy": 0,
            "vx": 0,
            "r": 1.155440414507772,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Squares: Supporting Interactive Performance Analysis for Multiclass Classifiers",
                "DOI": "10.1109/tvcg.2016.2598828",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598828",
                "FirstPage": 61,
                "LastPage": 70,
                "PaperType": "J",
                "Abstract": "Performance analysis is critical in applied machine learning because it influences the models practitioners produce. Current performance analysis tools suffer from issues including obscuring important characteristics of model behavior and dissociating performance from data. In this work, we present Squares, a performance visualization for multiclass classification problems. Squares supports estimating common performance metrics while displaying instance-level distribution information necessary for helping practitioners prioritize efforts and access data. Our controlled study shows that practitioners can assess performance significantly faster and more accurately with Squares than a confusion matrix, a common performance analysis tool in machine learning.",
                "AuthorNamesDeduped": "Donghao Ren;Saleema Amershi;Bongshin Lee;Jina Suh;Jason D. Williams",
                "AuthorNames": "Donghao Ren;Saleema Amershi;Bongshin Lee;Jina Suh;Jason D. Williams",
                "AuthorAffiliation": "University of California, Santa Barbara;Microsoft Research;Microsoft Research;Microsoft Research;Microsoft Research",
                "InternalReferences": "0.1109/visual.2000.885740;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/tvcg.2014.2346660;10.1109/vast.2011.6102453;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Performance analysis;classification;usable machine learning",
                "AminerCitationCount": 197,
                "CitationCountCrossRef": 137,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 2915,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 959,
                "i": [
                    959
                ]
            }
        },
        {
            "name": "Yafeng Lu",
            "value": 143,
            "numPapers": 86,
            "cluster": "1",
            "visible": 1,
            "index": 367,
            "x": 79.95611610052991,
            "y": 174.23265910304698,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "An Interactive Method to Improve Crowdsourced Annotations",
                "DOI": "10.1109/tvcg.2018.2864843",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864843",
                "FirstPage": 235,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "In order to effectively infer correct labels from noisy crowdsourced annotations, learning-from-crowds models have introduced expert validation. However, little research has been done on facilitating the validation procedure. In this paper, we propose an interactive method to assist experts in verifying uncertain instance labels and unreliable workers. Given the instance labels and worker reliability inferred from a learning-from-crowds model, candidate instances and workers are selected for expert validation. The influence of verified results is propagated to relevant instances and workers through the learning-from-crowds model. To facilitate the validation of annotations, we have developed a confusion visualization to indicate the confusing classes for further exploration, a constrained projection method to show the uncertain labels in context, and a scatter-plot-based visualization to illustrate worker reliability. The three visualizations are tightly integrated with the learning-from-crowds model to provide an iterative and progressive environment for data validation. Two case studies were conducted that demonstrate our approach offers an efficient method for validating and improving crowdsourced annotations.",
                "AuthorNamesDeduped": "Shixia Liu;Changjian Chen;Yafeng Lu;Fang-Xin Ou-Yang;Bin Wang 0021",
                "AuthorNames": "Shixia Liu;Changjian Chen;Yafeng Lu;Fangxin Ouyang;Bin Wang",
                "AuthorAffiliation": "Tsinghua University, Beijing, Beijing, CN;Tsinghua University, Beijing, Beijing, CN;Arizona State University, Tempe, AZ, US;Tsinghua University, Beijing, Beijing, CN;Tsinghua University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598592;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/vast.2016.7883520;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.212;10.1109/tvcg.2011.239;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598445;10.1109/tvcg.2015.2467622;10.1109/tvcg.2015.2467554;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/vast.2016.7883508;10.1109/tvcg.2009.139;10.1109/tvcg.2016.2598829;10.1109/tvcg.2017.2745078;10.1109/vast.2014.7042494;10.1109/tvcg.2017.2744685;10.1109/tvcg.2013.164;10.1109/vast.2016.7883514",
                "AuthorKeywords": "Crowdsourcing,learning-from-crowds,interactive visualization,focus + context",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 1538,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 741,
                "i": [
                    741
                ]
            }
        },
        {
            "name": "Elmar Eisemann",
            "value": 201,
            "numPapers": 68,
            "cluster": "6",
            "visible": 1,
            "index": 368,
            "x": -176.88979813924342,
            "y": -74.5654029309686,
            "vy": 0,
            "vx": 0,
            "r": 1.231433506044905,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "GPGPU Linear Complexity t-SNE Optimization",
                "DOI": "10.1109/tvcg.2019.2934307",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934307",
                "FirstPage": 1172,
                "LastPage": 1181,
                "PaperType": "J",
                "Abstract": "In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library.",
                "AuthorNamesDeduped": "Nicola Pezzotti;Julian Thijssen;Alexander Mordvintsev;Thomas Höllt;Baldur van Lew;Boudewijn P. F. Lelieveldt;Elmar Eisemann;Anna Vilanova",
                "AuthorNames": "Nicola Pezzotti;Julian Thijssen;Alexander Mordvintsev;Thomas Höllt;Baldur Van Lew;Boudewijn P.F. Lelieveldt;Elmar Eisemann;Anna Vilanova",
                "AuthorAffiliation": "Google AI, Zürich, Switzerland and Delft University of Technology, Delft, The Netherlands;Delft University of Technology, Delft, The Netherlands;Google AI, Zürich, Switzerland;Delft University of Technology, Delft, The Netherlands and Leiden University Medical Center, Leiden, The Netherlands;Leiden University Medical Center, Leiden, The Netherlands;Delft University of Technology, Delft, The Netherlands and Leiden University Medical Center, Leiden, The Netherlands;Delft University of Technology, Delft, The Netherlands;Delft University of Technology, Delft, The Netherlands",
                "InternalReferences": "0.1109/tvcg.2017.2744318;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745141;10.1109/tvcg.2017.2744358;10.1109/tvcg.2014.2346574",
                "AuthorKeywords": "High Dimensional Data,Dimensionality Reduction,Progressive Visual Analytics,Approximate Computation,GPGPU",
                "AminerCitationCount": 59,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1063,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 605,
                "i": [
                    605
                ]
            }
        },
        {
            "name": "Elke A. Rundensteiner",
            "value": 617,
            "numPapers": 68,
            "cluster": "6",
            "visible": 1,
            "index": 369,
            "x": 181.0463922985791,
            "y": -64.59259892332098,
            "vy": 0,
            "vx": 0,
            "r": 1.71042026482441,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "Measuring Data Abstraction Quality in Multiresolution Visualizations",
                "DOI": "10.1109/tvcg.2006.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.161",
                "FirstPage": 709,
                "LastPage": 716,
                "PaperType": "J",
                "Abstract": "Data abstraction techniques are widely used in multiresolution visualization systems to reduce visual clutter and facilitate analysis from overview to detail. However, analysts are usually unaware of how well the abstracted data represent the original dataset, which can impact the reliability of results gleaned from the abstractions. In this paper, we define two data abstraction quality measures for computing the degree to which the abstraction conveys the original dataset: the histogram difference measure and the nearest neighbor measure. They have been integrated within XmdvTool, a public-domain multiresolution visualization system for multivariate data analysis that supports sampling as well as clustering to simplify data. Several interactive operations are provided, including adjusting the data abstraction level, changing selected regions, and setting the acceptable data abstraction quality level. Conducting these operations, analysts can select an optimal data abstraction level. Also, analysts can compare different abstraction methods using the measures to see how well relative data density and outliers are maintained, and then select an abstraction method that meets the requirement of their analytic tasks",
                "AuthorNamesDeduped": "Qingguang Cui;Matthew O. Ward;Elke A. Rundensteiner;Jing Yang 0001",
                "AuthorNames": "Qingguang Cui;Matthew Ward;Elke Rundensteiner;Jing Yang",
                "AuthorAffiliation": "Worcester Polytechnic Institute, Worcester, MA, USA;Worcester Polytechnic Institute, Worcester, MA, USA;Worcester Polytechnic Institute, Worcester, MA, USA;University of North Carolina, Charlotte, Charlotte, NC, USA",
                "InternalReferences": "0.1109/infvis.2004.19;10.1109/visual.2005.1532819;10.1109/infvis.2004.15;10.1109/visual.1995.485139;10.1109/infvis.2000.885088",
                "AuthorKeywords": "Metrics, Clustering, Sampling, Multiresolution Visualization",
                "AminerCitationCount": 128,
                "CitationCountCrossRef": 68,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 940,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2226,
                "i": [
                    2226
                ]
            }
        },
        {
            "name": "Charles D. Stolper",
            "value": 203,
            "numPapers": 27,
            "cluster": "6",
            "visible": 1,
            "index": 370,
            "x": -89.98782462918606,
            "y": 170.15343493008552,
            "vy": 0,
            "vx": 0,
            "r": 1.2337363270005757,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics",
                "DOI": "10.1109/tvcg.2014.2346574",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346574",
                "FirstPage": 1653,
                "LastPage": 1662,
                "PaperType": "J",
                "Abstract": "As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-Iaunching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.",
                "AuthorNamesDeduped": "Charles D. Stolper;Adam Perer;David Gotz",
                "AuthorNames": "Charles D. Stolper;Adam Perer;David Gotz",
                "AuthorAffiliation": "School of Interactive Computing, Georgia Institute of Technology;IBM T.J. Watson Research Center;University of North Carolina at Chapel Hill",
                "InternalReferences": "0.1109/vast.2006.261421;10.1109/tvcg.2013.227;10.1109/tvcg.2009.187;10.1109/tvcg.2011.179;10.1109/infvis.2005.1532133;10.1109/tvcg.2012.225;10.1109/tvcg.2013.179;10.1109/infvis.2000.885097;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Progressive visual analytics, information visualization, interactive machine learning, electronic medical records",
                "AminerCitationCount": 233,
                "CitationCountCrossRef": 140,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 2581,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1244,
                "i": [
                    1244
                ]
            }
        },
        {
            "name": "Thomas Höllt",
            "value": 217,
            "numPapers": 53,
            "cluster": "1",
            "visible": 1,
            "index": 371,
            "x": -48.64829215812986,
            "y": -186.50293206836517,
            "vy": 0,
            "vx": 0,
            "r": 1.2498560736902706,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "GPGPU Linear Complexity t-SNE Optimization",
                "DOI": "10.1109/tvcg.2019.2934307",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934307",
                "FirstPage": 1172,
                "LastPage": 1181,
                "PaperType": "J",
                "Abstract": "In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library.",
                "AuthorNamesDeduped": "Nicola Pezzotti;Julian Thijssen;Alexander Mordvintsev;Thomas Höllt;Baldur van Lew;Boudewijn P. F. Lelieveldt;Elmar Eisemann;Anna Vilanova",
                "AuthorNames": "Nicola Pezzotti;Julian Thijssen;Alexander Mordvintsev;Thomas Höllt;Baldur Van Lew;Boudewijn P.F. Lelieveldt;Elmar Eisemann;Anna Vilanova",
                "AuthorAffiliation": "Google AI, Zürich, Switzerland and Delft University of Technology, Delft, The Netherlands;Delft University of Technology, Delft, The Netherlands;Google AI, Zürich, Switzerland;Delft University of Technology, Delft, The Netherlands and Leiden University Medical Center, Leiden, The Netherlands;Leiden University Medical Center, Leiden, The Netherlands;Delft University of Technology, Delft, The Netherlands and Leiden University Medical Center, Leiden, The Netherlands;Delft University of Technology, Delft, The Netherlands;Delft University of Technology, Delft, The Netherlands",
                "InternalReferences": "0.1109/tvcg.2017.2744318;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2745141;10.1109/tvcg.2017.2744358;10.1109/tvcg.2014.2346574",
                "AuthorKeywords": "High Dimensional Data,Dimensionality Reduction,Progressive Visual Analytics,Approximate Computation,GPGPU",
                "AminerCitationCount": 59,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1063,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 605,
                "i": [
                    605
                ]
            }
        },
        {
            "name": "Silvia Miksch",
            "value": 293,
            "numPapers": 135,
            "cluster": "5",
            "visible": 1,
            "index": 372,
            "x": 162.07034736203363,
            "y": 104.80077531177785,
            "vy": 0,
            "vx": 0,
            "r": 1.3373632700057572,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Davide Ceneda",
            "value": 108,
            "numPapers": 47,
            "cluster": "5",
            "visible": 1,
            "index": 373,
            "x": -190.55279849176495,
            "y": 32.24330918123657,
            "vy": 0,
            "vx": 0,
            "r": 1.1243523316062176,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Theresia Gschwandtner",
            "value": 148,
            "numPapers": 30,
            "cluster": "5",
            "visible": 1,
            "index": 374,
            "x": 118.88649389385566,
            "y": -152.69578111272824,
            "vy": 0,
            "vx": 0,
            "r": 1.1704087507196315,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Thorsten May",
            "value": 157,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 375,
            "x": 15.501973031350152,
            "y": 193.1571609651977,
            "vy": 0,
            "vx": 0,
            "r": 1.1807714450201496,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Hans-Jörg Schulz",
            "value": 277,
            "numPapers": 46,
            "cluster": "4",
            "visible": 1,
            "index": 376,
            "x": -142.0952909961422,
            "y": -132.13223784043646,
            "vy": 0,
            "vx": 0,
            "r": 1.3189407023603914,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Christian Tominski",
            "value": 345,
            "numPapers": 64,
            "cluster": "3",
            "visible": 1,
            "index": 377,
            "x": 194.28819552748675,
            "y": 1.4481293702843117,
            "vy": 0,
            "vx": 0,
            "r": 1.3972366148531952,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Characterizing Guidance in Visual Analytics",
                "DOI": "10.1109/tvcg.2016.2598468",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598468",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.",
                "AuthorNamesDeduped": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorNames": "Davide Ceneda;Theresia Gschwandtner;Thorsten May;Silvia Miksch;Hans-Jörg Schulz;Marc Streit;Christian Tominski",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Fraunhofer IGD, Darmstadt, Germany;Vienna University of Technology, Austria;University of Rostock, Germany;Johannes Kepler University, Linz, Austria;University of Rostock, Germany",
                "InternalReferences": "0.1109/visual.2000.885678;10.1109/tvcg.2015.2467191;10.1109/visual.1990.146375;10.1109/tvcg.2014.2346260;10.1109/tvcg.2014.2346481;10.1109/infvis.2004.2;10.1109/tvcg.2013.120;10.1109/visual.1997.663889;10.1109/tvcg.2015.2467691;10.1109/visual.2002.1183803;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.174;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "Visual analytics;guidance model;assistance;user support",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 3130,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 958,
                "i": [
                    958
                ]
            }
        },
        {
            "name": "Mennatallah El-Assady",
            "value": 176,
            "numPapers": 89,
            "cluster": "5",
            "visible": 1,
            "index": 378,
            "x": -144.43118645078584,
            "y": 130.34428403431554,
            "vy": 0,
            "vx": 0,
            "r": 1.2026482440990214,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Comparison of Language Model Adaptation",
                "DOI": "10.1109/tvcg.2022.3209458",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209458",
                "FirstPage": 1178,
                "LastPage": 1188,
                "PaperType": "J",
                "Abstract": "Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Thus, adapters have recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters with a reduced training time and simple parameter composition. The simplicity of adapter training and composition comes along with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and detected, among others, the need for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show that, for instance, an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias where words (even gender-independent words such as countries) become more similar to female- than male pronouns. We demonstrate that these are artifacts of context-0 embeddings, and the adapter effectively eliminates the gender information from the contextualized word representations.",
                "AuthorNamesDeduped": "Rita Sevastjanova;Eren Cakmak;Shauli Ravfogel;Ryan Cotterell;Mennatallah El-Assady",
                "AuthorNames": "Rita Sevastjanova;Eren Cakmak;Shauli Ravfogel;Ryan Cotterell;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Bar-Ilan University, Israel;ETH, Israel;ETH, AI Center, Israel",
                "InternalReferences": "0.1109/tvcg.2020.3028976;10.1109/tvcg.2017.2744199;10.1109/vast.2018.8802454;10.1109/tvcg.2017.2745141;10.1109/tvcg.2018.2865230;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865044",
                "AuthorKeywords": "Language Model Adaptation,Adapter,Word Embeddings,Sequence Classification,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 592,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 178,
                "i": [
                    178
                ]
            }
        },
        {
            "name": "Fabian Sperrle",
            "value": 88,
            "numPapers": 40,
            "cluster": "5",
            "visible": 1,
            "index": 379,
            "x": 18.477123229042594,
            "y": -193.9293580590076,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "VIANA: Visual Interactive Annotation of Argumentation",
                "DOI": "10.1109/vast47406.2019.8986917",
                "Link": "http://dx.doi.org/10.1109/VAST47406.2019.8986917",
                "FirstPage": 11,
                "LastPage": 22,
                "PaperType": "C",
                "Abstract": "Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.",
                "AuthorNamesDeduped": "Fabian Sperrle;Rita Sevastjanova;Rebecca Kehlbeck;Mennatallah El-Assady",
                "AuthorNames": "Fabian Sperrle;Rita Sevastjanova;Rebecca Kehlbeck;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/vast.2012.6400485;10.1109/tvcg.2006.156;10.1109/tvcg.2019.2934654;10.1109/tvcg.2017.2745080;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2467531;10.1109/tvcg.2007.70539;10.1109/tvcg.2008.127;10.1109/tvcg.2014.2346677;10.1109/tvcg.2015.2467759;10.1109/tvcg.2012.262",
                "AuthorKeywords": "Argumentation annotation,machine learning,user interaction,layered interfaces,semantic transitions",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 390,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 633,
                "i": [
                    633
                ]
            }
        },
        {
            "name": "Christoph Heinzl",
            "value": 269,
            "numPapers": 61,
            "cluster": "6",
            "visible": 1,
            "index": 380,
            "x": 117.52748333711034,
            "y": 155.68330244584757,
            "vy": 0,
            "vx": 0,
            "r": 1.3097294185377086,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Visual Parameter Space Analysis: A Conceptual Framework",
                "DOI": "10.1109/tvcg.2014.2346321",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346321",
                "FirstPage": 2161,
                "LastPage": 2170,
                "PaperType": "J",
                "Abstract": "Various case studies in different application domains have shown the great potential of visual parameter space analysis to support validating and using simulation models. In order to guide and systematize research endeavors in this area, we provide a conceptual framework for visual parameter space analysis problems. The framework is based on our own experience and a structured analysis of the visualization literature. It contains three major components: (1) a data flow model that helps to abstractly describe visual parameter space analysis problems independent of their application domain; (2) a set of four navigation strategies of how parameter space analysis can be supported by visualization tools; and (3) a characterization of six analysis tasks. Based on our framework, we analyze and classify the current body of literature, and identify three open research gaps in visual parameter space analysis. The framework and its discussion are meant to support visualization designers and researchers in characterizing parameter space analysis problems and to guide their design and evaluation processes.",
                "AuthorNamesDeduped": "Michael Sedlmair;Christoph Heinzl;Stefan Bruckner;Harald Piringer;Torsten Möller",
                "AuthorNames": "Michael Sedlmair;Christoph Heinzl;Stefan Bruckner;Harald Piringer;Torsten Möller",
                "AuthorAffiliation": "University of Vienna;University of Applied Sciences Upper Austria;University of Bergen;VRVis;University of Vienna",
                "InternalReferences": "0.1109/infvis.1995.528680;10.1109/tvcg.2010.177;10.1109/tvcg.2008.145;10.1109/tvcg.2012.219;10.1109/tvcg.2009.155;10.1109/tvcg.2010.223;10.1109/tvcg.2012.224;10.1109/tvcg.2012.213;10.1109/tvcg.2010.190;10.1109/infvis.2005.1532136;10.1109/visual.1993.398859;10.1109/vast.2009.5333431;10.1109/tvcg.2007.70581;10.1109/tvcg.2013.142;10.1109/vast.2010.5652392;10.1109/infvis.2005.1532142;10.1109/tvcg.2013.130;10.1109/tvcg.2013.147;10.1109/tvcg.2013.124;10.1109/tvcg.2012.190;10.1109/tvcg.2009.111;10.1109/tvcg.2011.229;10.1109/tvcg.2013.157;10.1109/tvcg.2013.125;10.1109/vast.2011.6102450;10.1109/visual.2005.1532788;10.1109/tvcg.2013.126;10.1109/tvcg.2011.248;10.1109/tvcg.2010.214;10.1109/tvcg.2009.170;10.1109/vast.2011.6102457;10.1109/tvcg.2013.120;10.1109/tvcg.2011.253",
                "AuthorKeywords": "Parameter space analysis, input-output model, simulation, task characterization, literature analysis",
                "AminerCitationCount": 205,
                "CitationCountCrossRef": 146,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 2465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1166,
                "i": [
                    1166
                ]
            }
        },
        {
            "name": "Stefan Bruckner",
            "value": 757,
            "numPapers": 196,
            "cluster": "6",
            "visible": 1,
            "index": 381,
            "x": -192.07557080873784,
            "y": -35.45384462223437,
            "vy": 0,
            "vx": 0,
            "r": 1.8716177317213587,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "ViSlang: A System for Interpreted Domain-Specific Languages for Scientific Visualization",
                "DOI": "10.1109/tvcg.2014.2346318",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346318",
                "FirstPage": 2388,
                "LastPage": 2396,
                "PaperType": "J",
                "Abstract": "Researchers from many domains use scientific visualization in their daily practice. Existing implementations of algorithms usually come with a graphical user interface (high-level interface), or as software library or source code (low-level interface). In this paper we present a system that integrates domain-specific languages (DSLs) and facilitates the creation of new DSLs. DSLs provide an effective interface for domain scientists avoiding the difficulties involved with low-level interfaces and at the same time offering more flexibility than high-level interfaces. We describe the design and implementation of ViSlang, an interpreted language specifically tailored for scientific visualization. A major contribution of our design is the extensibility of the ViSlang language. Novel DSLs that are tailored to the problems of the domain can be created and integrated into ViSlang. We show that our approach can be added to existing user interfaces to increase the flexibility for expert users on demand, but at the same time does not interfere with the user experience of novice users. To demonstrate the flexibility of our approach we present new DSLs for volume processing, querying and visualization. We report the implementation effort for new DSLs and compare our approach with Matlab and Python implementations in terms of run-time performance.",
                "AuthorNamesDeduped": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorNames": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorAffiliation": "KAUST;University of Bergen;Vienna University of Technology, VrVis Research Center;KAUST",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.1992.235219;10.1109/tvcg.2009.174;10.1109/tvcg.2014.2346322;10.1109/visual.2004.95;10.1109/tvcg.2011.185;10.1109/visual.2005.1532788;10.1109/visual.1992.235202;10.1109/tvcg.2008.184",
                "AuthorKeywords": "Domain-specific languages, Volume visualization, Volume visualization framework",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 767,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1220,
                "i": [
                    1220
                ]
            }
        },
        {
            "name": "Meeshu Agnihotri",
            "value": 31,
            "numPapers": 1,
            "cluster": "5",
            "visible": 1,
            "index": 382,
            "x": 165.7961449139772,
            "y": -103.73831660318895,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Heuristic Approach to Value-Driven Evaluation of Visualizations",
                "DOI": "10.1109/tvcg.2018.2865146",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865146",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.",
                "AuthorNamesDeduped": "Emily Wall;Meeshu Agnihotri;Laura E. Matzen;Kristin Divis;Michael Haass;Alex Endert;John T. Stasko",
                "AuthorNames": "Emily Wall;Meeshu Agnihotri;Laura Matzen;Kristin Divis;Michael Haass;Alex Endert;John Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US",
                "InternalReferences": "0.1109/infvis.2001.963289;10.1109/visual.2003.1250401",
                "AuthorKeywords": "Visualization evaluation,heuristics,value of visualization",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2164,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 658,
                "i": [
                    658
                ]
            }
        },
        {
            "name": "Laura E. Matzen",
            "value": 40,
            "numPapers": 1,
            "cluster": "5",
            "visible": 1,
            "index": 383,
            "x": -52.24685427428227,
            "y": 188.73332037147523,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Heuristic Approach to Value-Driven Evaluation of Visualizations",
                "DOI": "10.1109/tvcg.2018.2865146",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865146",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.",
                "AuthorNamesDeduped": "Emily Wall;Meeshu Agnihotri;Laura E. Matzen;Kristin Divis;Michael Haass;Alex Endert;John T. Stasko",
                "AuthorNames": "Emily Wall;Meeshu Agnihotri;Laura Matzen;Kristin Divis;Michael Haass;Alex Endert;John Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US",
                "InternalReferences": "0.1109/infvis.2001.963289;10.1109/visual.2003.1250401",
                "AuthorKeywords": "Visualization evaluation,heuristics,value of visualization",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2164,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 658,
                "i": [
                    658
                ]
            }
        },
        {
            "name": "Kristin Divis",
            "value": 31,
            "numPapers": 1,
            "cluster": "5",
            "visible": 1,
            "index": 384,
            "x": -89.0782336644434,
            "y": -174.68562701957714,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Heuristic Approach to Value-Driven Evaluation of Visualizations",
                "DOI": "10.1109/tvcg.2018.2865146",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865146",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.",
                "AuthorNamesDeduped": "Emily Wall;Meeshu Agnihotri;Laura E. Matzen;Kristin Divis;Michael Haass;Alex Endert;John T. Stasko",
                "AuthorNames": "Emily Wall;Meeshu Agnihotri;Laura Matzen;Kristin Divis;Michael Haass;Alex Endert;John Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US",
                "InternalReferences": "0.1109/infvis.2001.963289;10.1109/visual.2003.1250401",
                "AuthorKeywords": "Visualization evaluation,heuristics,value of visualization",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2164,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 658,
                "i": [
                    658
                ]
            }
        },
        {
            "name": "Michael Haass",
            "value": 31,
            "numPapers": 1,
            "cluster": "5",
            "visible": 1,
            "index": 385,
            "x": 183.92066592338816,
            "y": 68.7254585019077,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Heuristic Approach to Value-Driven Evaluation of Visualizations",
                "DOI": "10.1109/tvcg.2018.2865146",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865146",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "Recently, an approach for determining the value of a visualization was proposed, one moving beyond simple measurements of task accuracy and speed. The value equation contains components for the time savings a visualization provides, the insights and insightful questions it spurs, the overall essence of the data it conveys, and the confidence about the data and its domain it inspires. This articulation of value is purely descriptive, however, providing no actionable method of assessing a visualization's value. In this work, we create a heuristic-based evaluation methodology to accompany the value equation for assessing interactive visualizations. We refer to the methodology colloquially as ICE-T, based on an anagram of the four value components. Our approach breaks the four components down into guidelines, each of which is made up of a small set of low-level heuristics. Evaluators who have knowledge of visualization design principles then assess the visualization with respect to the heuristics. We conducted an initial trial of the methodology on three interactive visualizations of the same data set, each evaluated by 15 visualization experts. We found that the methodology showed promise, obtaining consistent ratings across the three visualizations and mirroring judgments of the utility of the visualizations by instructors of the course in which they were developed.",
                "AuthorNamesDeduped": "Emily Wall;Meeshu Agnihotri;Laura E. Matzen;Kristin Divis;Michael Haass;Alex Endert;John T. Stasko",
                "AuthorNames": "Emily Wall;Meeshu Agnihotri;Laura Matzen;Kristin Divis;Michael Haass;Alex Endert;John Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Sandia National Laboratories, Albuquerque, NM, US;Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US",
                "InternalReferences": "0.1109/infvis.2001.963289;10.1109/visual.2003.1250401",
                "AuthorKeywords": "Visualization evaluation,heuristics,value of visualization",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2164,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 658,
                "i": [
                    658
                ]
            }
        },
        {
            "name": "Yansong Huang",
            "value": 0,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 386,
            "x": -182.27671228207504,
            "y": 73.65595807290566,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-Objective Optimization",
                "DOI": "10.1109/tvcg.2023.3326921",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326921",
                "FirstPage": 661,
                "LastPage": 671,
                "PaperType": "J",
                "Abstract": "Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms.",
                "AuthorNamesDeduped": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorNames": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2017.2744199;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/tvcg.2014.2346578;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030347;10.1109/visual.2005.1532820;10.1109/vast50239.2020.00006;10.1109/tvcg.2021.3114790;10.1109/tvcg.2020.3030418;10.1109/tvcg.2020.3030458;10.1109/tvcg.2021.3114850;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Visual analytics,evolutionary multi-objective optimization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 60,
                "i": [
                    60
                ]
            }
        },
        {
            "name": "Zherui Zhang",
            "value": 0,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 387,
            "x": 84.76072937715807,
            "y": -177.66715722229637,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-Objective Optimization",
                "DOI": "10.1109/tvcg.2023.3326921",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326921",
                "FirstPage": 661,
                "LastPage": 671,
                "PaperType": "J",
                "Abstract": "Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms.",
                "AuthorNamesDeduped": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorNames": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2017.2744199;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/tvcg.2014.2346578;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030347;10.1109/visual.2005.1532820;10.1109/vast50239.2020.00006;10.1109/tvcg.2021.3114790;10.1109/tvcg.2020.3030418;10.1109/tvcg.2020.3030458;10.1109/tvcg.2021.3114850;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Visual analytics,evolutionary multi-objective optimization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 60,
                "i": [
                    60
                ]
            }
        },
        {
            "name": "Ao Jiao",
            "value": 0,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 388,
            "x": 57.58667826977813,
            "y": 188.50404368568084,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-Objective Optimization",
                "DOI": "10.1109/tvcg.2023.3326921",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326921",
                "FirstPage": 661,
                "LastPage": 671,
                "PaperType": "J",
                "Abstract": "Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms.",
                "AuthorNamesDeduped": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorNames": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2017.2744199;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/tvcg.2014.2346578;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030347;10.1109/visual.2005.1532820;10.1109/vast50239.2020.00006;10.1109/tvcg.2021.3114790;10.1109/tvcg.2020.3030418;10.1109/tvcg.2020.3030458;10.1109/tvcg.2021.3114850;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Visual analytics,evolutionary multi-objective optimization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 60,
                "i": [
                    60
                ]
            }
        },
        {
            "name": "Ran Cheng",
            "value": 0,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 389,
            "x": -170.0136625844717,
            "y": -100.22651612529198,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-Objective Optimization",
                "DOI": "10.1109/tvcg.2023.3326921",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326921",
                "FirstPage": 661,
                "LastPage": 671,
                "PaperType": "J",
                "Abstract": "Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms.",
                "AuthorNamesDeduped": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorNames": "Yansong Huang;Zherui Zhang;Ao Jiao;Yuxin Ma;Ran Cheng",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2017.2744199;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/tvcg.2014.2346578;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030347;10.1109/visual.2005.1532820;10.1109/vast50239.2020.00006;10.1109/tvcg.2021.3114790;10.1109/tvcg.2020.3030418;10.1109/tvcg.2020.3030458;10.1109/tvcg.2021.3114850;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Visual analytics,evolutionary multi-objective optimization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 60,
                "i": [
                    60
                ]
            }
        },
        {
            "name": "Fred Hohman",
            "value": 201,
            "numPapers": 23,
            "cluster": "1",
            "visible": 1,
            "index": 390,
            "x": 193.312500381036,
            "y": -40.991184374594106,
            "vy": 0,
            "vx": 0,
            "r": 1.231433506044905,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations",
                "DOI": "10.1109/tvcg.2019.2934659",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934659",
                "FirstPage": 1096,
                "LastPage": 1106,
                "PaperType": "J",
                "Abstract": "Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a prevalent, large-scale image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.",
                "AuthorNamesDeduped": "Fred Hohman;Haekyu Park;Caleb Robinson;Duen Horng (Polo) Chau",
                "AuthorNames": "Fred Hohman;Haekyu Park;Caleb Robinson;Duen Horng Polo Chau",
                "AuthorAffiliation": "Georgia Tech.;Georgia Tech.;Georgia Tech.;Georgia Tech.",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2009.108;10.1109/tvcg.2017.2744878",
                "AuthorKeywords": "Deep learning interpretability,visual analytics,scalable summarization,attribution graph",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2485,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 597,
                "i": [
                    597
                ]
            }
        },
        {
            "name": "Michael Behrisch 0001",
            "value": 295,
            "numPapers": 111,
            "cluster": "4",
            "visible": 1,
            "index": 391,
            "x": -115.00043980748426,
            "y": 161.01210775617213,
            "vy": 0,
            "vx": 0,
            "r": 1.3396660909614277,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models",
                "DOI": "10.1109/tvcg.2018.2865044",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865044",
                "FirstPage": 353,
                "LastPage": 363,
                "PaperType": "J",
                "Abstract": "Neural sequence-to-sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work with a five-stage blackbox pipeline that begins with encoding a source sequence to a vector space and then decoding out to a new target sequence. This process is now standard, but like many deep learning methods remains quite difficult to understand or debug. In this work, we present a visual analysis tool that allows interaction and “what if”-style exploration of trained sequence-to-sequence models through each stage of the translation process. The aim is to identify which patterns have been learned, to detect model errors, and to probe the model with counterfactual scenario. We demonstrate the utility of our tool through several real-world sequence-to-sequence use cases on large-scale models.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Sebastian Gehrmann;Michael Behrisch 0001;Adam Perer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Sebastian Gehrmann;Michael Behrisch;Adam Perer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Reseatch, MIT-IBM Watson AI Lab.;Harvard NLP group;Hatvatd Visual Computing group;IBM Reseatch, MIT-IBM Watson AI Lab.;Hatvatd Visual Computing group;Harvard NLP group",
                "InternalReferences": "0.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744478;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Explainable AI,Visual Debugging,Visual Analytics,Machine Learning,Deep Learning,NLP",
                "AminerCitationCount": 180,
                "CitationCountCrossRef": 108,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2314,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 730,
                "i": [
                    730
                ]
            }
        },
        {
            "name": "Suphanut Jamonnak",
            "value": 5,
            "numPapers": 47,
            "cluster": "1",
            "visible": 1,
            "index": 392,
            "x": -23.9949570869972,
            "y": -196.65767728312355,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "OW-Adapter: Human-Assisted Open-World Object Detection with a Few Examples",
                "DOI": "10.1109/tvcg.2023.3326577",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326577",
                "FirstPage": 694,
                "LastPage": 704,
                "PaperType": "J",
                "Abstract": "Open-world object detection (OWOD) is an emerging computer vision problem that involves not only the identification of predefined object classes, like what general object detectors do, but also detects new unknown objects simultaneously. Recently, several end-to-end deep learning models have been proposed to address the OWOD problem. However, these approaches face several challenges: a) significant changes in both network architecture and training procedure are required; b) they are trained from scratch, which can not leverage existing pre-trained general detectors; c) costly annotations for all unknown classes are needed. To overcome these challenges, we present a visual analytic framework called OW-Adapter. It acts as an adaptor to enable pre-trained general object detectors to handle the OWOD problem. Specifically, OW-Adapter is designed to identify, summarize, and annotate unknown examples with minimal human effort. Moreover, we introduce a lightweight classifier to learn newly annotated unknown classes and plug the classifier into pre-trained general detectors to detect unknown objects. We demonstrate the effectiveness of our framework through two case studies of different domains, including common object recognition and autonomous driving. The studies show that a simple yet powerful adaptor can extend the capability of pre-trained general detectors to detect unknown objects and improve the performance on known classes simultaneously.",
                "AuthorNamesDeduped": "Suphanut Jamonnak;Jiajing Guo;Wenbin He;Liang Gou;Liu Ren",
                "AuthorNames": "Suphanut Jamonnak;Jiajing Guo;Wenbin He;Liang Gou;Liu Ren",
                "AuthorAffiliation": "Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA",
                "InternalReferences": "10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744683;10.1109/tvcg.2015.2467196;10.1109/tvcg.2020.3030350;10.1109/tvcg.2021.3114855;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2022.3209466;10.1109/tvcg.2021.3114683;10.1109/tvcg.2021.3114793;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/vast.2017.8585721;10.1109/tvcg.2019.2934267;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2021.3114794;10.1109/tvcg.2017.2744685;10.1109/vast47406.2019.8986943;10.1109/vast50239.2020.00007;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Open world learning,object detection,continuous learning,human-assisted AI",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 415,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 62,
                "i": [
                    62
                ]
            }
        },
        {
            "name": "Jiajing Guo",
            "value": 0,
            "numPapers": 25,
            "cluster": "1",
            "visible": 1,
            "index": 393,
            "x": 150.72512730090324,
            "y": 128.96486343235708,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "OW-Adapter: Human-Assisted Open-World Object Detection with a Few Examples",
                "DOI": "10.1109/tvcg.2023.3326577",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326577",
                "FirstPage": 694,
                "LastPage": 704,
                "PaperType": "J",
                "Abstract": "Open-world object detection (OWOD) is an emerging computer vision problem that involves not only the identification of predefined object classes, like what general object detectors do, but also detects new unknown objects simultaneously. Recently, several end-to-end deep learning models have been proposed to address the OWOD problem. However, these approaches face several challenges: a) significant changes in both network architecture and training procedure are required; b) they are trained from scratch, which can not leverage existing pre-trained general detectors; c) costly annotations for all unknown classes are needed. To overcome these challenges, we present a visual analytic framework called OW-Adapter. It acts as an adaptor to enable pre-trained general object detectors to handle the OWOD problem. Specifically, OW-Adapter is designed to identify, summarize, and annotate unknown examples with minimal human effort. Moreover, we introduce a lightweight classifier to learn newly annotated unknown classes and plug the classifier into pre-trained general detectors to detect unknown objects. We demonstrate the effectiveness of our framework through two case studies of different domains, including common object recognition and autonomous driving. The studies show that a simple yet powerful adaptor can extend the capability of pre-trained general detectors to detect unknown objects and improve the performance on known classes simultaneously.",
                "AuthorNamesDeduped": "Suphanut Jamonnak;Jiajing Guo;Wenbin He;Liang Gou;Liu Ren",
                "AuthorNames": "Suphanut Jamonnak;Jiajing Guo;Wenbin He;Liang Gou;Liu Ren",
                "AuthorAffiliation": "Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA;Bosch Research North America, USA",
                "InternalReferences": "10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744683;10.1109/tvcg.2015.2467196;10.1109/tvcg.2020.3030350;10.1109/tvcg.2021.3114855;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2022.3209466;10.1109/tvcg.2021.3114683;10.1109/tvcg.2021.3114793;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/vast.2017.8585721;10.1109/tvcg.2019.2934267;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2021.3114794;10.1109/tvcg.2017.2744685;10.1109/vast47406.2019.8986943;10.1109/vast50239.2020.00007;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Open world learning,object detection,continuous learning,human-assisted AI",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 415,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 62,
                "i": [
                    62
                ]
            }
        },
        {
            "name": "Wenbin He",
            "value": 101,
            "numPapers": 57,
            "cluster": "1",
            "visible": 1,
            "index": 394,
            "x": -198.5062834529368,
            "y": 6.72721559802504,
            "vy": 0,
            "vx": 0,
            "r": 1.1162924582613702,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Concept Programming: A Visual Analytics Approach to Injecting Human Intelligence at Scale",
                "DOI": "10.1109/tvcg.2022.3209466",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209466",
                "FirstPage": 74,
                "LastPage": 83,
                "PaperType": "J",
                "Abstract": "Data-centric AI has emerged as a new research area to systematically engineer the data to land AI models for real-world applications. As a core method for data-centric AI, data programming helps experts inject domain knowledge into data and label data at scale using carefully designed labeling functions (e.g., heuristic rules, logistics). Though data programming has shown great success in the NLP domain, it is challenging to program image data because of a) the challenge to describe images using visual vocabulary without human annotations and b) lacking efficient tools for data programming of images. We present Visual Concept Programming, a first-of-its-kind visual analytics approach of using visual concepts to program image data at scale while requiring a few human efforts. Our approach is built upon three unique components. It first uses a self-supervised learning approach to learn visual representation at the pixel level and extract a dictionary of visual concepts from images without using any human annotations. The visual concepts serve as building blocks of labeling functions for experts to inject their domain knowledge. We then design interactive visualizations to explore and understand visual concepts and compose labeling functions with concepts without writing code. Finally, with the composed labeling functions, users can label the image data at scale and use the labeled data to refine the pixel-wise visual representation and concept quality. We evaluate the learned pixel-wise visual representation for the downstream task of semantic segmentation to show the effectiveness and usefulness of our approach. In addition, we demonstrate how our approach tackles real-world problems of image retrieval for autonomous driving.",
                "AuthorNamesDeduped": "Md. Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren",
                "AuthorNames": "Md Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren",
                "AuthorAffiliation": "University of Maryland, USA;Bosch Research North America, USA;Robert Bosch GmbH, Germany;Bosch Research North America, USA;Bosch Research North America, USA",
                "InternalReferences": "0.1109/tvcg.2017.2744818;10.1109/tvcg.2020.3030350;10.1109/tvcg.2021.3114855;10.1109/tvcg.2019.2934659;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114858;10.1109/tvcg.2017.2744158;10.1109/tvcg.2019.2934619;10.1109/vast47406.2019.8986943;10.1109/tvcg.2021.3114837",
                "AuthorKeywords": "Visual concept programming,data-centric AI,data programming,self-supervised learning,semantic segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1576,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 164,
                "i": [
                    164
                ]
            }
        },
        {
            "name": "Arvind Kumar Shekar",
            "value": 90,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 395,
            "x": 142.00782979768138,
            "y": -139.2256308161423,
            "vy": 0,
            "vx": 0,
            "r": 1.1036269430051813,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Concept Programming: A Visual Analytics Approach to Injecting Human Intelligence at Scale",
                "DOI": "10.1109/tvcg.2022.3209466",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209466",
                "FirstPage": 74,
                "LastPage": 83,
                "PaperType": "J",
                "Abstract": "Data-centric AI has emerged as a new research area to systematically engineer the data to land AI models for real-world applications. As a core method for data-centric AI, data programming helps experts inject domain knowledge into data and label data at scale using carefully designed labeling functions (e.g., heuristic rules, logistics). Though data programming has shown great success in the NLP domain, it is challenging to program image data because of a) the challenge to describe images using visual vocabulary without human annotations and b) lacking efficient tools for data programming of images. We present Visual Concept Programming, a first-of-its-kind visual analytics approach of using visual concepts to program image data at scale while requiring a few human efforts. Our approach is built upon three unique components. It first uses a self-supervised learning approach to learn visual representation at the pixel level and extract a dictionary of visual concepts from images without using any human annotations. The visual concepts serve as building blocks of labeling functions for experts to inject their domain knowledge. We then design interactive visualizations to explore and understand visual concepts and compose labeling functions with concepts without writing code. Finally, with the composed labeling functions, users can label the image data at scale and use the labeled data to refine the pixel-wise visual representation and concept quality. We evaluate the learned pixel-wise visual representation for the downstream task of semantic segmentation to show the effectiveness and usefulness of our approach. In addition, we demonstrate how our approach tackles real-world problems of image retrieval for autonomous driving.",
                "AuthorNamesDeduped": "Md. Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren",
                "AuthorNames": "Md Naimul Hoque;Wenbin He;Arvind Kumar Shekar;Liang Gou;Liu Ren",
                "AuthorAffiliation": "University of Maryland, USA;Bosch Research North America, USA;Robert Bosch GmbH, Germany;Bosch Research North America, USA;Bosch Research North America, USA",
                "InternalReferences": "0.1109/tvcg.2017.2744818;10.1109/tvcg.2020.3030350;10.1109/tvcg.2021.3114855;10.1109/tvcg.2019.2934659;10.1109/tvcg.2018.2864843;10.1109/tvcg.2021.3114858;10.1109/tvcg.2017.2744158;10.1109/tvcg.2019.2934619;10.1109/vast47406.2019.8986943;10.1109/tvcg.2021.3114837",
                "AuthorKeywords": "Visual concept programming,data-centric AI,data programming,self-supervised learning,semantic segmentation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1576,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 164,
                "i": [
                    164
                ]
            }
        },
        {
            "name": "Lincan Zou",
            "value": 76,
            "numPapers": 13,
            "cluster": "1",
            "visible": 1,
            "index": 396,
            "x": -10.680068239680088,
            "y": 198.8364557680401,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection",
                "DOI": "10.1109/tvcg.2020.3030350",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030350",
                "FirstPage": 261,
                "LastPage": 271,
                "PaperType": "J",
                "Abstract": "Traffic light detection is crucial for environment perception and decision-making in autonomous driving. State-of-the-art detectors are built upon deep Convolutional Neural Networks (CNNs) and have exhibited promising performance. However, one looming concern with CNN based detectors is how to thoroughly evaluate the performance of accuracy and robustness before they can be deployed to autonomous vehicles. In this work, we propose a visual analytics system, VATLD, equipped with a disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications. The disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization, and the semantic adversarial learning efficiently exposes interpretable robustness risks and enables minimal human interaction for actionable insights. We also demonstrate the effectiveness of various performance improvement strategies derived from actionable insights with our visual analytics system, VATLD, and illustrate some practical implications for safety-critical applications in autonomous driving.",
                "AuthorNamesDeduped": "Liang Gou;Lincan Zou;Nanxiang Li;Michael Hofmann 0010;Arvind Kumar Shekar;Axel Wendt;Liu Ren",
                "AuthorNames": "Liang Gou;Lincan Zou;Nanxiang Li;Michael Hofmann;Arvind Kumar Shekar;Axel Wendt;Liu Ren",
                "AuthorAffiliation": "Robert Bosch Research and Technology Center, USA;Robert Bosch Research and Technology Center, USA;Robert Bosch Research and Technology Center, USA;Robert Bosch GmbH, Germany;Robert Bosch GmbH, Germany;Robert Bosch GmbH, Germany;Robert Bosch Research and Technology Center, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864812;10.1109/tvcg.2018.2864504;10.1109/tvcg.2017.2744683",
                "AuthorKeywords": "Traffic light detection,representation learning,semantic adversarial learning,model diagnosing,autonomous driving",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2266,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 465,
                "i": [
                    465
                ]
            }
        },
        {
            "name": "Sebastian Gehrmann",
            "value": 320,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 397,
            "x": -126.59628697102448,
            "y": -154.02395958145604,
            "vy": 0,
            "vx": 0,
            "r": 1.3684513529073115,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models",
                "DOI": "10.1109/tvcg.2018.2865044",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865044",
                "FirstPage": 353,
                "LastPage": 363,
                "PaperType": "J",
                "Abstract": "Neural sequence-to-sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work with a five-stage blackbox pipeline that begins with encoding a source sequence to a vector space and then decoding out to a new target sequence. This process is now standard, but like many deep learning methods remains quite difficult to understand or debug. In this work, we present a visual analysis tool that allows interaction and “what if”-style exploration of trained sequence-to-sequence models through each stage of the translation process. The aim is to identify which patterns have been learned, to detect model errors, and to probe the model with counterfactual scenario. We demonstrate the utility of our tool through several real-world sequence-to-sequence use cases on large-scale models.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Sebastian Gehrmann;Michael Behrisch 0001;Adam Perer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Sebastian Gehrmann;Michael Behrisch;Adam Perer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Reseatch, MIT-IBM Watson AI Lab.;Harvard NLP group;Hatvatd Visual Computing group;IBM Reseatch, MIT-IBM Watson AI Lab.;Hatvatd Visual Computing group;Harvard NLP group",
                "InternalReferences": "0.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744478;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Explainable AI,Visual Debugging,Visual Analytics,Machine Learning,Deep Learning,NLP",
                "AminerCitationCount": 180,
                "CitationCountCrossRef": 108,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2314,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 730,
                "i": [
                    730
                ]
            }
        },
        {
            "name": "Alexander M. Rush",
            "value": 328,
            "numPapers": 25,
            "cluster": "1",
            "visible": 1,
            "index": 398,
            "x": 197.6379850349966,
            "y": 28.093181936307257,
            "vy": 0,
            "vx": 0,
            "r": 1.3776626367299942,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models",
                "DOI": "10.1109/tvcg.2022.3209479",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209479",
                "FirstPage": 1146,
                "LastPage": 1156,
                "PaperType": "J",
                "Abstract": "State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorNames": "Hendrik Strobelt;Albert Webson;Victor Sanh;Benjamin Hoover;Johanna Beyer;Hanspeter Pfister;Alexander M. Rush",
                "AuthorAffiliation": "IBM Research, China;Brown University, USA;Huggingface, USA;IBM Research, China;Harvard SEAS, USA;Harvard SEAS, USA;Huggingface, USA",
                "InternalReferences": "0.1109/tvcg.2020.3028976;10.1109/tvcg.2021.3114683;10.1109/tvcg.2018.2865230;10.1109/vast.2017.8585721;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Natural language processing,language modeling,zero-shot models",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 3637,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 133,
                "i": [
                    133
                ]
            }
        },
        {
            "name": "Yang Wang",
            "value": 174,
            "numPapers": 26,
            "cluster": "1",
            "visible": 1,
            "index": 399,
            "x": -164.9153024792708,
            "y": 112.92892901365275,
            "vy": 0,
            "vx": 0,
            "r": 1.2003454231433506,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models",
                "DOI": "10.1109/tvcg.2018.2864499",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864499",
                "FirstPage": 364,
                "LastPage": 373,
                "PaperType": "J",
                "Abstract": "Interpretation and diagnosis of machine learning models have gained renewed interest in recent years with breakthroughs in new approaches. We present Manifold, a framework that utilizes visual analysis techniques to support interpretation, debugging, and comparison of machine learning models in a more transparent and interactive manner. Conventional techniques usually focus on visualizing the internal logic of a specific model type (i.e., deep neural networks), lacking the ability to extend to a more complex scenario where different model types are integrated. To this end, Manifold is designed as a generic framework that does not rely on or access the internal logic of the model and solely observes the input (i.e., instances or features) and the output (i.e., the predicted result and probability distribution). We describe the workflow of Manifold as an iterative process consisting of three major phases that are commonly involved in the model development and diagnosis process: inspection (hypothesis), explanation (reasoning), and refinement (verification). The visual components supporting these tasks include a scatterplot-based visual summary that overviews the models' outcome and a customizable tabular view that reveals feature discrimination. We demonstrate current applications of the framework on the classification and regression tasks and discuss other potential machine learning use scenarios where Manifold can be applied.",
                "AuthorNamesDeduped": "Jiawei Zhang 0003;Yang Wang;Piero Molino;Lezhi Li;David S. Ebert",
                "AuthorNames": "Jiawei Zhang;Yang Wang;Piero Molino;Lezhi Li;David S. Ebert",
                "AuthorAffiliation": "Purdue University;Uber Technologies, Inc;Uber AI Labs;Uber Technologies, Inc;Purdue University",
                "InternalReferences": "0.1109/tvcg.2014.2346660;10.1109/vast.2015.7347637;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.212;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/tvcg.2013.125;10.1109/tvcg.2014.2346578;10.1109/tvcg.2009.111;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/infvis.2000.885086;10.1109/tvcg.2017.2744158;10.1109/tvcg.2016.2598829;10.1109/tvcg.2017.2744878",
                "AuthorKeywords": "Interactive machine learning,performance analysis,model comparison,model debugging",
                "AminerCitationCount": 181,
                "CitationCountCrossRef": 118,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 2742,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 728,
                "i": [
                    728
                ]
            }
        },
        {
            "name": "Andrea Batch",
            "value": 87,
            "numPapers": 34,
            "cluster": "5",
            "visible": 1,
            "index": 400,
            "x": 45.37770279845648,
            "y": -194.91245236960862,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Information Olfactation: Harnessing Scent to Convey Data",
                "DOI": "10.1109/tvcg.2018.2865237",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865237",
                "FirstPage": 726,
                "LastPage": 736,
                "PaperType": "J",
                "Abstract": "Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and their olfactory channels that are available to designers. To exemplify this idea, we present viScent: A six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. Finally, we present three applications that make use of the viScent system: A 2D graph visualization, a 2D line and point chart, and an immersive analytics graph visualization in 3D virtual reality. We close the paper with a review of possible extensions of viScent and applications of information olfactation for general visualization beyond the examples in this paper.",
                "AuthorNamesDeduped": "Biswaksen Patnaik;Andrea Batch;Niklas Elmqvist",
                "AuthorNames": "Biswaksen Patnaik;Andrea Batch;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland at College Park, College Park, MD, US;University of Maryland at College Park, College Park, MD, US;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/tvcg.2016.2599107",
                "AuthorKeywords": "Olfaction,smell,scent,olfactory display,immersive analytics,immersion",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 104,
                "DownloadsXplore": 1412,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 666,
                "i": [
                    666
                ]
            }
        },
        {
            "name": "Peter W. S. Butcher",
            "value": 0,
            "numPapers": 32,
            "cluster": "5",
            "visible": 1,
            "index": 401,
            "x": 98.3238860280636,
            "y": 174.59213452025946,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Wizualization: A “Hard Magic” Visualization System for Immersive and Ubiquitous Analytics",
                "DOI": "10.1109/tvcg.2023.3326580",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326580",
                "FirstPage": 507,
                "LastPage": 517,
                "PaperType": "J",
                "Abstract": "What if magic could be used as an effective metaphor to perform data visualization and analysis using speech and gestures while mobile and on-the-go? In this paper, we introduce Wizualization, a visual analytics system for eXtended Reality (XR) that enables an analyst to author and interact with visualizations using such a magic system through gestures, speech commands, and touch interaction. Wizualization is a rendering system for current XR headsets that comprises several components: a cross-device (or Arcane Focuses) infrastructure for signalling and view control (Weave), a code notebook (Spellbook), and a grammar of graphics for XR (Optomancy). The system offers users three modes of input: gestures, spoken commands, and materials. We demonstrate Wizualization and its components using a motivating scenario on collaborative data analysis of pandemic data across time and space.",
                "AuthorNamesDeduped": "Andrea Batch;Peter W. S. Butcher;Panagiotis D. Ritsos;Niklas Elmqvist",
                "AuthorNames": "Andrea Batch;Peter W. S. Butcher;Panagiotis D. Ritsos;Niklas Elmqvist",
                "AuthorAffiliation": "U.S. Bureau of Economic Analysis, Washington, D.C., United States;Bangor University, Bangor, United Kingdom;Bangor University, Bangor, United Kingdom;Aarhus University, Aarhus, Denmark",
                "InternalReferences": "10.1109/tvcg.2017.2745941;10.1109/vast.2016.7883506;10.1109/tvcg.2019.2934803;10.1109/tvcg.2019.2934785;10.1109/tvcg.2019.2934415;10.1109/tvcg.2015.2468292;10.1109/tvcg.2012.204;10.1109/tvcg.2013.191;10.1109/tvcg.2013.225;10.1109/tvcg.2020.3030378;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2015.2467153;10.1109/tvcg.2018.2865152;10.1109/tvcg.2021.3114844;10.1109/tvcg.2007.70515;10.1109/tvcg.2019.2934668;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Immersive analytics,situated analytics,ubiquitous analytics,gestural interaction,voice interaction",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 389,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 63,
                "i": [
                    63
                ]
            }
        },
        {
            "name": "Max Piochowiak",
            "value": 0,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 402,
            "x": -190.67327436384585,
            "y": -62.39953880734663,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Fast Compressed Segmentation Volumes for Scientific Visualization",
                "DOI": "10.1109/tvcg.2023.3326573",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326573",
                "FirstPage": 12,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels, and a sequence of operations to reconstruct the brick which is further compressed using rANS-entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second both for compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method for various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.",
                "AuthorNamesDeduped": "Max Piochowiak;Carsten Dachsbacher",
                "AuthorNames": "Max Piochowiak;Carsten Dachsbacher",
                "AuthorAffiliation": "Karlsruhe Institute of Technology, Germany;Karlsruhe Institute of Technology, Germany",
                "InternalReferences": "10.1109/tvcg.2015.2467441;10.1109/tvcg.2020.3030451;10.1109/tvcg.2013.142;10.1109/tvcg.2018.2864847;10.1109/tvcg.2017.2744238;10.1109/tvcg.2012.240;10.1109/tvcg.2006.143",
                "AuthorKeywords": "Segmentation volumes,lossless compression,volume rendering",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 375,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 64,
                "i": [
                    64
                ]
            }
        },
        {
            "name": "Ali K. Al-Awami",
            "value": 142,
            "numPapers": 59,
            "cluster": "6",
            "visible": 1,
            "index": 403,
            "x": 182.9736950846796,
            "y": -82.88924482138012,
            "vy": 0,
            "vx": 0,
            "r": 1.1635002878526195,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects",
                "DOI": "10.1109/tvcg.2015.2467441",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467441",
                "FirstPage": 738,
                "LastPage": 746,
                "PaperType": "J",
                "Abstract": "In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.",
                "AuthorNamesDeduped": "Ali K. Al-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorNames": "Ali K. Ai-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST);School of Engineering and Applied Sciences, Harvard University;School of Engineering and Applied Sciences, Harvard University;School of Medicine, Boston University;Center for Brain Science, Harvard University;School of Engineering and Applied Sciences, Harvard University;King Abdullah University of Science and Technology (KAUST)",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/visual.2005.1532788;10.1109/tvcg.2013.142;10.1109/tvcg.2009.121;10.1109/tvcg.2012.240;10.1109/tvcg.2014.2346371;10.1109/tvcg.2013.174;10.1109/tvcg.2014.2346249;10.1109/tvcg.2007.70584",
                "AuthorKeywords": "Neuroscience, Segmentation, Proofreading, Data and Provenance Tracking",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1052,
                "i": [
                    1052
                ]
            }
        },
        {
            "name": "Markus Hadwiger",
            "value": 421,
            "numPapers": 164,
            "cluster": "6",
            "visible": 1,
            "index": 404,
            "x": -79.02597185984142,
            "y": 184.94565626585432,
            "vy": 0,
            "vx": 0,
            "r": 1.4847438111686817,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "ViSlang: A System for Interpreted Domain-Specific Languages for Scientific Visualization",
                "DOI": "10.1109/tvcg.2014.2346318",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346318",
                "FirstPage": 2388,
                "LastPage": 2396,
                "PaperType": "J",
                "Abstract": "Researchers from many domains use scientific visualization in their daily practice. Existing implementations of algorithms usually come with a graphical user interface (high-level interface), or as software library or source code (low-level interface). In this paper we present a system that integrates domain-specific languages (DSLs) and facilitates the creation of new DSLs. DSLs provide an effective interface for domain scientists avoiding the difficulties involved with low-level interfaces and at the same time offering more flexibility than high-level interfaces. We describe the design and implementation of ViSlang, an interpreted language specifically tailored for scientific visualization. A major contribution of our design is the extensibility of the ViSlang language. Novel DSLs that are tailored to the problems of the domain can be created and integrated into ViSlang. We show that our approach can be added to existing user interfaces to increase the flexibility for expert users on demand, but at the same time does not interfere with the user experience of novice users. To demonstrate the flexibility of our approach we present new DSLs for volume processing, querying and visualization. We report the implementation effort for new DSLs and compare our approach with Matlab and Python implementations in terms of run-time performance.",
                "AuthorNamesDeduped": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorNames": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorAffiliation": "KAUST;University of Bergen;Vienna University of Technology, VrVis Research Center;KAUST",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.1992.235219;10.1109/tvcg.2009.174;10.1109/tvcg.2014.2346322;10.1109/visual.2004.95;10.1109/tvcg.2011.185;10.1109/visual.2005.1532788;10.1109/visual.1992.235202;10.1109/tvcg.2008.184",
                "AuthorKeywords": "Domain-specific languages, Volume visualization, Volume visualization framework",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 767,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1220,
                "i": [
                    1220
                ]
            }
        },
        {
            "name": "Carsten Dachsbacher",
            "value": 22,
            "numPapers": 43,
            "cluster": "6",
            "visible": 1,
            "index": 405,
            "x": -66.74004788646732,
            "y": -189.98885758936507,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Fast Compressed Segmentation Volumes for Scientific Visualization",
                "DOI": "10.1109/tvcg.2023.3326573",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326573",
                "FirstPage": 12,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels, and a sequence of operations to reconstruct the brick which is further compressed using rANS-entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second both for compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method for various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.",
                "AuthorNamesDeduped": "Max Piochowiak;Carsten Dachsbacher",
                "AuthorNames": "Max Piochowiak;Carsten Dachsbacher",
                "AuthorAffiliation": "Karlsruhe Institute of Technology, Germany;Karlsruhe Institute of Technology, Germany",
                "InternalReferences": "10.1109/tvcg.2015.2467441;10.1109/tvcg.2020.3030451;10.1109/tvcg.2013.142;10.1109/tvcg.2018.2864847;10.1109/tvcg.2017.2744238;10.1109/tvcg.2012.240;10.1109/tvcg.2006.143",
                "AuthorKeywords": "Segmentation volumes,lossless compression,volume rendering",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 375,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 64,
                "i": [
                    64
                ]
            }
        },
        {
            "name": "Yangqiu Song",
            "value": 353,
            "numPapers": 28,
            "cluster": "1",
            "visible": 1,
            "index": 406,
            "x": 177.766453116053,
            "y": 95.12669523608045,
            "vy": 0,
            "vx": 0,
            "r": 1.406447898675878,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "KG4Vis: A Knowledge Graph-Based Approach for Visualization Recommendation",
                "DOI": "10.1109/tvcg.2021.3114863",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114863",
                "FirstPage": 195,
                "LastPage": 205,
                "PaperType": "J",
                "Abstract": "Visualization recommendation or automatic visualization generation can significantly lower the barriers for general users to rapidly create effective data visualizations, especially for those users without a background in data visualizations. However, existing rule-based approaches require tedious manual specifications of visualization rules by visualization experts. Other machine learning-based approaches often work like black-box and are difficult to understand why a specific visualization is recommended, limiting the wider adoption of these approaches. This paper fills the gap by presenting KG4Vis, a knowledge graph (KG)-based approach for visualization recommendation. It does not require manual specifications of visualization rules and can also guarantee good explainability. Specifically, we propose a framework for building knowledge graphs, consisting of three types of entities (i.e., data features, data columns and visualization design choices) and the relations between them, to model the mapping rules between data and effective visualizations. A TransE-based embedding technique is employed to learn the embeddings of both entities and relations of the knowledge graph from existing dataset-visualization pairs. Such embeddings intrinsically model the desirable visualization rules. Then, given a new dataset, effective visualizations can be inferred from the knowledge graph with semantically meaningful rules. We conducted extensive evaluations to assess the proposed approach, including quantitative comparisons, case studies and expert interviews. The results demonstrate the effectiveness of our approach.",
                "AuthorNamesDeduped": "Haotian Li 0001;Yong Wang 0021;Songheng Zhang;Yangqiu Song;Huamin Qu",
                "AuthorNames": "Haotian Li;Yong Wang;Songheng Zhang;Yangqiu Song;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology and Singapore Management University, Hong Kong;Singapore Management University, Singapore;Singapore Management University, Singapore;Hong Kong University of Science and Technology, Hong Kong;Hong Kong University of Science and Technology, Hong Kong",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2020.3030338;10.1109/tvcg.2019.2934810;10.1109/tvcg.2020.3030469;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2864812;10.1109/tvcg.2018.2865240;10.1109/tvcg.2015.2467091;10.1109/tvcg.2019.2934798;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423",
                "AuthorKeywords": "Data visualization,Visualization recommendation,Knowledge graph",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 48,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2773,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 252,
                "i": [
                    252
                ]
            }
        },
        {
            "name": "James Wexler",
            "value": 205,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 407,
            "x": -195.57672829972273,
            "y": 49.99743340989051,
            "vy": 0,
            "vx": 0,
            "r": 1.2360391479562465,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "The What-If Tool: Interactive Probing of Machine Learning Models",
                "DOI": "10.1109/tvcg.2019.2934619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934619",
                "FirstPage": 56,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.",
                "AuthorNamesDeduped": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda B. Viégas;Jimbo Wilson",
                "AuthorNames": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda Viégas;Jimbo Wilson",
                "AuthorAffiliation": "Google Research;Google Research;Google Research;Google Research;Google Research;Google Research",
                "InternalReferences": "0.1109/vast.2017.8585720;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Interactive Machine Learning,Model Debugging,Model Comparison",
                "AminerCitationCount": 311,
                "CitationCountCrossRef": 133,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 20752,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 596,
                "i": [
                    596
                ]
            }
        },
        {
            "name": "Mahima Pushkarna",
            "value": 106,
            "numPapers": 3,
            "cluster": "1",
            "visible": 1,
            "index": 408,
            "x": 110.57483716667485,
            "y": -169.1839395024343,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "The What-If Tool: Interactive Probing of Machine Learning Models",
                "DOI": "10.1109/tvcg.2019.2934619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934619",
                "FirstPage": 56,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.",
                "AuthorNamesDeduped": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda B. Viégas;Jimbo Wilson",
                "AuthorNames": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda Viégas;Jimbo Wilson",
                "AuthorAffiliation": "Google Research;Google Research;Google Research;Google Research;Google Research;Google Research",
                "InternalReferences": "0.1109/vast.2017.8585720;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Interactive Machine Learning,Model Debugging,Model Comparison",
                "AminerCitationCount": 311,
                "CitationCountCrossRef": 133,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 20752,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 596,
                "i": [
                    596
                ]
            }
        },
        {
            "name": "Tolga Bolukbasi",
            "value": 106,
            "numPapers": 3,
            "cluster": "1",
            "visible": 1,
            "index": 409,
            "x": 32.78772377487987,
            "y": 199.68716826491922,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "The What-If Tool: Interactive Probing of Machine Learning Models",
                "DOI": "10.1109/tvcg.2019.2934619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934619",
                "FirstPage": 56,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.",
                "AuthorNamesDeduped": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda B. Viégas;Jimbo Wilson",
                "AuthorNames": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda Viégas;Jimbo Wilson",
                "AuthorAffiliation": "Google Research;Google Research;Google Research;Google Research;Google Research;Google Research",
                "InternalReferences": "0.1109/vast.2017.8585720;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Interactive Machine Learning,Model Debugging,Model Comparison",
                "AminerCitationCount": 311,
                "CitationCountCrossRef": 133,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 20752,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 596,
                "i": [
                    596
                ]
            }
        },
        {
            "name": "Jimbo Wilson",
            "value": 205,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 410,
            "x": -159.25748931615456,
            "y": -125.24796244536243,
            "vy": 0,
            "vx": 0,
            "r": 1.2360391479562465,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "The What-If Tool: Interactive Probing of Machine Learning Models",
                "DOI": "10.1109/tvcg.2019.2934619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934619",
                "FirstPage": 56,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.",
                "AuthorNamesDeduped": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda B. Viégas;Jimbo Wilson",
                "AuthorNames": "James Wexler;Mahima Pushkarna;Tolga Bolukbasi;Martin Wattenberg;Fernanda Viégas;Jimbo Wilson",
                "AuthorAffiliation": "Google Research;Google Research;Google Research;Google Research;Google Research;Google Research",
                "InternalReferences": "0.1109/vast.2017.8585720;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Interactive Machine Learning,Model Debugging,Model Comparison",
                "AminerCitationCount": 311,
                "CitationCountCrossRef": 133,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 20752,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 596,
                "i": [
                    596
                ]
            }
        },
        {
            "name": "Keshav Dasu",
            "value": 3,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 411,
            "x": 202.28123395238165,
            "y": -15.24146944037355,
            "vy": 0,
            "vx": 0,
            "r": 1.003454231433506,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Character-Oriented Design for Visual Data Storytelling",
                "DOI": "10.1109/tvcg.2023.3326578",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326578",
                "FirstPage": 98,
                "LastPage": 108,
                "PaperType": "J",
                "Abstract": "When telling a data story, an author has an intention they seek to convey to an audience. This intention can be of many forms such as to persuade, to educate, to inform, or even to entertain. In addition to expressing their intention, the story plot must balance being consumable and enjoyable while preserving scientific integrity. In data stories, numerous methods have been identified for constructing and presenting a plot. However, there is an opportunity to expand how we think and create the visual elements that present the story. Stories are brought to life by characters; often they are what make a story captivating, enjoyable, memorable, and facilitate following the plot until the end. Through the analysis of 160 existing data stories, we systematically investigate and identify distinguishable features of characters in data stories, and we illustrate how they feed into the broader concept of “character-oriented design”. We identify the roles and visual representations data characters assume as well as the types of relationships these roles have with one another. We identify characteristics of antagonists as well as define conflict in data stories. We find the need for an identifiable central character that the audience latches on to in order to follow the narrative and identify their visual representations. We then illustrate “character-oriented design” by showing how to develop data characters with common data story plots. With this work, we present a framework for data characters derived from our analysis; we then offer our extension to the data storytelling process using character-oriented design. To access our supplemental materials please visit https://chaorientdesignds.github.io/.",
                "AuthorNamesDeduped": "Keshav Dasu;Yun-Hsin Kuo;Kwan-Liu Ma",
                "AuthorNames": "Keshav Dasu;Yun-Hsin Kuo;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis, USA;University of California, Davis, USA;University of California, Davis, USA",
                "InternalReferences": "10.1109/tvcg.2016.2598647;10.1109/tvcg.2020.3030437;10.1109/tvcg.2016.2598876;10.1109/tvcg.2020.3030412;10.1109/tvcg.2007.70539;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2018.2865145;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2865232;10.1109/tvcg.2019.2934398;10.1109/tvcg.2021.3114774",
                "AuthorKeywords": "Storytelling,Explanatory,Narrative visualization,Visual metaphor",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 366,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 66,
                "i": [
                    66
                ]
            }
        },
        {
            "name": "Jonathan Woodring",
            "value": 192,
            "numPapers": 50,
            "cluster": "6",
            "visible": 1,
            "index": 412,
            "x": -139.02904416940052,
            "y": 148.05716759867752,
            "vy": 0,
            "vx": 0,
            "r": 1.221070811744387,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Temporal Summary Images: An Approach to Narrative Visualization via Interactive Annotation Generation and Placement",
                "DOI": "10.1109/tvcg.2016.2598876",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598876",
                "FirstPage": 511,
                "LastPage": 520,
                "PaperType": "J",
                "Abstract": "Visualization is a powerful technique for analysis and communication of complex, multidimensional, and time-varying data. However, it can be difficult to manually synthesize a coherent narrative in a chart or graph due to the quantity of visualized attributes, a variety of salient features, and the awareness required to interpret points of interest (POls). We present Temporal Summary Images (TSIs) as an approach for both exploring this data and creating stories from it. As a visualization, a TSI is composed of three common components: (1) a temporal layout, (2) comic strip-style data snapshots, and (3) textual annotations. To augment user analysis and exploration, we have developed a number of interactive techniques that recommend relevant data features and design choices, including an automatic annotations workflow. As the analysis and visual design processes converge, the resultant image becomes appropriate for data storytelling. For validation, we use a prototype implementation for TSIs to conduct two case studies with large-scale, scientific simulation datasets.",
                "AuthorNamesDeduped": "Chris Bryan;Kwan-Liu Ma;Jonathan Woodring",
                "AuthorNames": "Chris Bryan;Kwan-Liu Ma;Jonathan Woodring",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis;Los Alamos National Laboratory",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2007.70594;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/vast.2010.5652890;10.1109/tvcg.2012.229;10.1109/tvcg.2012.212;10.1109/tvcg.2011.195;10.1109/vast.2012.6400487",
                "AuthorKeywords": "Narrative visualization;storytelling;annotations;comic strip visualization;time-varying data",
                "AminerCitationCount": 81,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2775,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 897,
                "i": [
                    897
                ]
            }
        },
        {
            "name": "Linquan Huang",
            "value": 40,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 413,
            "x": 2.50754454353718,
            "y": -203.3315327743392,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Interactive Visual Cluster Analysis by Contrastive Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2022.3209423",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209423",
                "FirstPage": 734,
                "LastPage": 744,
                "PaperType": "J",
                "Abstract": "We propose a contrastive dimensionality reduction approach (CDR) for interactive visual cluster analysis. Although dimensionality reduction of high-dimensional data is widely used in visual cluster analysis in conjunction with scatterplots, there are several limitations on effective visual cluster analysis. First, it is non-trivial for an embedding to present clear visual cluster separation when keeping neighborhood structures. Second, as cluster analysis is a subjective task, user steering is required. However, it is also non-trivial to enable interactions in dimensionality reduction. To tackle these problems, we introduce contrastive learning into dimensionality reduction for high-quality embedding. We then redefine the gradient of the loss function to the negative pairs to enhance the visual cluster separation of embedding results. Based on the contrastive learning scheme, we employ link-based interactions to steer embeddings. After that, we implement a prototype visual interface that integrates the proposed algorithms and a set of visualizations. Quantitative experiments demonstrate that CDR outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation. The ablation experiment demonstrates the effectiveness of gradient redefinition. The user study verifies that CDR outperforms t-SNE and UMAP in the task of cluster identification. We also showcase two use cases on real-world datasets to present the effectiveness of link-based interactions.",
                "AuthorNamesDeduped": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu 0004;Yang Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Jiazhi Xia;Linquan Huang;Weixing Lin;Xin Zhao;Jing Wu;Yang Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;Cardiff University, UK;School of Computer Science and Engineering, Central South University, China;School of Computer Science and Engineering, Central South University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2018.2864477;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467615;10.1109/tvcg.2017.2745085;10.1109/tvcg.2016.2598446;10.1109/tvcg.2010.138;10.1109/tvcg.2012.207;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2745258;10.1109/vast50239.2020.00015;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Dimensionality reduction,visual cluster analysis,contrastive learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1384,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 141,
                "i": [
                    141
                ]
            }
        },
        {
            "name": "Jia-Kai Chou",
            "value": 125,
            "numPapers": 16,
            "cluster": "4",
            "visible": 1,
            "index": 414,
            "x": 135.66323726851547,
            "y": 151.80739788899115,
            "vy": 0,
            "vx": 0,
            "r": 1.1439263097294186,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "An Incremental Dimensionality Reduction Method for Visualizing Streaming Multidimensional Data",
                "DOI": "10.1109/tvcg.2019.2934433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934433",
                "FirstPage": 418,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "Dimensionality reduction (DR) methods are commonly used for analyzing and visualizing multidimensional data. However, when data is a live streaming feed, conventional DR methods cannot be directly used because of their computational complexity and inability to preserve the projected data positions at previous time points. In addition, the problem becomes even more challenging when the dynamic data records have a varying number of dimensions as often found in real-world applications. This paper presents an incremental DR solution. We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data. First, we use geometric transformation and animation methods to help preserve a viewer's mental map when visualizing the incremental results. Second, to handle data dimension variants, we use an optimization method to estimate the projected data positions, and also convey the resulting uncertainty in the visualization. We demonstrate the effectiveness of our design with two case studies using real-world datasets.",
                "AuthorNamesDeduped": "Takanori Fujiwara;Jia-Kai Chou;Shilpika;Panpan Xu;Liu Ren;Kwan-Liu Ma",
                "AuthorNames": "Takanori Fujiwara;Jia-Kai Chou;Shilpika Shilpika;Panpan Xu;Liu Ren;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis;University of California, Davis;Bosch Research North America;Bosch Research North America;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2013.186;10.1109/tvcg.2017.2744419;10.1109/tvcg.2017.2744318;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346578;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598495;10.1109/tvcg.2014.2346574;10.1109/tvcg.2016.2598470;10.1109/tvcg.2015.2468078;10.1109/infvis.2003.1249004;10.1109/infvis.2004.60;10.1109/tvcg.2016.2598664",
                "AuthorKeywords": "Dimensionality reduction,principal component analysis,streaming data,uncertainty,visual analytics",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 1691,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 526,
                "i": [
                    526
                ]
            }
        },
        {
            "name": "Paulo Joia",
            "value": 108,
            "numPapers": 23,
            "cluster": "11",
            "visible": 1,
            "index": 415,
            "x": -202.82249034817224,
            "y": -20.323321799488934,
            "vy": 0,
            "vx": 0,
            "r": 1.1243523316062176,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Local Affine Multidimensional Projection",
                "DOI": "10.1109/tvcg.2011.220",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.220",
                "FirstPage": 2563,
                "LastPage": 2571,
                "PaperType": "J",
                "Abstract": "Multidimensional projection techniques have experienced many improvements lately, mainly regarding computational times and accuracy. However, existing methods do not yet provide flexible enough mechanisms for visualization-oriented fully interactive applications. This work presents a new multidimensional projection technique designed to be more flexible and versatile than other methods. This novel approach, called Local Affine Multidimensional Projection (LAMP), relies on orthogonal mapping theory to build accurate local transformations that can be dynamically modified according to user knowledge. The accuracy, flexibility and computational efficiency of LAMP is confirmed by a comprehensive set of comparisons. LAMP's versatility is exploited in an application which seeks to correlate data that, in principle, has no connection as well as in visual exploration of textual documents.",
                "AuthorNamesDeduped": "Paulo Joia;Danilo Barbosa Coimbra;José Alberto Cuminato;Fernando Vieira Paulovich;Luis Gustavo Nonato",
                "AuthorNames": "Paulo Joia;Danilo Coimbra;Jose A. Cuminato;Fernando V. Paulovich;Luis G. Nonato",
                "AuthorAffiliation": "Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil",
                "InternalReferences": "0.1109/visual.1996.567787;10.1109/tvcg.2009.140;10.1109/tvcg.2007.70580;10.1109/infvis.2002.1173159;10.1109/tvcg.2010.207;10.1109/tvcg.2010.170;10.1109/infvis.2002.1173161",
                "AuthorKeywords": "Multidimensional Projection, High Dimensional Data, Visual Data Mining",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 174,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1429,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1539,
                "i": [
                    1539
                ]
            }
        },
        {
            "name": "Fernando Vieira Paulovich",
            "value": 317,
            "numPapers": 28,
            "cluster": "11",
            "visible": 1,
            "index": 416,
            "x": 163.4795707073749,
            "y": -122.16558419347243,
            "vy": 0,
            "vx": 0,
            "r": 1.3649971214738055,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Local Affine Multidimensional Projection",
                "DOI": "10.1109/tvcg.2011.220",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.220",
                "FirstPage": 2563,
                "LastPage": 2571,
                "PaperType": "J",
                "Abstract": "Multidimensional projection techniques have experienced many improvements lately, mainly regarding computational times and accuracy. However, existing methods do not yet provide flexible enough mechanisms for visualization-oriented fully interactive applications. This work presents a new multidimensional projection technique designed to be more flexible and versatile than other methods. This novel approach, called Local Affine Multidimensional Projection (LAMP), relies on orthogonal mapping theory to build accurate local transformations that can be dynamically modified according to user knowledge. The accuracy, flexibility and computational efficiency of LAMP is confirmed by a comprehensive set of comparisons. LAMP's versatility is exploited in an application which seeks to correlate data that, in principle, has no connection as well as in visual exploration of textual documents.",
                "AuthorNamesDeduped": "Paulo Joia;Danilo Barbosa Coimbra;José Alberto Cuminato;Fernando Vieira Paulovich;Luis Gustavo Nonato",
                "AuthorNames": "Paulo Joia;Danilo Coimbra;Jose A. Cuminato;Fernando V. Paulovich;Luis G. Nonato",
                "AuthorAffiliation": "Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil;Universidade de São Paulo, Brazil",
                "InternalReferences": "0.1109/visual.1996.567787;10.1109/tvcg.2009.140;10.1109/tvcg.2007.70580;10.1109/infvis.2002.1173159;10.1109/tvcg.2010.207;10.1109/tvcg.2010.170;10.1109/infvis.2002.1173161",
                "AuthorKeywords": "Multidimensional Projection, High Dimensional Data, Visual Data Mining",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 174,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1429,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1539,
                "i": [
                    1539
                ]
            }
        },
        {
            "name": "Oh-Hyun Kwon",
            "value": 151,
            "numPapers": 23,
            "cluster": "2",
            "visible": 1,
            "index": 417,
            "x": -38.06869981590561,
            "y": 200.75052700883867,
            "vy": 0,
            "vx": 0,
            "r": 1.1738629821531377,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "A Deep Generative Model for Graph Layout",
                "DOI": "10.1109/tvcg.2019.2934396",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934396",
                "FirstPage": 665,
                "LastPage": 675,
                "PaperType": "J",
                "Abstract": "Different layouts can characterize different aspects of the same graph. Finding a “good” layout of a graph is thus an important task for graph visualization. In practice, users often visualize a graph in multiple layouts by using different methods and varying parameter settings until they find a layout that best suits the purpose of the visualization. However, this trial-and-error process is often haphazard and time-consuming. To provide users with an intuitive way to navigate the layout design space, we present a technique to systematically visualize a graph in diverse layouts using deep generative models. We design an encoder-decoder architecture to learn a model from a collection of example layouts, where the encoder represents training examples in a latent space and the decoder produces layouts from the latent space. In particular, we train the model to construct a two-dimensional latent space for users to easily explore and generate various layouts. We demonstrate our approach through quantitative and qualitative evaluations of the generated layouts. The results of our evaluations show that our model is capable of learning and generalizing abstract concepts of graph layouts, not just memorizing the training examples. In summary, this paper presents a fundamentally new approach to graph visualization where a machine learning model learns to visualize a graph from examples without manually-defined heuristics.",
                "AuthorNamesDeduped": "Oh-Hyun Kwon;Kwan-Liu Ma",
                "AuthorNames": "Oh-Hyun Kwon;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2014.2346277;10.1109/tvcg.2017.2743858;10.1109/tvcg.2015.2467451;10.1109/tvcg.2007.70580;10.1109/tvcg.2018.2865139;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Graph,network,visualization,layout,machine learning,deep learning,neural network,generative model,autoencoder",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 87,
                "DownloadsXplore": 1760,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 532,
                "i": [
                    532
                ]
            }
        },
        {
            "name": "Xiaolong Luke Zhang",
            "value": 66,
            "numPapers": 40,
            "cluster": "1",
            "visible": 1,
            "index": 418,
            "x": -107.66306472783461,
            "y": -173.95017819309672,
            "vy": 0,
            "vx": 0,
            "r": 1.075993091537133,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Cong Xie",
            "value": 147,
            "numPapers": 38,
            "cluster": "6",
            "visible": 1,
            "index": 419,
            "x": 197.12414179042642,
            "y": 55.60640901360081,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "HealthPrism: A Visual Analytics System for Exploring Children's Physical and Mental Health Profiles with Multimodal Data",
                "DOI": "10.1109/tvcg.2023.3326943",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326943",
                "FirstPage": 1205,
                "LastPage": 1215,
                "PaperType": "J",
                "Abstract": "The correlation between children's personal and family characteristics (e.g., demographics and socioeconomic status) and their physical and mental health status has been extensively studied across various research domains, such as public health, medicine, and data science. Such studies can provide insights into the underlying factors affecting children's health and aid in the development of targeted interventions to improve their health outcomes. However, with the availability of multiple data sources, including context data (i.e., the background information of children) and motion data (i.e., sensor data measuring activities of children), new challenges have arisen due to the large-scale, heterogeneous, and multimodal nature of the data. Existing statistical hypothesis-based and learning model-based approaches have been inadequate for comprehensively analyzing the complex correlation between multimodal features and multi-dimensional health outcomes due to the limited information revealed. In this work, we first distill a set of design requirements from multiple levels through conducting a literature review and iteratively interviewing 11 experts from multiple domains (e.g., public health and medicine). Then, we propose HealthPrism, an interactive visual and analytics system for assisting researchers in exploring the importance and influence of various context and motion features on children's health status from multi-level perspectives. Within HealthPrism, a multimodal learning model with a gate mechanism is proposed for health profiling and cross-modality feature importance comparison. A set of visualization components is designed for experts to explore and understand multimodal data freely. We demonstrate the effectiveness and usability of HealthPrism through quantitative evaluation of the model performance, case studies, and expert interviews in associated domains.",
                "AuthorNamesDeduped": "Zhihan Jiang;Handi Chen;Rui Zhou;Jing Deng;Xinchen Zhang;Running Zhao;Cong Xie;Yifang Wang 0001;Edith C. H. Ngai",
                "AuthorNames": "Zhihan Jiang;Handi Chen;Rui Zhou;Jing Deng;Xinchen Zhang;Running Zhao;Cong Xie;Yifang Wang;Edith C.H. Ngai",
                "AuthorAffiliation": "University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;University of Hong Kong, China;Tencent, China;Kellogg School of Management, Northwestern University, USA;University of Hong Kong, China",
                "InternalReferences": "10.1109/tvcg.2021.3114836;10.1109/tvcg.2020.3030424;10.1109/tvcg.2018.2864885;10.1109/tvcg.2016.2598588;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/tvcg.2015.2467555;10.1109/tvcg.2015.2467325;10.1109/tvcg.2021.3114794",
                "AuthorKeywords": "Visual Analytics,Health Profiling,Multimodal Learning,Context Data,Motion Data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 328,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 36,
                "i": [
                    36
                ]
            }
        },
        {
            "name": "Honghui Mei",
            "value": 278,
            "numPapers": 65,
            "cluster": "1",
            "visible": 1,
            "index": 420,
            "x": -183.13268234306955,
            "y": 92.26278045795266,
            "vy": 0,
            "vx": 0,
            "r": 1.3200921128382268,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "ViDX: Visual Diagnostics of Assembly Line Performance in Smart Factories",
                "DOI": "10.1109/tvcg.2016.2598664",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598664",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "Visual analytics plays a key role in the era of connected industry (or industry 4.0, industrial internet) as modern machines and assembly lines generate large amounts of data and effective visual exploration techniques are needed for troubleshooting, process optimization, and decision making. However, developing effective visual analytics solutions for this application domain is a challenging task due to the sheer volume and the complexity of the data collected in the manufacturing processes. We report the design and implementation of a comprehensive visual analytics system, ViDX. It supports both real-time tracking of assembly line performance and historical data exploration to identify inefficiencies, locate anomalies, and form hypotheses about their causes and effects. The system is designed based on a set of requirements gathered through discussions with the managers and operators from manufacturing sites. It features interlinked views displaying data at different levels of detail. In particular, we apply and extend the Marey's graph by introducing a time-aware outlier-preserving visual aggregation technique to support effective troubleshooting in manufacturing processes. We also introduce two novel interaction techniques, namely the quantiles brush and samples brush, for the users to interactively steer the outlier detection algorithms. We evaluate the system with example use cases and an in-depth user interview, both conducted together with the managers and operators from manufacturing plants. The result demonstrates its effectiveness and reports a successful pilot application of visual analytics for manufacturing in smart factories.",
                "AuthorNamesDeduped": "Panpan Xu;Honghui Mei;Liu Ren;Wei Chen 0001",
                "AuthorNames": "Panpan Xu;Honghui Mei;Liu Ren;Wei Chen",
                "AuthorAffiliation": "Bosch Research North America;Zhejiang University;Bosch Research North America;Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2014.2346454;10.1109/tvcg.2015.2467592;10.1109/tvcg.2006.170;10.1109/tvcg.2015.2467622;10.1109/tvcg.2014.2346682;10.1109/tvcg.2012.225;10.1109/tvcg.2013.200;10.1109/infvis.2002.1173149;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Temporal Data;Marey's Graph;Visual Analytics;Manufacturing;Smart Factory;Connected Industry;Industry 4.0",
                "AminerCitationCount": 119,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 3370,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 961,
                "i": [
                    961
                ]
            }
        },
        {
            "name": "Conglei Shi",
            "value": 374,
            "numPapers": 47,
            "cluster": "1",
            "visible": 1,
            "index": 421,
            "x": 72.8001374870448,
            "y": -191.9639028095318,
            "vy": 0,
            "vx": 0,
            "r": 1.4306275187104203,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data",
                "DOI": "10.1109/tvcg.2015.2467851",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467851",
                "FirstPage": 559,
                "LastPage": 568,
                "PaperType": "J",
                "Abstract": "We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets.",
                "AuthorNamesDeduped": "Benjamin Bach;Conglei Shi;Nicolas Heulot;Tara M. Madhyastha;Thomas J. Grabowski;Pierre Dragicevic",
                "AuthorNames": "Benjamin Bach;Conglei Shi;Nicolas Heulot;Tara Madhyastha;Tom Grabowski;Pierre Dragicevic",
                "AuthorAffiliation": "Microsoft Research-Inria Joint Centre;IBM T.J, Watson Research Center, Yorktown Height, NY;IRT SystemX;Department of Radiology, University of Washington;Department of Radiology and Neurology, University of Washington;Inria",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2007.70535;10.1109/infvis.2004.1;10.1109/tvcg.2014.2346325;10.1109/tvcg.2013.192;10.1109/infvis.2002.1173155",
                "AuthorKeywords": "Temporal data visualization, information visualization, multidimensional scaling",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 128,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 3408,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1006,
                "i": [
                    1006
                ]
            }
        },
        {
            "name": "Jie Bao 0003",
            "value": 316,
            "numPapers": 79,
            "cluster": "3",
            "visible": 1,
            "index": 422,
            "x": 76.07928548910058,
            "y": 190.95010426671135,
            "vy": 0,
            "vx": 0,
            "r": 1.36384571099597,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Yu Zheng 0004",
            "value": 316,
            "numPapers": 79,
            "cluster": "3",
            "visible": 1,
            "index": 423,
            "x": -185.30234371014754,
            "y": -89.51559314178918,
            "vy": 0,
            "vx": 0,
            "r": 1.36384571099597,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Shifu Chen",
            "value": 0,
            "numPapers": 24,
            "cluster": "3",
            "visible": 1,
            "index": 424,
            "x": 197.3356657444317,
            "y": -59.233732156617634,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visualizing Large-Scale Spatial Time Series with GeoChron",
                "DOI": "10.1109/tvcg.2023.3327162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327162",
                "FirstPage": 1194,
                "LastPage": 1204,
                "PaperType": "J",
                "Abstract": "In geo-related fields such as urban informatics, atmospheric science, and geography, large-scale spatial time (ST) series (i.e., geo-referred time series) are collected for monitoring and understanding important spatiotemporal phenomena. ST series visualization is an effective means of understanding the data and reviewing spatiotemporal phenomena, which is a prerequisite for in-depth data analysis. However, visualizing these series is challenging due to their large scales, inherent dynamics, and spatiotemporal nature. In this study, we introduce the notion of patterns of evolution in ST series. Each evolution pattern is characterized by 1) a set of ST series that are close in space and 2) a time period when the trends of these ST series are correlated. We then leverage Storyline techniques by considering an analogy between evolution patterns and sessions, and finally design a novel visualization called GeoChron, which is capable of visualizing large-scale ST series in an evolution pattern-aware and narrative-preserving manner. GeoChron includes a mining framework to extract evolution patterns and two-level visualizations to enhance its visual scalability. We evaluate GeoChron with two case studies, an informal user study, an ablation study, parameter analysis, and running time analysis.",
                "AuthorNamesDeduped": "Zikun Deng;Shifu Chen;Tobias Schreck;Dazhen Deng;Tan Tang;Mingliang Xu;Di Weng;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Shifu Chen;Tobias Schreck;Dazhen Deng;Tan Tang;Mingliang Xu;Di Weng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;School of Software Technology, Zhejiang University, China;Graz University of Technology, Austria;School of Software Technology, Zhejiang University, China;School of Art and Archaeology, Zhejiang University, China;School of Computer and Artificial Intelligence, Zhengzhou University, China;School of Software Technology, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2015.2467851;10.1109/tvcg.2019.2934670;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209480;10.1109/tvcg.2019.2934555;10.1109/tvcg.2021.3114762;10.1109/vast.2014.7042489;10.1109/tvcg.2018.2865018;10.1109/tvcg.2022.3209430;10.1109/tvcg.2013.196;10.1109/tvcg.2021.3114868;10.1109/vast.2012.6400491;10.1109/tvcg.2007.70523;10.1109/tvcg.2012.212;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/tvcg.2021.3114781;10.1109/tvcg.2018.2865146;10.1109/tvcg.2013.228;10.1109/tvcg.2022.3209447;10.1109/tvcg.2021.3114877;10.1109/tvcg.2019.2934660;10.1109/tvcg.2022.3209469;10.1109/tvcg.2021.3114865",
                "AuthorKeywords": "Spatiotemporal visualization,spatial time series,Storyline",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 365,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 68,
                "i": [
                    68
                ]
            }
        },
        {
            "name": "Mingliang Xu",
            "value": 178,
            "numPapers": 156,
            "cluster": "3",
            "visible": 1,
            "index": 425,
            "x": -105.62155493440845,
            "y": 177.18376656239627,
            "vy": 0,
            "vx": 0,
            "r": 1.204951065054692,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Effects of View Layout on Situated Analytics for Multiple-View Representations in Immersive Visualization",
                "DOI": "10.1109/tvcg.2022.3209475",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209475",
                "FirstPage": 440,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "Multiple-view (MV) representations enabling multi-perspective exploration of large and complex data are often employed on 2D displays. The technique also shows great potential in addressing complex analytic tasks in immersive visualization. However, although useful, the design space of MV representations in immersive visualization lacks in deep exploration. In this paper, we propose a new perspective to this line of research, by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics in perspectives of situatedness regarding spatial relationship between visual representations and physical referents, and analytics regarding cross-view data analysis including filtering, refocusing, and connecting tasks. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a desired layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method to fulfill the requirements. The method mainly includes a cylindrical paradigm for egocentric reference frame, and a force-directed method for proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study that compares layouts by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms high usability of the prototype system.",
                "AuthorNamesDeduped": "Zhen Wen;Wei Zeng 0004;Luoxuan Weng;Yihan Liu;Mingliang Xu;Wei Chen 0001",
                "AuthorNames": "Zhen Wen;Wei Zeng;Luoxuan Weng;Yihan Liu;Mingliang Xu;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;The Hong Kong University of Science and Technology (Guangzhou), China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhengzhou University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2021.3114835;10.1109/tvcg.2020.3030338;10.1109/tvcg.2021.3114806;10.1109/tvcg.2019.2934332;10.1109/tvcg.2021.3114861;10.1109/vast.2015.7347628;10.1109/tvcg.2007.70521;10.1109/tvcg.2018.2865191;10.1109/tvcg.2020.3030419;10.1109/tvcg.2017.2744198;10.1109/tvcg.2021.3114801;10.1109/tvcg.2019.2934282;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Situated analytics,multiple-view representations,view layout,immersive visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1320,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 146,
                "i": [
                    146
                ]
            }
        },
        {
            "name": "Jiahui Chen",
            "value": 72,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 426,
            "x": -41.85296155067413,
            "y": -202.23335434452397,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Ren Liu",
            "value": 72,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 427,
            "x": 167.66395234005023,
            "y": 120.99090497104882,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Zhibin Wang",
            "value": 72,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 428,
            "x": -205.59840735708897,
            "y": 24.068545702399895,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "AirVis: Visual Analytics of Air Pollution Propagation",
                "DOI": "10.1109/tvcg.2019.2934670",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934670",
                "FirstPage": 800,
                "LastPage": 810,
                "PaperType": "J",
                "Abstract": "Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.",
                "AuthorNamesDeduped": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao 0003;Yu Zheng 0004;Yingcai Wu",
                "AuthorNames": "Zikun Deng;Di Weng;Jiahui Chen;Ren Liu;Zhibin Wang;Jie Bao;Yu Zheng;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Research Center for Air Pollution and Health, Zhejiang University;JD Intelligent City Research, Beijing, China;JD Intelligent City Research, Beijing, China;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.193;10.1109/tvcg.2011.202;10.1109/tvcg.2018.2864826;10.1109/tvcg.2015.2467619;10.1109/tvcg.2017.2745083;10.1109/tvcg.2013.226;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865149;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2016.2598919;10.1109/tvcg.2012.213;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468111;10.1109/tvcg.2018.2865126;10.1109/tvcg.2015.2467194;10.1109/tvcg.2018.2865041;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Air pollution propagation,pattern mining,graph visualization",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 2525,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 611,
                "i": [
                    611
                ]
            }
        },
        {
            "name": "Shuhan Liu",
            "value": 78,
            "numPapers": 25,
            "cluster": "3",
            "visible": 1,
            "index": 429,
            "x": 135.50163333514192,
            "y": -156.80978082858465,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Enxun Wei",
            "value": 250,
            "numPapers": 23,
            "cluster": "1",
            "visible": 1,
            "index": 430,
            "x": 6.015788685799249,
            "y": 207.3977104176606,
            "vy": 0,
            "vx": 0,
            "r": 1.2878526194588371,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "StoryFlow: Tracking the Evolution of Stories",
                "DOI": "10.1109/tvcg.2013.196",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.196",
                "FirstPage": 2436,
                "LastPage": 2445,
                "PaperType": "J",
                "Abstract": "Storyline visualizations, which are useful in many applications, aim to illustrate the dynamic relationships between entities in a story. However, the growing complexity and scalability of stories pose great challenges for existing approaches. In this paper, we propose an efficient optimization approach to generating an aesthetically appealing storyline visualization, which effectively handles the hierarchical relationships between entities over time. The approach formulates the storyline layout as a novel hybrid optimization approach that combines discrete and continuous optimization. The discrete method generates an initial layout through the ordering and alignment of entities, and the continuous method optimizes the initial layout to produce the optimal one. The efficient approach makes real-time interactions (e.g., bundling and straightening) possible, thus enabling users to better understand and track how the story evolves. Experiments and case studies are conducted to demonstrate the effectiveness and usefulness of the optimization approach.",
                "AuthorNamesDeduped": "Shixia Liu;Yingcai Wu;Enxun Wei;Mengchen Liu;Yang Liu 0014",
                "AuthorNames": "Shixia Liu;Yingcai Wu;Enxun Wei;Mengchen Liu;Yang Liu",
                "AuthorAffiliation": "Microsoft Research, Asia, Russia;Microsoft Research, Asia, Russia;Shanghai Jiao Tong University, China;Tsinghua University, Beijing, Beijing, CN;Microsoft Research, Asia, Russia",
                "InternalReferences": "0.1109/tvcg.2012.253;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/tvcg.2011.226;10.1109/vast.2008.4677364;10.1109/tvcg.2012.212;10.1109/tvcg.2013.221;10.1109/tvcg.2012.225;10.1109/vast.2006.261421;10.1109/vast.2009.5333437;10.1109/tvcg.2011.239",
                "AuthorKeywords": "Storylines, story-telling visualization, user interactions, level-of-detail, optimization",
                "AminerCitationCount": 215,
                "CitationCountCrossRef": 135,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2426,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1301,
                "i": [
                    1301
                ]
            }
        },
        {
            "name": "Yang Liu 0014",
            "value": 135,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 431,
            "x": -144.69876253080378,
            "y": -149.03780769339724,
            "vy": 0,
            "vx": 0,
            "r": 1.155440414507772,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "StoryFlow: Tracking the Evolution of Stories",
                "DOI": "10.1109/tvcg.2013.196",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.196",
                "FirstPage": 2436,
                "LastPage": 2445,
                "PaperType": "J",
                "Abstract": "Storyline visualizations, which are useful in many applications, aim to illustrate the dynamic relationships between entities in a story. However, the growing complexity and scalability of stories pose great challenges for existing approaches. In this paper, we propose an efficient optimization approach to generating an aesthetically appealing storyline visualization, which effectively handles the hierarchical relationships between entities over time. The approach formulates the storyline layout as a novel hybrid optimization approach that combines discrete and continuous optimization. The discrete method generates an initial layout through the ordering and alignment of entities, and the continuous method optimizes the initial layout to produce the optimal one. The efficient approach makes real-time interactions (e.g., bundling and straightening) possible, thus enabling users to better understand and track how the story evolves. Experiments and case studies are conducted to demonstrate the effectiveness and usefulness of the optimization approach.",
                "AuthorNamesDeduped": "Shixia Liu;Yingcai Wu;Enxun Wei;Mengchen Liu;Yang Liu 0014",
                "AuthorNames": "Shixia Liu;Yingcai Wu;Enxun Wei;Mengchen Liu;Yang Liu",
                "AuthorAffiliation": "Microsoft Research, Asia, Russia;Microsoft Research, Asia, Russia;Shanghai Jiao Tong University, China;Tsinghua University, Beijing, Beijing, CN;Microsoft Research, Asia, Russia",
                "InternalReferences": "0.1109/tvcg.2012.253;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/tvcg.2011.226;10.1109/vast.2008.4677364;10.1109/tvcg.2012.212;10.1109/tvcg.2013.221;10.1109/tvcg.2012.225;10.1109/vast.2006.261421;10.1109/vast.2009.5333437;10.1109/tvcg.2011.239",
                "AuthorKeywords": "Storylines, story-telling visualization, user interactions, level-of-detail, optimization",
                "AminerCitationCount": 215,
                "CitationCountCrossRef": 135,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2426,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1301,
                "i": [
                    1301
                ]
            }
        },
        {
            "name": "Yuzuru Tanahashi",
            "value": 114,
            "numPapers": 8,
            "cluster": "3",
            "visible": 1,
            "index": 432,
            "x": 207.61010734205124,
            "y": 12.16730575854736,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Design Considerations for Optimizing Storyline Visualizations",
                "DOI": "10.1109/tvcg.2012.212",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.212",
                "FirstPage": 2679,
                "LastPage": 2688,
                "PaperType": "J",
                "Abstract": "Storyline visualization is a technique used to depict the temporal dynamics of social interactions. This visualization technique was first introduced as a hand-drawn illustration in XKCD's “Movie Narrative Charts” [21]. If properly constructed, the visualization can convey both global trends and local interactions in the data. However, previous methods for automating storyline visualizations are overly simple, failing to achieve some of the essential principles practiced by professional illustrators. This paper presents a set of design considerations for generating aesthetically pleasing and legible storyline visualizations. Our layout algorithm is based on evolutionary computation, allowing us to effectively incorporate multiple objective functions. We show that the resulting visualizations have significantly improved aesthetics and legibility compared to existing techniques.",
                "AuthorNamesDeduped": "Yuzuru Tanahashi;Kwan-Liu Ma",
                "AuthorNames": "Yuzuru Tanahashi;Kwan-Liu Ma",
                "AuthorAffiliation": "ViDi Research Group, University of California,슠Davis, USA;ViDi Research Group, University of California, Davis, USA",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2008.135;10.1109/tvcg.2011.190;10.1109/tvcg.2011.239;10.1109/tvcg.2006.193;10.1109/tvcg.2007.70535;10.1109/infvis.2003.1249008;10.1109/tvcg.2008.125;10.1109/infvis.2002.1173160",
                "AuthorKeywords": "Layout algorithm, timeline visualization, storyline visualization, design study",
                "AminerCitationCount": 200,
                "CitationCountCrossRef": 123,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 2636,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1404,
                "i": [
                    1404
                ]
            }
        },
        {
            "name": "Renzhong Li",
            "value": 70,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 433,
            "x": -161.4904999706878,
            "y": 131.41848583520232,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Xinke Wu",
            "value": 70,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 434,
            "x": 30.341092178497462,
            "y": -206.22661837264346,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Johannes Knittel",
            "value": 88,
            "numPapers": 57,
            "cluster": "1",
            "visible": 1,
            "index": 435,
            "x": 117.06598338764272,
            "y": 172.7586626872417,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Peiran Ren",
            "value": 70,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 436,
            "x": -203.25056473972427,
            "y": -48.36535881168614,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning",
                "DOI": "10.1109/tvcg.2020.3030467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030467",
                "FirstPage": 294,
                "LastPage": 303,
                "PaperType": "J",
                "Abstract": "Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.",
                "AuthorNamesDeduped": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch 0001;Lingyun Yu 0001;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorNames": "Tan Tang;Renzhong Li;Xinke Wu;Shuhan Liu;Johannes Knittel;Steffen Koch;Lingyun Yu;Peiran Ren;Thomas Ertl;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;VIS/VISUS, University of Stuttgart;Department of Computer Science and Software Engineering, Xi 'an Jiaotong-Liverpool University.;Alibaba Group;Zhejiang Lab and State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2019.2934396;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2019.2934798",
                "AuthorKeywords": "Storyline visualization,reinforcement learning,mixed-initiative design",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 376,
                "i": [
                    376
                ]
            }
        },
        {
            "name": "Sadia Rubab",
            "value": 60,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 437,
            "x": 182.7499479135118,
            "y": -101.74702225425926,
            "vy": 0,
            "vx": 0,
            "r": 1.0690846286701208,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "iStoryline: Effective Convergence to Hand-drawn Storylines",
                "DOI": "10.1109/tvcg.2018.2864899",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864899",
                "FirstPage": 769,
                "LastPage": 778,
                "PaperType": "J",
                "Abstract": "Storyline visualization techniques have progressed significantly to generate illustrations of complex stories automatically. However, the visual layouts of storylines are not enhanced accordingly despite the improvement in the performance and extension of its application area. Existing methods attempt to achieve several shared optimization goals, such as reducing empty space and minimizing line crossings and wiggles. However, these goals do not always produce optimal results when compared to hand-drawn storylines. We conducted a preliminary study to learn how users translate a narrative into a hand-drawn storyline and check whether the visual elements in hand-drawn illustrations can be mapped back to appropriate narrative contexts. We also compared the hand-drawn storylines with storylines generated by the state-of-the-art methods and found they have significant differences. Our findings led to a design space that summarizes (1) how artists utilize narrative elements and (2) the sequence of actions artists follow to portray expressive and attractive storylines. We developed iStoryline, an authoring tool for integrating high-level user interactions into optimization algorithms and achieving a balance between hand-drawn storylines and automatic layouts. iStoryline allows users to create novel storyline visualizations easily according to their preferences by modifying the automatically generated layouts. The effectiveness and usability of iStoryline are studied with qualitative evaluations.",
                "AuthorNamesDeduped": "Tan Tang;Sadia Rubab;Jiewen Lai;Weiwei Cui;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Tan Tang;Sadia Rubab;Jiewen Lai;Weiwei Cui;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Microsoft Research;Rijksuniversiteit Groningen, Groningen, Groningen, NL;Zhejiang University, Hangzhou, Zhejiang, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/vast.2017.8585487;10.1109/tvcg.2017.2743990;10.1109/tvcg.2009.109;10.1109/tvcg.2015.2467531;10.1109/tvcg.2015.2467451;10.1109/tvcg.2016.2598620;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2014.2346291;10.1109/tvcg.2017.2745878;10.1109/tvcg.2012.212;10.1109/tvcg.2014.2346913",
                "AuthorKeywords": "Hand-drawn illustrations,automatic layout,design space,interactions,optimization",
                "AminerCitationCount": 32,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 1321,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 678,
                "i": [
                    678
                ]
            }
        },
        {
            "name": "Jiewen Lai",
            "value": 60,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 438,
            "x": -66.10041213057532,
            "y": 198.6975981640646,
            "vy": 0,
            "vx": 0,
            "r": 1.0690846286701208,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "iStoryline: Effective Convergence to Hand-drawn Storylines",
                "DOI": "10.1109/tvcg.2018.2864899",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864899",
                "FirstPage": 769,
                "LastPage": 778,
                "PaperType": "J",
                "Abstract": "Storyline visualization techniques have progressed significantly to generate illustrations of complex stories automatically. However, the visual layouts of storylines are not enhanced accordingly despite the improvement in the performance and extension of its application area. Existing methods attempt to achieve several shared optimization goals, such as reducing empty space and minimizing line crossings and wiggles. However, these goals do not always produce optimal results when compared to hand-drawn storylines. We conducted a preliminary study to learn how users translate a narrative into a hand-drawn storyline and check whether the visual elements in hand-drawn illustrations can be mapped back to appropriate narrative contexts. We also compared the hand-drawn storylines with storylines generated by the state-of-the-art methods and found they have significant differences. Our findings led to a design space that summarizes (1) how artists utilize narrative elements and (2) the sequence of actions artists follow to portray expressive and attractive storylines. We developed iStoryline, an authoring tool for integrating high-level user interactions into optimization algorithms and achieving a balance between hand-drawn storylines and automatic layouts. iStoryline allows users to create novel storyline visualizations easily according to their preferences by modifying the automatically generated layouts. The effectiveness and usability of iStoryline are studied with qualitative evaluations.",
                "AuthorNamesDeduped": "Tan Tang;Sadia Rubab;Jiewen Lai;Weiwei Cui;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Tan Tang;Sadia Rubab;Jiewen Lai;Weiwei Cui;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Microsoft Research;Rijksuniversiteit Groningen, Groningen, Groningen, NL;Zhejiang University, Hangzhou, Zhejiang, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/vast.2017.8585487;10.1109/tvcg.2017.2743990;10.1109/tvcg.2009.109;10.1109/tvcg.2015.2467531;10.1109/tvcg.2015.2467451;10.1109/tvcg.2016.2598620;10.1109/tvcg.2013.191;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2014.2346291;10.1109/tvcg.2017.2745878;10.1109/tvcg.2012.212;10.1109/tvcg.2014.2346913",
                "AuthorKeywords": "Hand-drawn illustrations,automatic layout,design space,interactions,optimization",
                "AminerCitationCount": 32,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 1321,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 678,
                "i": [
                    678
                ]
            }
        },
        {
            "name": "Huub van de Wetering",
            "value": 356,
            "numPapers": 56,
            "cluster": "3",
            "visible": 1,
            "index": 439,
            "x": -85.57532307756355,
            "y": -191.3814622166176,
            "vy": 0,
            "vx": 0,
            "r": 1.409902130109384,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Visual Traffic Jam Analysis Based on Trajectory Data",
                "DOI": "10.1109/tvcg.2013.228",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.228",
                "FirstPage": 2159,
                "LastPage": 2168,
                "PaperType": "J",
                "Abstract": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.",
                "AuthorNamesDeduped": "Zuchao Wang;Min Lu 0002;Xiaoru Yuan;Junping Zhang;Huub van de Wetering",
                "AuthorNames": "Zuchao Wang;Min Lu;Xiaoru Yuan;Junping Zhang;Huub van de Wetering",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), Peking University, China;Shanghai Key Laboratory of Intelligent Information Processing, and School of Computer Science, Fudan University, China and Key Laboratory of Machine Perception (Ministry of Education), Peking University;Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China;Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL",
                "InternalReferences": "0.1109/visual.1997.663866;10.1109/vast.2011.6102454;10.1109/tvcg.2009.145;10.1109/vast.2012.6400556;10.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2011.202;10.1109/vast.2012.6400553;10.1109/tvcg.2012.265;10.1109/tvcg.2011.181;10.1109/vast.2009.5332593;10.1109/tvcg.2008.125;10.1109/vast.2011.6102455;10.1109/vast.2010.5653580",
                "AuthorKeywords": "Traffic visualization, traffic jam propagation",
                "AminerCitationCount": 401,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 7486,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1366,
                "i": [
                    1366
                ]
            }
        },
        {
            "name": "Min Lu 0002",
            "value": 222,
            "numPapers": 52,
            "cluster": "3",
            "visible": 1,
            "index": 440,
            "x": 192.59563466321788,
            "y": 83.40816212261426,
            "vy": 0,
            "vx": 0,
            "r": 1.2556131260794472,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Winglets: Visualizing Association with Uncertainty in Multi-class Scatterplots",
                "DOI": "10.1109/tvcg.2019.2934811",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934811",
                "FirstPage": 770,
                "LastPage": 779,
                "PaperType": "J",
                "Abstract": "This work proposes Winglets, an enhancement to the classic scatterplot to better perceptually pronounce multiple classes by improving the perception of association and uncertainty of points to their related cluster. Designed as a pair of dual-sided strokes belonging to a data point, Winglets leverage the Gestalt principle of Closure to shape the perception of the form of the clusters, rather than use an explicit divisive encoding. Through a subtle design of two dominant attributes, length and orientation, Winglets enable viewers to perform a mental completion of the clusters. A controlled user study was conducted to examine the efficiency of Winglets in perceiving the cluster association and the uncertainty of certain points. The results show Winglets form a more prominent association of points into clusters and improve the perception of associating uncertainty.",
                "AuthorNamesDeduped": "Min Lu 0002;Shuaiqi Wang;Joel Lanir;Noa Fish;Yang Yue 0001;Daniel Cohen-Or;Hui Huang 0004",
                "AuthorNames": "Min Lu;Shuaiqi Wang;Joel Lanir;Noa Fish;Yang Yue;Daniel Cohen-Or;Hui Huang",
                "AuthorAffiliation": "Shenzhen University;Shenzhen University;University of Haifa;Tel Aviv Univeristy;Shenzhen University;Shenzhen University;Shenzhen University",
                "InternalReferences": "0.1109/vast.2010.5652460;10.1109/tvcg.2014.2346594;10.1109/tvcg.2009.122;10.1109/tvcg.2013.183;10.1109/tvcg.2018.2865141;10.1109/tvcg.2018.2865141;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.153;10.1109/vast.2009.5332628;10.1109/tvcg.2018.2864912",
                "AuthorKeywords": "Scatterplot,Gestalt laws,Association,Uncertainty",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 559,
                "i": [
                    559
                ]
            }
        },
        {
            "name": "Lu Ying",
            "value": 78,
            "numPapers": 42,
            "cluster": "3",
            "visible": 1,
            "index": 441,
            "x": -198.58045144427004,
            "y": 68.67171400358315,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "Xinhuan Shu",
            "value": 200,
            "numPapers": 53,
            "cluster": "3",
            "visible": 1,
            "index": 442,
            "x": 100.15319985625253,
            "y": -184.98469276822212,
            "vy": 0,
            "vx": 0,
            "r": 1.2302820955670697,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "Yuzhe Luo",
            "value": 53,
            "numPapers": 24,
            "cluster": "3",
            "visible": 1,
            "index": 443,
            "x": 51.1632256908586,
            "y": 204.2849097141203,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "GlyphCreator: Towards Example-based Automatic Generation of Circular Glyphs",
                "DOI": "10.1109/tvcg.2021.3114877",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114877",
                "FirstPage": 400,
                "LastPage": 410,
                "PaperType": "J",
                "Abstract": "Circular glyphs are used across disparate fields to represent multidimensional data. However, although these glyphs are extremely effective, creating them is often laborious, even for those with professional design skills. This paper presents GlyphCreator, an interactive tool for the example-based generation of circular glyphs. Given an example circular glyph and multidimensional input data, GlyphCreator promptly generates a list of design candidates, any of which can be edited to satisfy the requirements of a particular representation. To develop GlyphCreator, we first derive a design space of circular glyphs by summarizing relationships between different visual elements. With this design space, we build a circular glyph dataset and develop a deep learning model for glyph parsing. The model can deconstruct a circular glyph bitmap into a series of visual elements. Next, we introduce an interface that helps users bind the input data attributes to visual elements and customize visual styles. We evaluate the parsing model through a quantitative experiment, demonstrate the use of GlyphCreator through two use scenarios, and validate its effectiveness through user interviews.",
                "AuthorNamesDeduped": "Lu Ying;Tan Tang;Yuzhe Luo;Lvkeshen Shen;Xiao Xie;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Tan Tangl;Yuzhe Luo;Lvkeshen Shen;Xiao Xie;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;Department of Sport Science, Zhejiang University, Hangrhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2015.2467196;10.1109/vast.2016.7883517;10.1109/tvcg.2019.2934810;10.1109/infvis.2005.1532140;10.1109/tvcg.2019.2934785;10.1109/tvcg.2019.2934670;10.1109/tvcg.2012.271;10.1109/tvcg.2016.2599378;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2009.191;10.1109/tvcg.2017.2744320;10.1109/tvcg.2020.3030448;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.213;10.1109/tvcg.2020.3030403;10.1109/vast.2014.7042494;10.1109/tvcg.2019.2934398;10.1109/tvcg.2020.3030359;10.1109/tvcg.2018.2864825;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030367;10.1109/tvcg.2020.3030458;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Glyph-based visualization,machine learning,automatic visualization",
                "AminerCitationCount": 10,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 891,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 287,
                "i": [
                    287
                ]
            }
        },
        {
            "name": "Lvkeshen Shen",
            "value": 53,
            "numPapers": 24,
            "cluster": "3",
            "visible": 1,
            "index": 444,
            "x": -175.91663700374835,
            "y": -116.20385890877905,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "GlyphCreator: Towards Example-based Automatic Generation of Circular Glyphs",
                "DOI": "10.1109/tvcg.2021.3114877",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114877",
                "FirstPage": 400,
                "LastPage": 410,
                "PaperType": "J",
                "Abstract": "Circular glyphs are used across disparate fields to represent multidimensional data. However, although these glyphs are extremely effective, creating them is often laborious, even for those with professional design skills. This paper presents GlyphCreator, an interactive tool for the example-based generation of circular glyphs. Given an example circular glyph and multidimensional input data, GlyphCreator promptly generates a list of design candidates, any of which can be edited to satisfy the requirements of a particular representation. To develop GlyphCreator, we first derive a design space of circular glyphs by summarizing relationships between different visual elements. With this design space, we build a circular glyph dataset and develop a deep learning model for glyph parsing. The model can deconstruct a circular glyph bitmap into a series of visual elements. Next, we introduce an interface that helps users bind the input data attributes to visual elements and customize visual styles. We evaluate the parsing model through a quantitative experiment, demonstrate the use of GlyphCreator through two use scenarios, and validate its effectiveness through user interviews.",
                "AuthorNamesDeduped": "Lu Ying;Tan Tang;Yuzhe Luo;Lvkeshen Shen;Xiao Xie;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Tan Tangl;Yuzhe Luo;Lvkeshen Shen;Xiao Xie;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;Department of Sport Science, Zhejiang University, Hangrhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2015.2467196;10.1109/vast.2016.7883517;10.1109/tvcg.2019.2934810;10.1109/infvis.2005.1532140;10.1109/tvcg.2019.2934785;10.1109/tvcg.2019.2934670;10.1109/tvcg.2012.271;10.1109/tvcg.2016.2599378;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2009.191;10.1109/tvcg.2017.2744320;10.1109/tvcg.2020.3030448;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.213;10.1109/tvcg.2020.3030403;10.1109/vast.2014.7042494;10.1109/tvcg.2019.2934398;10.1109/tvcg.2020.3030359;10.1109/tvcg.2018.2864825;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030367;10.1109/tvcg.2020.3030458;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Glyph-based visualization,machine learning,automatic visualization",
                "AminerCitationCount": 10,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 891,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 287,
                "i": [
                    287
                ]
            }
        },
        {
            "name": "Fangfang Zhou",
            "value": 165,
            "numPapers": 64,
            "cluster": "1",
            "visible": 1,
            "index": 445,
            "x": 208.44410754461626,
            "y": -33.182134197915275,
            "vy": 0,
            "vx": 0,
            "r": 1.1899827288428324,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Evaluating Multi-Dimensional Visualizations for Understanding Fuzzy Clusters",
                "DOI": "10.1109/tvcg.2018.2865020",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865020",
                "FirstPage": 12,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "Fuzzy clustering assigns a probability of membership for a datum to a cluster, which veritably reflects real-world clustering scenarios but significantly increases the complexity of understanding fuzzy clusters. Many studies have demonstrated that visualization techniques for multi-dimensional data are beneficial to understand fuzzy clusters. However, no empirical evidence exists on the effectiveness and efficiency of these visualization techniques in solving analytical tasks featured by fuzzy clusters. In this paper, we conduct a controlled experiment to evaluate the ability of fuzzy clusters analysis to use four multi-dimensional visualization techniques, namely, parallel coordinate plot, scatterplot matrix, principal component analysis, and Radviz. First, we define the analytical tasks and their representative questions specific to fuzzy clusters analysis. Then, we design objective questionnaires to compare the accuracy, time, and satisfaction in using the four techniques to solve the questions. We also design subjective questionnaires to collect the experience of the volunteers with the four techniques in terms of ease of use, informativeness, and helpfulness. With a complete experiment process and a detailed result analysis, we test against four hypotheses that are formulated on the basis of our experience, and provide instructive guidance for analysts in selecting appropriate and efficient visualization techniques to analyze fuzzy clusters.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "Central South University;Central South University;Central South University;Central South University;Central South University;Central South University;Shandong University;Beijing Technology, Business University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2745138;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2009.153;10.1109/tvcg.2016.2598831;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744198;10.1109/tvcg.2015.2467324;10.1109/tvcg.2013.153;10.1109/tvcg.2008.173;10.1109/visual.1990.146375;10.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598479;10.1109/infvis.2003.1249015",
                "AuthorKeywords": "Evaluation,multi-dimensional visualization,fuzzy clustering,parallel coordinate plot,scatterplot matrix,principal component analysis,radviz",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1464,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 739,
                "i": [
                    739
                ]
            }
        },
        {
            "name": "Haesun Park",
            "value": 420,
            "numPapers": 43,
            "cluster": "4",
            "visible": 1,
            "index": 446,
            "x": -131.43325232314166,
            "y": 165.45482822740885,
            "vy": 0,
            "vx": 0,
            "r": 1.4835924006908463,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization",
                "DOI": "10.1109/tvcg.2013.212",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.212",
                "FirstPage": 1992,
                "LastPage": 2001,
                "PaperType": "J",
                "Abstract": "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.",
                "AuthorNamesDeduped": "Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park",
                "AuthorNames": "Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park",
                "AuthorAffiliation": "Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/tvcg.2012.258;10.1109/vast.2009.5332629;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485;10.1109/vast.2007.4388999;10.1109/vast.2007.4389006;10.1109/tvcg.2008.138;10.1109/vast.2010.5652443",
                "AuthorKeywords": "Latent Dirichlet allocation, nonnegative matrix factorization, topic modeling, visual analytics, interactive clustering, text analytics",
                "AminerCitationCount": 317,
                "CitationCountCrossRef": 179,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 3014,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1368,
                "i": [
                    1368
                ]
            }
        },
        {
            "name": "Qiaomu Shen",
            "value": 151,
            "numPapers": 40,
            "cluster": "1",
            "visible": 1,
            "index": 447,
            "x": -14.864958980430504,
            "y": -211.01903467343914,
            "vy": 0,
            "vx": 0,
            "r": 1.1738629821531377,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "QEVIS: Multi-Grained Visualization of Distributed Query Execution",
                "DOI": "10.1109/tvcg.2023.3326930",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326930",
                "FirstPage": 153,
                "LastPage": 163,
                "PaperType": "J",
                "Abstract": "Distributed query processing systems such as Apache Hive and Spark are widely-used in many organizations for large-scale data analytics. Analyzing and understanding the query execution process of these systems are daily routines for engineers and crucial for identifying performance problems, optimizing system configurations, and rectifying errors. However, existing visualization tools for distributed query execution are insufficient because (i) most of them (if not all) do not provide fine-grained visualization (i.e., the atomic task level), which can be crucial for understanding query performance and reasoning about the underlying execution anomalies, and (ii) they do not support proper linkages between system status and query execution, which makes it difficult to identify the causes of execution problems. To tackle these limitations, we propose QEVIS, which visualizes distributed query execution process with multiple views that focus on different granularities and complement each other. Specifically, we first devise a query logical plan layout algorithm to visualize the overall query execution progress compactly and clearly. We then propose two novel scoring methods to summarize the anomaly degrees of the jobs and machines during query execution, and visualize the anomaly scores intuitively, which allow users to easily identify the components that are worth paying attention to. Moreover, we devise a scatter plot-based task view to show a massive number of atomic tasks, where task distribution patterns are informative for execution problems. We also equip QEVIS with a suite of auxiliary views and interaction methods to support easy and effective cross-view exploration, which makes it convenient to track the causes of execution problems. QEVIS has been used in the production environment of our industry partner, and we present three use cases from real-world applications and user interview to demonstrate its effectiveness. QEVIS is open-source at https://github.com/DBGroup-SUSTech/QEVIS.",
                "AuthorNamesDeduped": "Qiaomu Shen;Zhengxin You;Xiao Yan 0002;Chaozu Zhang;Ke Xu;Dan Zeng 0002;Jianbin Qin;Bo Tang 0016",
                "AuthorNames": "Qiaomu Shen;Zhengxin You;Xiao Yan;Chaozu Zhang;Ke Xu;Dan Zeng;Jianbin Qin;Bo Tang",
                "AuthorAffiliation": "Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Department of Computer Science and Engineering, Southern University of Science and Technology, China;Huawei Technologies Co., Ltd., China;Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, China;Shenzhen Institute of Computing Sciences, Shenzhen University, China;Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2014.2346594;10.1109/tvcg.2021.3114756;10.1109/tvcg.2019.2934661;10.1109/tvcg.2022.3209375;10.1109/tvcg.2012.213;10.1109/vast50239.2020.00009;10.1109/tvcg.2018.2865026;10.1109/tvcg.2017.2744738",
                "AuthorKeywords": "visual analytics system,distributed query execution,performance analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 345,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 70,
                "i": [
                    70
                ]
            }
        },
        {
            "name": "Salvatore Rinzivillo",
            "value": 101,
            "numPapers": 2,
            "cluster": "3",
            "visible": 1,
            "index": 448,
            "x": 153.67368315709223,
            "y": 145.72027691757117,
            "vy": 0,
            "vx": 0,
            "r": 1.1162924582613702,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "Interactive visual clustering of large collections of trajectories",
                "DOI": "10.1109/vast.2009.5332584",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5332584",
                "FirstPage": 3,
                "LastPage": 10,
                "PaperType": "C",
                "Abstract": "One of the most common operations in exploration and analysis of various kinds of data is clustering, i.e. discovery and interpretation of groups of objects having similar properties and/or behaviors. In clustering, objects are often treated as points in multi-dimensional space of properties. However, structurally complex objects, such as trajectories of moving entities and other kinds of spatio-temporal data, cannot be adequately represented in this manner. Such data require sophisticated and computationally intensive clustering algorithms, which are very hard to scale effectively to large datasets not fitting in the computer main memory. We propose an approach to extracting meaningful clusters from large databases by combining clustering and classification, which are driven by a human analyst through an interactive visual interface.",
                "AuthorNamesDeduped": "Gennady L. Andrienko;Natalia V. Andrienko;Salvatore Rinzivillo;Mirco Nanni;Dino Pedreschi;Fosca Giannotti",
                "AuthorNames": "Gennady Andrienko;Natalia Andrienko;Salvatore Rinzivillo;Mirco Nanni;Dino Pedreschi;Fosca Giannotti",
                "AuthorAffiliation": "Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany;Fraunhofer Institute of Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany;KDD Lab-ISTI-CNR, Pisa, Italy;KDD Lab-ISTI-CNR, Pisa, Italy;University of Pisa, Pisa, Italy;KDD Lab-ISTI-CNR, Pisa, Italy",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/vast.2007.4388999",
                "AuthorKeywords": "Spatio-temporal data, movement data, trajectories, clustering, classification, scalable visualization, geovisualization",
                "AminerCitationCount": 291,
                "CitationCountCrossRef": 138,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 1578,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1852,
                "i": [
                    1852
                ]
            }
        },
        {
            "name": "Zhenhuang Wang",
            "value": 80,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 449,
            "x": -211.98275375680922,
            "y": -3.648576390864182,
            "vy": 0,
            "vx": 0,
            "r": 1.092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Jie Liang 0004",
            "value": 100,
            "numPapers": 72,
            "cluster": "1",
            "visible": 1,
            "index": 450,
            "x": 158.95057699301515,
            "y": -140.65814613305395,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Heidrun Schumann",
            "value": 401,
            "numPapers": 74,
            "cluster": "3",
            "visible": 1,
            "index": 451,
            "x": -22.216612858654276,
            "y": 211.32066182247462,
            "vy": 0,
            "vx": 0,
            "r": 1.4617156016119748,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Stacking-Based Visualization of Trajectory Attribute Data",
                "DOI": "10.1109/tvcg.2012.265",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.265",
                "FirstPage": 2565,
                "LastPage": 2574,
                "PaperType": "J",
                "Abstract": "Visualizing trajectory attribute data is challenging because it involves showing the trajectories in their spatio-temporal context as well as the attribute values associated with the individual points of trajectories. Previous work on trajectory visualization addresses selected aspects of this problem, but not all of them. We present a novel approach to visualizing trajectory attribute data. Our solution covers space, time, and attribute values. Based on an analysis of relevant visualization tasks, we designed the visualization solution around the principle of stacking trajectory bands. The core of our approach is a hybrid 2D/3D display. A 2D map serves as a reference for the spatial context, and the trajectories are visualized as stacked 3D trajectory bands along which attribute values are encoded by color. Time is integrated through appropriate ordering of bands and through a dynamic query mechanism that feeds temporally aggregated information to a circular time display. An additional 2D time graph shows temporal information in full detail by stacking 2D trajectory bands. Our solution is equipped with analytical and interactive mechanisms for selecting and ordering of trajectories, and adjusting the color mapping, as well as coordinated highlighting and dedicated 3D navigation. We demonstrate the usefulness of our novel visualization by three examples related to radiation surveillance, traffic analysis, and maritime navigation. User feedback obtained in a small experiment indicates that our hybrid 2D/3D solution can be operated quite well.",
                "AuthorNamesDeduped": "Christian Tominski;Heidrun Schumann;Gennady L. Andrienko;Natalia V. Andrienko",
                "AuthorNames": "Christian Tominski;Heidrun Schumann;Gennady Andrienko;Natalia Andrienko",
                "AuthorAffiliation": "University of Rostock, Germany;University of Rostock, Germany;Fraunhofer Institute IAIS, Germany;Fraunhofer Institute IAIS, Germany",
                "InternalReferences": "0.1109/tvcg.2010.197;10.1109/vast.2011.6102455;10.1109/vast.2009.5332593;10.1109/visual.1995.480803;10.1109/infvis.2004.27;10.1109/infvis.2005.1532144;10.1109/vast.2011.6102454;10.1109/vast.2010.5653580",
                "AuthorKeywords": "Visualization, interaction, exploratory analysis, trajectory attribute data, spatio-temporal data",
                "AminerCitationCount": 366,
                "CitationCountCrossRef": 222,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 3595,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1398,
                "i": [
                    1398
                ]
            }
        },
        {
            "name": "Jun Tao 0002",
            "value": 12,
            "numPapers": 54,
            "cluster": "6",
            "visible": 1,
            "index": 452,
            "x": -126.50307685838602,
            "y": -171.0174597675959,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "FlowNL: Asking the Flow Data in Natural Languages",
                "DOI": "10.1109/tvcg.2022.3209453",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209453",
                "FirstPage": 1200,
                "LastPage": 1210,
                "PaperType": "J",
                "Abstract": "Flow visualization is essentially a tool to answer domain experts' questions about flow fields using rendered images. Static flow visualization approaches require domain experts to raise their questions to visualization experts, who develop specific techniques to extract and visualize the flow structures of interest. Interactive visualization approaches allow domain experts to ask the system directly through the visual analytic interface, which provides flexibility to support various tasks. However, in practice, the visual analytic interface may require extra learning effort, which often discourages domain experts and limits its usage in real-world scenarios. In this paper, we propose FlowNL, a novel interactive system with a natural language interface. FlowNL allows users to manipulate the flow visualization system using plain English, which greatly reduces the learning effort. We develop a natural language parser to interpret user intention and translate textual input into a declarative language. We design the declarative language as an intermediate layer between the natural language and the programming language specifically for flow visualization. The declarative language provides selection and composition rules to derive relatively complicated flow structures from primitive objects that encode various kinds of information about scalar fields, flow patterns, regions of interest, connectivities, etc. We demonstrate the effectiveness of FlowNL using multiple usage scenarios and an empirical evaluation.",
                "AuthorNamesDeduped": "Jieying Huang;Yang Xi;Junnan Hu;Jun Tao 0002",
                "AuthorNames": "Jieying Huang;Yang Xi;Junnan Hu;Jun Tao",
                "AuthorAffiliation": "School of Computer Science and Engineering, Sun Yat-sen University, China;School of Computer Science and Engineering, Sun Yat-sen University, China;School of Computer Science and Engineering, Sun Yat-sen University, China;Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), School of Computer Science and Engineering, Sun Yat-sen University, National Supercomputer Center in Guangzhou, China",
                "InternalReferences": "0.1109/tvcg.2019.2934310;10.1109/tvcg.2011.185;10.1109/visual.2005.1532856;10.1109/tvcg.2014.2346322;10.1109/tvcg.2019.2934785;10.1109/tvcg.2017.2744684;10.1109/tvcg.2013.121;10.1109/tvcg.2018.2864806;10.1109/tvcg.2013.189;10.1109/tvcg.2019.2934537;10.1109/tvcg.2020.3030453;10.1109/tvcg.2021.3114848;10.1109/tvcg.2020.3030378;10.1109/tvcg.2020.3030378;10.1109/tvcg.2019.2934367;10.1109/tvcg.2014.2346318;10.1109/visual.2004.128;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2017.2745219;10.1109/visual.2003.1250376;10.1109/vast47406.2019.8986918;10.1109/tvcg.2018.2864841;10.1109/tvcg.2010.131;10.1109/visual.2005.1532831;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Flow visualization,natural language interface,interactive exploration,declarative grammar",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 746,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 186,
                "i": [
                    186
                ]
            }
        },
        {
            "name": "Jianping Kelvin Li",
            "value": 24,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 453,
            "x": 209.03065705583288,
            "y": 40.69624565984916,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "P5: Portable Progressive Parallel Processing Pipelines for Interactive Data Analysis and Visualization",
                "DOI": "10.1109/tvcg.2019.2934537",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934537",
                "FirstPage": 1151,
                "LastPage": 1160,
                "PaperType": "J",
                "Abstract": "We present P5, a web-based visualization toolkit that combines declarative visualization grammar and GPU computing for progressive data analysis and visualization. To interactively analyze and explore big data, progressive analytics and visualization methods have recently emerged. Progressive visualizations of incrementally refining results have the advantages of allowing users to steer the analysis process and make early decisions. P5 leverages declarative grammar for specifying visualization designs and exploits GPU computing to accelerate progressive data processing and rendering. The declarative specifications can be modified during progressive processing to create different visualizations for analyzing the intermediate results. To enable user interactions for progressive data analysis, P5 utilizes the GPU to automatically aggregate and index data based on declarative interaction specifications to facilitate effective interactive visualization. We demonstrate the effectiveness and usefulness of P5 through a variety of example applications and several performance benchmark tests.",
                "AuthorNamesDeduped": "Jianping Kelvin Li;Kwan-Liu Ma",
                "AuthorNames": "Jianping Kelvin Li;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346319;10.1109/tvcg.2010.144;10.1109/tvcg.2014.2346452;10.1109/tvcg.2009.191;10.1109/tvcg.2014.2346578;10.1109/tvcg.2017.2744358;10.1109/tvcg.2009.110;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2014.2346574;10.1109/infvis.2002.1173141;10.1109/tvcg.2016.2598470;10.1109/tvcg.2013.179",
                "AuthorKeywords": "Information visualization,progressive analytics,visualization software,GPU computing,data exploration",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 982,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 560,
                "i": [
                    560
                ]
            }
        },
        {
            "name": "Halldór Janetzko",
            "value": 114,
            "numPapers": 8,
            "cluster": "3",
            "visible": 1,
            "index": 454,
            "x": -181.82275521255843,
            "y": 111.31255853188375,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Feature-Driven Visual Analytics of Soccer Data",
                "DOI": "10.1109/vast.2014.7042477",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042477",
                "FirstPage": 13,
                "LastPage": 22,
                "PaperType": "C",
                "Abstract": "Soccer is one the most popular sports today and also very interesting from an scientific point of view. We present a system for analyzing high-frequency position-based soccer data at various levels of detail, allowing to interactively explore and analyze for movement features and game events. Our Visual Analytics method covers single-player, multi-player and event-based analytical views. Depending on the task the most promising features are semi-automatically selected, processed, and visualized. Our aim is to help soccer analysts in finding the most important and interesting events in a match. We present a flexible, modular, and expandable layer-based system allowing in-depth analysis. The integration of Visual Analytics techniques into the analysis process enables the analyst to find interesting events based on classification and allows, by a set of custom views, to communicate the found results. The feedback loop in the Visual Analytics pipeline helps to further improve the classification results. We evaluate our approach by investigating real-world soccer matches and collecting additional expert feedback. Several use cases and findings illustrate the capabilities of our approach.",
                "AuthorNamesDeduped": "Halldór Janetzko;Dominik Sacha;Manuel Stein;Tobias Schreck;Daniel A. Keim;Oliver Deussen",
                "AuthorNames": "Halld'or Janetzko;Dominik Sacha;Manuel Stein;Tobias Schreck;Daniel A. Keim;Oliver Deussen",
                "AuthorAffiliation": "University oj Konstanz;University oj Konstanz;University oj Konstanz;University oj Konstanz;University oj Konstanz;University oj Konstanz",
                "InternalReferences": "0.1109/tvcg.2012.263;10.1109/vast.2008.4677350;10.1109/tvcg.2007.70621;10.1109/tvcg.2013.228;10.1109/tvcg.2013.193;10.1109/tvcg.2013.207;10.1109/tvcg.2013.186;10.1109/tvcg.2013.192",
                "AuthorKeywords": "Visual Analytics, Sport Analytics, Soccer Analysis",
                "AminerCitationCount": 104,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 2238,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1262,
                "i": [
                    1262
                ]
            }
        },
        {
            "name": "Romain Vuillemot",
            "value": 409,
            "numPapers": 63,
            "cluster": "3",
            "visible": 1,
            "index": 455,
            "x": 58.944626866521226,
            "y": -205.12320922695852,
            "vy": 0,
            "vx": 0,
            "r": 1.4709268854346575,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Visual Sedimentation",
                "DOI": "10.1109/tvcg.2013.227",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.227",
                "FirstPage": 2446,
                "LastPage": 2455,
                "PaperType": "J",
                "Abstract": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.",
                "AuthorNamesDeduped": "Samuel Huron;Romain Vuillemot;Jean-Daniel Fekete",
                "AuthorNames": "Samuel Huron;Romain Vuillemot;Jean-Daniel Fekete",
                "AuthorAffiliation": "IRI, INRIA, France;INRIA, France;INRIA, France",
                "InternalReferences": "0.1109/vast.2012.6400552;10.1109/tvcg.2012.291;10.1109/tvcg.2011.179;10.1109/infvis.2003.1249014;10.1109/tvcg.2011.185;10.1109/tvcg.2008.166;10.1109/tvcg.2008.171;10.1109/infvis.2004.65;10.1109/tvcg.2007.70539;10.1109/tvcg.2013.227",
                "AuthorKeywords": "Design, Information Visualization, Dynamic visualization, Dynamic data, Data stream, Real time, Metaphor",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 1132,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1312,
                "i": [
                    1312
                ]
            }
        },
        {
            "name": "Hannah Pileggi",
            "value": 55,
            "numPapers": 5,
            "cluster": "3",
            "visible": 1,
            "index": 456,
            "x": 95.19913142587616,
            "y": 191.27761336800174,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "SnapShot: Visualization to Propel Ice Hockey Analytics",
                "DOI": "10.1109/tvcg.2012.263",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.263",
                "FirstPage": 2819,
                "LastPage": 2828,
                "PaperType": "J",
                "Abstract": "Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders.Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data, yet given the importance of a specific hockey statistic, shot length, we introduce a technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel.",
                "AuthorNamesDeduped": "Hannah Pileggi;Charles D. Stolper;J. Michael Boyle;John T. Stasko",
                "AuthorNames": "Hannah Pileggi;Charles D. Stolper;J. Michael Boyle;John T. Stasko",
                "AuthorAffiliation": "School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;Sports Analytics Institute LLC, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/tvcg.2010.179;10.1109/tvcg.2007.70537;10.1109/infvis.1997.636793;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/infvis.1996.559229",
                "AuthorKeywords": "Visual knowledge discovery, visual knowledge representation, hypothesis testing, visual evidence, human computer interaction",
                "AminerCitationCount": 129,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 2740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1408,
                "i": [
                    1408
                ]
            }
        },
        {
            "name": "J. Michael Boyle",
            "value": 55,
            "numPapers": 5,
            "cluster": "3",
            "visible": 1,
            "index": 457,
            "x": -199.6213328107018,
            "y": -76.82007216137609,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "SnapShot: Visualization to Propel Ice Hockey Analytics",
                "DOI": "10.1109/tvcg.2012.263",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.263",
                "FirstPage": 2819,
                "LastPage": 2828,
                "PaperType": "J",
                "Abstract": "Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders.Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data, yet given the importance of a specific hockey statistic, shot length, we introduce a technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel.",
                "AuthorNamesDeduped": "Hannah Pileggi;Charles D. Stolper;J. Michael Boyle;John T. Stasko",
                "AuthorNames": "Hannah Pileggi;Charles D. Stolper;J. Michael Boyle;John T. Stasko",
                "AuthorAffiliation": "School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;Sports Analytics Institute LLC, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/tvcg.2010.179;10.1109/tvcg.2007.70537;10.1109/infvis.1997.636793;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/infvis.1996.559229",
                "AuthorKeywords": "Visual knowledge discovery, visual knowledge representation, hypothesis testing, visual evidence, human computer interaction",
                "AminerCitationCount": 129,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 2740,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1408,
                "i": [
                    1408
                ]
            }
        },
        {
            "name": "Tom Polk",
            "value": 163,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 458,
            "x": 199.30323267614625,
            "y": -78.2829575631753,
            "vy": 0,
            "vx": 0,
            "r": 1.1876799078871618,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "CourtTime: Generating Actionable Insights into Tennis Matches Using Visual Analytics",
                "DOI": "10.1109/tvcg.2019.2934243",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934243",
                "FirstPage": 397,
                "LastPage": 406,
                "PaperType": "J",
                "Abstract": "Tennis players and coaches of all proficiency levels seek to understand and improve their play. Summary statistics alone are inadequate to provide the insights players need to improve their games. Spatio-temporal data capturing player and ball movements is likely to provide the actionable insights needed to identify player strengths, weaknesses, and strategies. To fully utilize this spatio-temporal data, we need to integrate it with domain-relevant context meta-data. In this paper, we propose CourtTime, a novel approach to perform data-driven visual analysis of individual tennis matches. Our visual approach introduces a novel visual metaphor, namely 1–D Space-Time Charts that enable the analysis of single points at a glance based on small multiples. We also employ user-driven sorting and clustering techniques and a layout technique that aligns the last few shots in a point to facilitate shot pattern discovery. We discuss the usefulness of CourtTime via an extensive case study and report on feedback from an amateur tennis player and three tennis coaches.",
                "AuthorNamesDeduped": "Tom Polk;Dominik Jäckle;Johannes Häußler;Jing Yang 0001",
                "AuthorNames": "Tom Polk;Dominik Jäckle;Johannes Häußler;Jing Yang",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of North Carolina",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/infvis.1996.559229;10.1109/visual.2001.964496;10.1109/tvcg.2017.2744218;10.1109/tvcg.2012.263;10.1109/tvcg.2013.192;10.1109/tvcg.2014.2346445",
                "AuthorKeywords": "Visual analytics,tennis analysis,sports analytics,spatio-temporal analysis",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1235,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 619,
                "i": [
                    619
                ]
            }
        },
        {
            "name": "Dominik Jäckle",
            "value": 187,
            "numPapers": 62,
            "cluster": "3",
            "visible": 1,
            "index": 459,
            "x": -94.18316325481175,
            "y": 192.560462606729,
            "vy": 0,
            "vx": 0,
            "r": 1.2153137593552101,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Temporal MDS Plots for Analysis of Multivariate Data",
                "DOI": "10.1109/tvcg.2015.2467553",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467553",
                "FirstPage": 141,
                "LastPage": 150,
                "PaperType": "J",
                "Abstract": "Multivariate time series data can be found in many application domains. Examples include data from computer networks, healthcare, social networks, or financial markets. Often, patterns in such data evolve over time among multiple dimensions and are hard to detect. Dimensionality reduction methods such as PCA and MDS allow analysis and visualization of multivariate data, but per se do not provide means to explore multivariate patterns over time. We propose Temporal Multidimensional Scaling (TMDS), a novel visualization technique that computes temporal one-dimensional MDS plots for multivariate data which evolve over time. Using a sliding window approach, MDS is computed for each data window separately, and the results are plotted sequentially along the time axis, taking care of plot alignment. Our TMDS plots enable visual identification of patterns based on multidimensional similarity of the data evolving over time. We demonstrate the usefulness of our approach in the field of network security and show in two case studies how users can iteratively explore the data to identify previously unknown, temporally evolving patterns.",
                "AuthorNamesDeduped": "Dominik Jäckle;Fabian Fischer 0001;Tobias Schreck;Daniel A. Keim",
                "AuthorNames": "Dominik Jäckle;Fabian Fischer;Tobias Schreck;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Graz University of Technology, Austria;University of Konstanz, Germany",
                "InternalReferences": "0.1109/vast.2009.5332593;10.1109/visual.1990.146402;10.1109/visual.1995.485140;10.1109/visual.1990.146386;10.1109/tvcg.2007.70592;10.1109/vast.2009.5332628",
                "AuthorKeywords": "Multivariate Data, Time Series, Data Reduction, Multidimensional Scaling",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1993,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1111,
                "i": [
                    1111
                ]
            }
        },
        {
            "name": "Johannes Häußler",
            "value": 56,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 460,
            "x": -60.6909227070832,
            "y": -205.83151338160746,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "CourtTime: Generating Actionable Insights into Tennis Matches Using Visual Analytics",
                "DOI": "10.1109/tvcg.2019.2934243",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934243",
                "FirstPage": 397,
                "LastPage": 406,
                "PaperType": "J",
                "Abstract": "Tennis players and coaches of all proficiency levels seek to understand and improve their play. Summary statistics alone are inadequate to provide the insights players need to improve their games. Spatio-temporal data capturing player and ball movements is likely to provide the actionable insights needed to identify player strengths, weaknesses, and strategies. To fully utilize this spatio-temporal data, we need to integrate it with domain-relevant context meta-data. In this paper, we propose CourtTime, a novel approach to perform data-driven visual analysis of individual tennis matches. Our visual approach introduces a novel visual metaphor, namely 1–D Space-Time Charts that enable the analysis of single points at a glance based on small multiples. We also employ user-driven sorting and clustering techniques and a layout technique that aligns the last few shots in a point to facilitate shot pattern discovery. We discuss the usefulness of CourtTime via an extensive case study and report on feedback from an amateur tennis player and three tennis coaches.",
                "AuthorNamesDeduped": "Tom Polk;Dominik Jäckle;Johannes Häußler;Jing Yang 0001",
                "AuthorNames": "Tom Polk;Dominik Jäckle;Johannes Häußler;Jing Yang",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of North Carolina",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/infvis.1996.559229;10.1109/visual.2001.964496;10.1109/tvcg.2017.2744218;10.1109/tvcg.2012.263;10.1109/tvcg.2013.192;10.1109/tvcg.2014.2346445",
                "AuthorKeywords": "Visual analytics,tennis analysis,sports analytics,spatio-temporal analysis",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1235,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 619,
                "i": [
                    619
                ]
            }
        },
        {
            "name": "Yueqi Hu",
            "value": 173,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 461,
            "x": 183.9882324255462,
            "y": 110.89783735007273,
            "vy": 0,
            "vx": 0,
            "r": 1.1991940126655152,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "TenniVis: Visualization for Tennis Match Analysis",
                "DOI": "10.1109/tvcg.2014.2346445",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346445",
                "FirstPage": 2339,
                "LastPage": 2348,
                "PaperType": "J",
                "Abstract": "Existing research efforts into tennis visualization have primarily focused on using ball and player tracking data to enhance professional tennis broadcasts and to aid coaches in helping their students. Gathering and analyzing this data typically requires the use of an array of synchronized cameras, which are expensive for non-professional tennis matches. In this paper, we propose TenniVis, a novel tennis match visualization system that relies entirely on data that can be easily collected, such as score, point outcomes, point lengths, service information, and match videos that can be captured by one consumer-level camera. It provides two new visualizations to allow tennis coaches and players to quickly gain insights into match performance. It also provides rich interactions to support ad hoc hypothesis development and testing. We first demonstrate the usefulness of the system by analyzing the 2007 Australian Open men's singles final. We then validate its usability by two pilot user studies where two college tennis coaches analyzed the matches of their own players. The results indicate that useful insights can quickly be discovered and ad hoc hypotheses based on these insights can conveniently be tested through linked match videos.",
                "AuthorNamesDeduped": "Tom Polk;Jing Yang 0001;Yueqi Hu;Ye Zhao 0003",
                "AuthorNames": "Tom Polk;Jing Yang;Yueqi Hu;Ye Zhao",
                "AuthorAffiliation": "University of North Carolina at Charlotte;University of North Carolina at Charlotte;University of North Carolina at Charlotte;Kent State University",
                "InternalReferences": "0.1109/tvcg.2012.263;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/infvis.1996.559229;10.1109/infvis.2002.1173148",
                "AuthorKeywords": "Visual knowledge discovery, sports analytics, tennis visualization",
                "AminerCitationCount": 91,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 2569,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1180,
                "i": [
                    1180
                ]
            }
        },
        {
            "name": "Ye Zhao 0003",
            "value": 207,
            "numPapers": 70,
            "cluster": "3",
            "visible": 1,
            "index": 462,
            "x": -210.80563060829482,
            "y": 42.555682391887004,
            "vy": 0,
            "vx": 0,
            "r": 1.238341968911917,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "TenniVis: Visualization for Tennis Match Analysis",
                "DOI": "10.1109/tvcg.2014.2346445",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346445",
                "FirstPage": 2339,
                "LastPage": 2348,
                "PaperType": "J",
                "Abstract": "Existing research efforts into tennis visualization have primarily focused on using ball and player tracking data to enhance professional tennis broadcasts and to aid coaches in helping their students. Gathering and analyzing this data typically requires the use of an array of synchronized cameras, which are expensive for non-professional tennis matches. In this paper, we propose TenniVis, a novel tennis match visualization system that relies entirely on data that can be easily collected, such as score, point outcomes, point lengths, service information, and match videos that can be captured by one consumer-level camera. It provides two new visualizations to allow tennis coaches and players to quickly gain insights into match performance. It also provides rich interactions to support ad hoc hypothesis development and testing. We first demonstrate the usefulness of the system by analyzing the 2007 Australian Open men's singles final. We then validate its usability by two pilot user studies where two college tennis coaches analyzed the matches of their own players. The results indicate that useful insights can quickly be discovered and ad hoc hypotheses based on these insights can conveniently be tested through linked match videos.",
                "AuthorNamesDeduped": "Tom Polk;Jing Yang 0001;Yueqi Hu;Ye Zhao 0003",
                "AuthorNames": "Tom Polk;Jing Yang;Yueqi Hu;Ye Zhao",
                "AuthorAffiliation": "University of North Carolina at Charlotte;University of North Carolina at Charlotte;University of North Carolina at Charlotte;Kent State University",
                "InternalReferences": "0.1109/tvcg.2012.263;10.1109/tvcg.2013.192;10.1109/visual.2001.964496;10.1109/infvis.1996.559229;10.1109/infvis.2002.1173148",
                "AuthorKeywords": "Visual knowledge discovery, sports analytics, tennis visualization",
                "AminerCitationCount": 91,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 2569,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1180,
                "i": [
                    1180
                ]
            }
        },
        {
            "name": "Anqi Cao",
            "value": 64,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 463,
            "x": 126.83245512987659,
            "y": -173.96415816405354,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Action-Evaluator: A Visualization Approach for Player Action Evaluation in Soccer",
                "DOI": "10.1109/tvcg.2023.3326524",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326524",
                "FirstPage": 880,
                "LastPage": 890,
                "PaperType": "J",
                "Abstract": "In soccer, player action evaluation provides a fine-grained method to analyze player performance and plays an important role in improving winning chances in future matches. However, previous studies on action evaluation only provide a score for each action, and hardly support inspecting and comparing player actions integrated with complex match context information such as team tactics and player locations. In this work, we collaborate with soccer analysts and coaches to characterize the domain problems of evaluating player performance based on action scores. We design a tailored visualization of soccer player actions that places the action choice together with the tactic it belongs to as well as the player locations in the same view. Based on the design, we introduce a visual analytics system, Action-Evaluator, to facilitate a comprehensive player action evaluation through player navigation, action investigation, and action explanation. With the system, analysts can find players to be analyzed efficiently, learn how they performed under various match situations, and obtain valuable insights to improve their action choices. The usefulness and effectiveness of this work are demonstrated by two case studies on a real-world dataset and an expert interview.",
                "AuthorNamesDeduped": "Anqi Cao;Xiao Xie;Mingxu Zhou;Hui Zhang 0051;Mingliang Xu;Yingcai Wu",
                "AuthorNames": "Anqi Cao;Xiao Xie;Mingxu Zhou;Hui Zhang;Mingliang Xu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Ministry of Education, School of Computer and Artificial Intelligence, Zhengzhou University, Engineering Research Center of Intelligent Swarm Systems, National Supercomputing Center, Zhengzhou, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/vast.2014.7042478;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2022.3209352;10.1109/tvcg.2022.3209452;10.1109/tvcg.2021.3114832;10.1109/tvcg.2022.3209373;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359",
                "AuthorKeywords": "Soccer Visualization,Player Evaluation,Design Study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 321,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 74,
                "i": [
                    74
                ]
            }
        },
        {
            "name": "Zheng Zhou",
            "value": 102,
            "numPapers": 36,
            "cluster": "3",
            "visible": 1,
            "index": 464,
            "x": 24.01465918385524,
            "y": 214.18052232703909,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kori: Interactive Synthesis of Text and Charts in Data Documents",
                "DOI": "10.1109/tvcg.2021.3114802",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114802",
                "FirstPage": 184,
                "LastPage": 194,
                "PaperType": "J",
                "Abstract": "Charts go hand in hand with text to communicate complex data and are widely adopted in news articles, online blogs, and academic papers. They provide graphical summaries of the data, while text explains the message and context. However, synthesizing information across text and charts is difficult; it requires readers to frequently shift their attention. We investigated ways to support the tight coupling of text and charts in data documents. To understand their interplay, we analyzed the design space of chart-text references through news articles and scientific papers. Informed by the analysis, we developed a mixed-initiative interface enabling users to construct interactive references between text and charts. It leverages natural language processing to automatically suggest references as well as allows users to manually construct other references effortlessly. A user study complemented with algorithmic evaluation of the system suggests that the interface provides an effective way to compose interactive data documents.",
                "AuthorNamesDeduped": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck 0001;Nam Wook Kim",
                "AuthorNames": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck;Nam Wook Kim",
                "AuthorAffiliation": "University of Duisburg-Essen, Germany;Boston College, USA;Harvard University, USA;University of Duisburg-Essen, Germany;Boston College, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2015.2467732;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2018.2865022;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2011.183;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Data-driven storytelling,interaction design,authoring,visualization-text linking,mixed-initiative interface,interactive documents",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 992,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 268,
                "i": [
                    268
                ]
            }
        },
        {
            "name": "Mingxu Zhou",
            "value": 0,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 465,
            "x": -162.55922734495581,
            "y": -141.86083887039078,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Action-Evaluator: A Visualization Approach for Player Action Evaluation in Soccer",
                "DOI": "10.1109/tvcg.2023.3326524",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326524",
                "FirstPage": 880,
                "LastPage": 890,
                "PaperType": "J",
                "Abstract": "In soccer, player action evaluation provides a fine-grained method to analyze player performance and plays an important role in improving winning chances in future matches. However, previous studies on action evaluation only provide a score for each action, and hardly support inspecting and comparing player actions integrated with complex match context information such as team tactics and player locations. In this work, we collaborate with soccer analysts and coaches to characterize the domain problems of evaluating player performance based on action scores. We design a tailored visualization of soccer player actions that places the action choice together with the tactic it belongs to as well as the player locations in the same view. Based on the design, we introduce a visual analytics system, Action-Evaluator, to facilitate a comprehensive player action evaluation through player navigation, action investigation, and action explanation. With the system, analysts can find players to be analyzed efficiently, learn how they performed under various match situations, and obtain valuable insights to improve their action choices. The usefulness and effectiveness of this work are demonstrated by two case studies on a real-world dataset and an expert interview.",
                "AuthorNamesDeduped": "Anqi Cao;Xiao Xie;Mingxu Zhou;Hui Zhang 0051;Mingliang Xu;Yingcai Wu",
                "AuthorNames": "Anqi Cao;Xiao Xie;Mingxu Zhou;Hui Zhang;Mingliang Xu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Ministry of Education, School of Computer and Artificial Intelligence, Zhengzhou University, Engineering Research Center of Intelligent Swarm Systems, National Supercomputing Center, Zhengzhou, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/vast.2014.7042478;10.1109/vast.2014.7042477;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2022.3209352;10.1109/tvcg.2022.3209452;10.1109/tvcg.2021.3114832;10.1109/tvcg.2022.3209373;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359",
                "AuthorKeywords": "Soccer Visualization,Player Evaluation,Design Study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 321,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 74,
                "i": [
                    74
                ]
            }
        },
        {
            "name": "Jiang Wu",
            "value": 93,
            "numPapers": 63,
            "cluster": "3",
            "visible": 1,
            "index": 466,
            "x": 215.92328807192115,
            "y": -5.208998772331924,
            "vy": 0,
            "vx": 0,
            "r": 1.1070811744386875,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "RASIPAM: Interactive Pattern Mining of Multivariate Event Sequences in Racket Sports",
                "DOI": "10.1109/tvcg.2022.3209452",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209452",
                "FirstPage": 940,
                "LastPage": 950,
                "PaperType": "J",
                "Abstract": "Experts in racket sports like tennis and badminton use tactical analysis to gain insight into competitors' playing styles. Many data-driven methods apply pattern mining to racket sports data — which is often recorded as multivariate event sequences — to uncover sports tactics. However, tactics obtained in this way are often inconsistent with those deduced by experts through their domain knowledge, which can be confusing to those experts. This work introduces RASIPAM, a RAcket-Sports Interactive PAttern Mining system, which allows experts to incorporate their knowledge into data mining algorithms to discover meaningful tactics interactively. RASIPAM consists of a constraint-based pattern mining algorithm that responds to the analysis demands of experts: Experts provide suggestions for finding tactics in intuitive written language, and these suggestions are translated into constraints to run the algorithm. RASIPAM further introduces a tailored visual interface that allows experts to compare the new tactics with the original ones and decide whether to apply a given adjustment. This interactive workflow iteratively progresses until experts are satisfied with all tactics. We conduct a quantitative experiment to show that our algorithm supports real-time interaction. Two case studies in tennis and in badminton respectively, each involving two domain experts, are conducted to show the effectiveness and usefulness of the system.",
                "AuthorNamesDeduped": "Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu",
                "AuthorNames": "Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;MIT, USA;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2021.3114861;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2018.2865018;10.1109/tvcg.2015.2467325;10.1109/tvcg.2021.3114848;10.1109/tvcg.2012.271;10.1109/tvcg.2012.213;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2019.2934630;10.1109/vast50239.2020.00009;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209447;10.1109/tvcg.2019.2934668;10.1109/tvcg.2019.2934267;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports Analytics,Multivariate Event Sequence,Interactive Pattern Mining,Comparative Visual Design",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 500,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 215,
                "i": [
                    215
                ]
            }
        },
        {
            "name": "Ziyang Guo",
            "value": 96,
            "numPapers": 84,
            "cluster": "3",
            "visible": 1,
            "index": 467,
            "x": -155.86327248935012,
            "y": 149.855397930507,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "RASIPAM: Interactive Pattern Mining of Multivariate Event Sequences in Racket Sports",
                "DOI": "10.1109/tvcg.2022.3209452",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209452",
                "FirstPage": 940,
                "LastPage": 950,
                "PaperType": "J",
                "Abstract": "Experts in racket sports like tennis and badminton use tactical analysis to gain insight into competitors' playing styles. Many data-driven methods apply pattern mining to racket sports data — which is often recorded as multivariate event sequences — to uncover sports tactics. However, tactics obtained in this way are often inconsistent with those deduced by experts through their domain knowledge, which can be confusing to those experts. This work introduces RASIPAM, a RAcket-Sports Interactive PAttern Mining system, which allows experts to incorporate their knowledge into data mining algorithms to discover meaningful tactics interactively. RASIPAM consists of a constraint-based pattern mining algorithm that responds to the analysis demands of experts: Experts provide suggestions for finding tactics in intuitive written language, and these suggestions are translated into constraints to run the algorithm. RASIPAM further introduces a tailored visual interface that allows experts to compare the new tactics with the original ones and decide whether to apply a given adjustment. This interactive workflow iteratively progresses until experts are satisfied with all tactics. We conduct a quantitative experiment to show that our algorithm supports real-time interaction. Two case studies in tennis and in badminton respectively, each involving two domain experts, are conducted to show the effectiveness and usefulness of the system.",
                "AuthorNamesDeduped": "Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu",
                "AuthorNames": "Jiang Wu;Dongyu Liu;Ziyang Guo;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;MIT, USA;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2021.3114861;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2018.2865018;10.1109/tvcg.2015.2467325;10.1109/tvcg.2021.3114848;10.1109/tvcg.2012.271;10.1109/tvcg.2012.213;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2019.2934630;10.1109/vast50239.2020.00009;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209447;10.1109/tvcg.2019.2934668;10.1109/tvcg.2019.2934267;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports Analytics,Multivariate Event Sequence,Interactive Pattern Mining,Comparative Visual Design",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 500,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 215,
                "i": [
                    215
                ]
            }
        },
        {
            "name": "Qingyang Xu",
            "value": 76,
            "numPapers": 41,
            "cluster": "3",
            "visible": 1,
            "index": 468,
            "x": 13.717507160287855,
            "y": -216.01349494257866,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "TacticFlow: Visual Analytics of Ever-Changing Tactics in Racket Sports",
                "DOI": "10.1109/tvcg.2021.3114832",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114832",
                "FirstPage": 835,
                "LastPage": 845,
                "PaperType": "J",
                "Abstract": "Event sequence mining is often used to summarize patterns from hundreds of sequences but faces special challenges when handling racket sports data. In racket sports (e.g., tennis and badminton), a player hitting the ball is considered a multivariate event consisting of multiple attributes (e.g., hit technique and ball position). A rally (i.e., a series of consecutive hits beginning with one player serving the ball and ending with one player winning a point) thereby can be viewed as a multivariate event sequence. Mining frequent patterns and depicting how patterns change over time is instructive and meaningful to players who want to learn more short-term competitive strategies (i.e., tactics) that encompass multiple hits. However, players in racket sports usually change their tactics rapidly according to the opponent's reaction, resulting in ever-changing tactic progression. In this work, we introduce a tailored visualization system built on a novel multivariate sequence pattern mining algorithm to facilitate explorative identification and analysis of various tactics and tactic progression. The algorithm can mine multiple non-overlapping multivariate patterns from hundreds of sequences effectively. Based on the mined results, we propose a glyph-based Sankey diagram to visualize the ever-changing tactic progression and support interactive data exploration. Through two case studies with four domain experts in tennis and badminton, we demonstrate that our system can effectively obtain insights about tactic progression in most racket sports. We further discuss the strengths and the limitations of our system based on domain experts' feedback.",
                "AuthorNamesDeduped": "Jiang Wu;Dongyu Liu;Ziyang Guo;Qingyang Xu;Yingcai Wu",
                "AuthorNames": "Jiang Wu;Dongyu Liu;Ziyang Guo;Qingyang Xu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Massachusetts Institute of Technology, United States;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934670;10.1109/tvcg.2020.3030442;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2020.3030465;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.200;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2016.2598591;10.1109/vast50239.2020.00009;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030458;10.1109/tvcg.2019.2934630",
                "AuthorKeywords": "Sports Analytics,Multivariate Event Sequence,Sequential Pattern Mining,Progression Analysis",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 1436,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 261,
                "i": [
                    261
                ]
            }
        },
        {
            "name": "David Feng 0001",
            "value": 72,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 469,
            "x": 135.94500993974864,
            "y": 168.72745559772318,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Matching Visual Saliency to Confidence in Plots of Uncertain Data",
                "DOI": "10.1109/tvcg.2010.176",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.176",
                "FirstPage": 980,
                "LastPage": 989,
                "PaperType": "J",
                "Abstract": "Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values. We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets.",
                "AuthorNamesDeduped": "David Feng 0001;Lester Kwock;Yueh Z. Lee;Russell M. Taylor II",
                "AuthorNames": "David Feng;Lester Kwock;Yueh Lee;Russell Taylor",
                "AuthorAffiliation": "University of North Carolina, Chapel Hill, USA;Department of Radiology, UNC Hospital, USA;Department of Radiology, UNC Hospital, USA;University of North Carolina, Chapel Hill, USA",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/infvis.2001.963286;10.1109/tvcg.2008.167;10.1109/tvcg.2009.179;10.1109/infvis.2002.1173145;10.1109/tvcg.2009.131;10.1109/visual.1999.809866;10.1109/tvcg.2009.114;10.1109/tvcg.2006.170;10.1109/infvis.2004.3;10.1109/visual.1994.346302;10.1109/tvcg.2008.153;10.1109/tvcg.2009.118",
                "AuthorKeywords": "Uncertainty visualization, brushing, scatter plots, parallel coordinates, multivariate data",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1081,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1705,
                "i": [
                    1705
                ]
            }
        },
        {
            "name": "Lester Kwock",
            "value": 72,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 470,
            "x": -214.44338813787112,
            "y": -32.61952304909372,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Matching Visual Saliency to Confidence in Plots of Uncertain Data",
                "DOI": "10.1109/tvcg.2010.176",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.176",
                "FirstPage": 980,
                "LastPage": 989,
                "PaperType": "J",
                "Abstract": "Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values. We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets.",
                "AuthorNamesDeduped": "David Feng 0001;Lester Kwock;Yueh Z. Lee;Russell M. Taylor II",
                "AuthorNames": "David Feng;Lester Kwock;Yueh Lee;Russell Taylor",
                "AuthorAffiliation": "University of North Carolina, Chapel Hill, USA;Department of Radiology, UNC Hospital, USA;Department of Radiology, UNC Hospital, USA;University of North Carolina, Chapel Hill, USA",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/infvis.2001.963286;10.1109/tvcg.2008.167;10.1109/tvcg.2009.179;10.1109/infvis.2002.1173145;10.1109/tvcg.2009.131;10.1109/visual.1999.809866;10.1109/tvcg.2009.114;10.1109/tvcg.2006.170;10.1109/infvis.2004.3;10.1109/visual.1994.346302;10.1109/tvcg.2008.153;10.1109/tvcg.2009.118",
                "AuthorKeywords": "Uncertainty visualization, brushing, scatter plots, parallel coordinates, multivariate data",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1081,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1705,
                "i": [
                    1705
                ]
            }
        },
        {
            "name": "Yueh Z. Lee",
            "value": 72,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 471,
            "x": 180.34940395225502,
            "y": -120.93011409101682,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Matching Visual Saliency to Confidence in Plots of Uncertain Data",
                "DOI": "10.1109/tvcg.2010.176",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.176",
                "FirstPage": 980,
                "LastPage": 989,
                "PaperType": "J",
                "Abstract": "Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values. We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets.",
                "AuthorNamesDeduped": "David Feng 0001;Lester Kwock;Yueh Z. Lee;Russell M. Taylor II",
                "AuthorNames": "David Feng;Lester Kwock;Yueh Lee;Russell Taylor",
                "AuthorAffiliation": "University of North Carolina, Chapel Hill, USA;Department of Radiology, UNC Hospital, USA;Department of Radiology, UNC Hospital, USA;University of North Carolina, Chapel Hill, USA",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/infvis.2001.963286;10.1109/tvcg.2008.167;10.1109/tvcg.2009.179;10.1109/infvis.2002.1173145;10.1109/tvcg.2009.131;10.1109/visual.1999.809866;10.1109/tvcg.2009.114;10.1109/tvcg.2006.170;10.1109/infvis.2004.3;10.1109/visual.1994.346302;10.1109/tvcg.2008.153;10.1109/tvcg.2009.118",
                "AuthorKeywords": "Uncertainty visualization, brushing, scatter plots, parallel coordinates, multivariate data",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1081,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1705,
                "i": [
                    1705
                ]
            }
        },
        {
            "name": "Russell M. Taylor II",
            "value": 98,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 472,
            "x": -51.35128807110573,
            "y": 211.21800399927633,
            "vy": 0,
            "vx": 0,
            "r": 1.112838226827864,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Matching Visual Saliency to Confidence in Plots of Uncertain Data",
                "DOI": "10.1109/tvcg.2010.176",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.176",
                "FirstPage": 980,
                "LastPage": 989,
                "PaperType": "J",
                "Abstract": "Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values. We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets.",
                "AuthorNamesDeduped": "David Feng 0001;Lester Kwock;Yueh Z. Lee;Russell M. Taylor II",
                "AuthorNames": "David Feng;Lester Kwock;Yueh Lee;Russell Taylor",
                "AuthorAffiliation": "University of North Carolina, Chapel Hill, USA;Department of Radiology, UNC Hospital, USA;Department of Radiology, UNC Hospital, USA;University of North Carolina, Chapel Hill, USA",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/infvis.2001.963286;10.1109/tvcg.2008.167;10.1109/tvcg.2009.179;10.1109/infvis.2002.1173145;10.1109/tvcg.2009.131;10.1109/visual.1999.809866;10.1109/tvcg.2009.114;10.1109/tvcg.2006.170;10.1109/infvis.2004.3;10.1109/visual.1994.346302;10.1109/tvcg.2008.153;10.1109/tvcg.2009.118",
                "AuthorKeywords": "Uncertainty visualization, brushing, scatter plots, parallel coordinates, multivariate data",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1081,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1705,
                "i": [
                    1705
                ]
            }
        },
        {
            "name": "Waqas Javed",
            "value": 140,
            "numPapers": 5,
            "cluster": "5",
            "visible": 1,
            "index": 473,
            "x": -104.92172235444332,
            "y": -190.63428909348158,
            "vy": 0,
            "vx": 0,
            "r": 1.1611974668969487,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Graphical Perception of Multiple Time Series",
                "DOI": "10.1109/tvcg.2010.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.162",
                "FirstPage": 927,
                "LastPage": 934,
                "PaperType": "J",
                "Abstract": "Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759-1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series--such as small multiples and horizon graphs--are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques--like standard line graphs--are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.",
                "AuthorNamesDeduped": "Waqas Javed;Bryan McDonnel;Niklas Elmqvist",
                "AuthorNames": "Waqas Javed;Bryan McDonnel;Niklas Elmqvist",
                "AuthorAffiliation": "Purdue University, West Lafayette, IN, USA;Purdue University, West Lafayette, IN, USA;Purdue University, West Lafayette, IN, USA",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2007.70583;10.1109/tvcg.2007.70535;10.1109/infvis.1999.801851;10.1109/tvcg.2008.125;10.1109/infvis.2005.1532144",
                "AuthorKeywords": "Line graphs, braided graphs, horizon graphs, small multiples, stacked graphs, evaluation, design guidelines",
                "AminerCitationCount": 321,
                "CitationCountCrossRef": 183,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 3978,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1685,
                "i": [
                    1685
                ]
            }
        },
        {
            "name": "Bryan McDonnel",
            "value": 158,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 474,
            "x": 206.35518429358592,
            "y": 69.76774265489881,
            "vy": 0,
            "vx": 0,
            "r": 1.181922855497985,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Graphical Perception of Multiple Time Series",
                "DOI": "10.1109/tvcg.2010.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.162",
                "FirstPage": 927,
                "LastPage": 934,
                "PaperType": "J",
                "Abstract": "Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759-1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series--such as small multiples and horizon graphs--are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques--like standard line graphs--are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.",
                "AuthorNamesDeduped": "Waqas Javed;Bryan McDonnel;Niklas Elmqvist",
                "AuthorNames": "Waqas Javed;Bryan McDonnel;Niklas Elmqvist",
                "AuthorAffiliation": "Purdue University, West Lafayette, IN, USA;Purdue University, West Lafayette, IN, USA;Purdue University, West Lafayette, IN, USA",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2007.70583;10.1109/tvcg.2007.70535;10.1109/infvis.1999.801851;10.1109/tvcg.2008.125;10.1109/infvis.2005.1532144",
                "AuthorKeywords": "Line graphs, braided graphs, horizon graphs, small multiples, stacked graphs, evaluation, design guidelines",
                "AminerCitationCount": 321,
                "CitationCountCrossRef": 183,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 3978,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1685,
                "i": [
                    1685
                ]
            }
        },
        {
            "name": "Carlos Eduardo Scheidegger",
            "value": 451,
            "numPapers": 56,
            "cluster": "6",
            "visible": 1,
            "index": 475,
            "x": -199.4972103991843,
            "y": 88.03898592637006,
            "vy": 0,
            "vx": 0,
            "r": 1.5192861255037422,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "VisTrails: enabling interactive multiple-view visualizations",
                "DOI": "10.1109/visual.2005.1532788",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532788",
                "FirstPage": 135,
                "LastPage": 142,
                "PaperType": "C",
                "Abstract": "VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.",
                "AuthorNamesDeduped": "Louis Bavoil;Steven P. Callahan;Carlos Eduardo Scheidegger;Huy T. Vo;Patricia Crossno;Cláudio T. Silva;Juliana Freire",
                "AuthorNames": "L. Bavoil;S.P. Callahan;P.J. Crossno;J. Freire;C.E. Scheidegger;C.T. Silva;H.T. Vo",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Sandia National Laboratories, USA;School of Computing, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1998.745299;10.1109/infvis.2004.2;10.1109/visual.2004.112;10.1109/visual.2002.1183791",
                "AuthorKeywords": "interrogative visualization, dataflow, caching, coordinated views",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1291,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2364,
                "i": [
                    2364
                ]
            }
        },
        {
            "name": "Rebecca Kehlbeck",
            "value": 39,
            "numPapers": 51,
            "cluster": "5",
            "visible": 1,
            "index": 476,
            "x": 87.72565415160994,
            "y": -199.88549122853343,
            "vy": 0,
            "vx": 0,
            "r": 1.0449050086355787,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "VIANA: Visual Interactive Annotation of Argumentation",
                "DOI": "10.1109/vast47406.2019.8986917",
                "Link": "http://dx.doi.org/10.1109/VAST47406.2019.8986917",
                "FirstPage": 11,
                "LastPage": 22,
                "PaperType": "C",
                "Abstract": "Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.",
                "AuthorNamesDeduped": "Fabian Sperrle;Rita Sevastjanova;Rebecca Kehlbeck;Mennatallah El-Assady",
                "AuthorNames": "Fabian Sperrle;Rita Sevastjanova;Rebecca Kehlbeck;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/vast.2012.6400485;10.1109/tvcg.2006.156;10.1109/tvcg.2019.2934654;10.1109/tvcg.2017.2745080;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2467531;10.1109/tvcg.2007.70539;10.1109/tvcg.2008.127;10.1109/tvcg.2014.2346677;10.1109/tvcg.2015.2467759;10.1109/tvcg.2012.262",
                "AuthorKeywords": "Argumentation annotation,machine learning,user interaction,layered interfaces,semantic transitions",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 390,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 633,
                "i": [
                    633
                ]
            }
        },
        {
            "name": "Kecheng Lu",
            "value": 87,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 477,
            "x": 70.40830672419803,
            "y": 206.86389328307456,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Palettailor: Discriminable Colorization for Categorical Data",
                "DOI": "10.1109/tvcg.2020.3030406",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030406",
                "FirstPage": 475,
                "LastPage": 484,
                "PaperType": "J",
                "Abstract": "We present an integrated approach for creating and assigning color palettes to different visualizations such as multi-class scatterplots, line, and bar charts. While other methods separate the creation of colors from their assignment, our approach takes data characteristics into account to produce color palettes, which are then assigned in a way that fosters better visual discrimination of classes. To do so, we use a customized optimization based on simulated annealing to maximize the combination of three carefully designed color scoring functions: point distinctness, name difference, and color discrimination. We compare our approach to state-of-the-art palettes with a controlled user study for scatterplots and line charts, furthermore we performed a case study. Our results show that Palettailor, as a fully-automated approach, generates color palettes with a higher discrimination quality than existing approaches. The efficiency of our optimization allows us also to incorporate user modifications into the color selection process.",
                "AuthorNamesDeduped": "Kecheng Lu;Mi Feng;Xin Chen;Michael Sedlmair;Oliver Deussen;Dani Lischinski;Zhanglin Cheng;Yunhai Wang",
                "AuthorNames": "Kecheng Lu;Mi Feng;Xin Chen;Michael Sedlmair;Oliver Deussen;Dani Lischinski;Zhanglin Cheng;Yunhai Wang",
                "AuthorAffiliation": "Shandong University;Twitter Inc.;Shandong University;VISUS, University of Stuttgart, Germany;Shenzhen VisuCA Key Lab, SIAT, China and Konstanz University, Germany;Hebrew University, Jerusalem, Israel;Shenzhen VisuCA Key Lab, SIAT, China;Shandong University",
                "InternalReferences": "0.1109/tvcg.2014.2346594;10.1109/tvcg.2016.2599214;10.1109/tvcg.2013.183;10.1109/tvcg.2016.2598918;10.1109/visual.1996.568118;10.1109/tvcg.2010.162;10.1109/tvcg.2009.113;10.1109/tvcg.2015.2467471;10.1109/tvcg.2008.118;10.1109/tvcg.2018.2864912",
                "AuthorKeywords": "Color Palette,Discriminability,Multi-Class Scatterplot,Line Chart,Bar Chart",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 872,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 386,
                "i": [
                    386
                ]
            }
        },
        {
            "name": "Xin Chen",
            "value": 126,
            "numPapers": 61,
            "cluster": "5",
            "visible": 1,
            "index": 478,
            "x": -191.85202346588017,
            "y": -105.08473291609666,
            "vy": 0,
            "vx": 0,
            "r": 1.145077720207254,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Optimizing Color Assignment for Perception of Class Separability in Multiclass Scatterplots",
                "DOI": "10.1109/tvcg.2018.2864912",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864912",
                "FirstPage": 820,
                "LastPage": 829,
                "PaperType": "J",
                "Abstract": "Appropriate choice of colors significantly aids viewers in understanding the structures in multiclass scatterplots and becomes more important with a growing number of data points and groups. An appropriate color mapping is also an important parameter for the creation of an aesthetically pleasing scatterplot. Currently, users of visualization software routinely rely on color mappings that have been pre-defined by the software. A default color mapping, however, cannot ensure an optimal perceptual separability between groups, and sometimes may even lead to a misinterpretation of the data. In this paper, we present an effective approach for color assignment based on a set of given colors that is designed to optimize the perception of scatterplots. Our approach takes into account the spatial relationships, density, degree of overlap between point clusters, and also the background color. For this purpose, we use a genetic algorithm that is able to efficiently find good color assignments. We implemented an interactive color assignment system with three extensions of the basic method that incorporates top K suggestions, user-defined color subsets, and classes of interest for the optimization. To demonstrate the effectiveness of our assignment technique, we conducted a numerical study and a controlled user study to compare our approach with default color assignments; our findings were verified by two expert studies. The results show that our approach is able to support users in distinguishing cluster numbers faster and more precisely than default assignment methods.",
                "AuthorNamesDeduped": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Universitat Stuttgart, Stuttgart, Baden-Württemberg, DE;Chinese University of Hong Kong, New Territories, HK;Universitat Konstanz, Konstanz, Baden-Württemberg, DE;Peking University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2016.2599214;10.1109/tvcg.2013.183;10.1109/tvcg.2016.2598918;10.1109/visual.1996.568118;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.153;10.1109/tvcg.2015.2467471;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2008.118",
                "AuthorKeywords": "Color perception,visual design,scatterplots",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1169,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 667,
                "i": [
                    667
                ]
            }
        },
        {
            "name": "Jarke J. van Wijk",
            "value": 1503,
            "numPapers": 155,
            "cluster": "6",
            "visible": 1,
            "index": 479,
            "x": 212.67130781910393,
            "y": -52.16238904145356,
            "vy": 0,
            "vx": 0,
            "r": 2.7305699481865284,
            "node": {
                "Conference": "InfoVis",
                "Year": 1999,
                "Title": "Cluster and calendar based visualization of time series data",
                "DOI": "10.1109/infvis.1999.801851",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1999.801851",
                "FirstPage": 4,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "A new method is presented to get an insight into univariate time series data. The problem addressed is how to identify patterns and trends on multiple time scales (days, weeks, seasons) simultaneously. The solution presented is to cluster similar daily data patterns, and to visualize the average patterns as graphs and the corresponding days on a calendar. This presentation provides a quick insight into both standard and exceptional patterns. Furthermore, it is well suited to interactive exploration. Two applications, numbers of employees present and energy consumption, are presented.",
                "AuthorNamesDeduped": "Jarke J. van Wijk;Edward R. van Selow",
                "AuthorNames": "J.J. Van Wijk;E.R. Van Selow",
                "AuthorAffiliation": "Department of Mathematics and Computing Science, Eindhovan University of Technology, Eindhoven, Netherlands;Netherlands Energy Research Foundation, Petten, Netherlands",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 446,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 7,
                "DownloadsXplore": 2638,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3031,
                "i": [
                    3031
                ]
            }
        },
        {
            "name": "Daniel Archambault",
            "value": 177,
            "numPapers": 61,
            "cluster": "1",
            "visible": 1,
            "index": 480,
            "x": -121.70873008771795,
            "y": 182.31013416822177,
            "vy": 0,
            "vx": 0,
            "r": 1.2037996545768566,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "In Search of Patient Zero: Visual Analytics of Pathogen Transmission Pathways in Hospitals",
                "DOI": "10.1109/tvcg.2020.3030437",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030437",
                "FirstPage": 711,
                "LastPage": 721,
                "PaperType": "J",
                "Abstract": "Pathogen outbreaks (i.e., outbreaks of bacteria and viruses) in hospitals can cause high mortality rates and increase costs for hospitals significantly. An outbreak is generally noticed when the number of infected patients rises above an endemic level or the usual prevalence of a pathogen in a defined population. Reconstructing transmission pathways back to the source of an outbreak - the patient zero or index patient - requires the analysis of microbiological data and patient contacts. This is often manually completed by infection control experts. We present a novel visual analytics approach to support the analysis of transmission pathways, patient contacts, the progression of the outbreak, and patient timelines during hospitalization. Infection control experts applied our solution to a real outbreak of Klebsiella pneumoniae in a large German hospital. Using our system, our experts were able to scale the analysis of transmission pathways to longer time intervals (i.e., several years of data instead of days) and across a larger number of wards. Also, the system is able to reduce the analysis time from days to hours. In our final study, feedback from twenty-five experts from seven German hospitals provides evidence that our solution brings significant benefits for analyzing outbreaks.",
                "AuthorNamesDeduped": "Tom Baumgartl;Markus Petzold;Marcel Wunderlich;Markus Höhn;Daniel Archambault;M. Lieser;A. Dalpke;Simone Scheithauer;Michael Marschollek;Vanessa Eichel;Nico T. Mutters;Highmed Consortium;Tatiana von Landesberger",
                "AuthorNames": "T. Baumgartl;N. T. Mutters;Highmed Consortium;T. Von Landesberger;M. Petzold;M. Wunderlich;M. Hohn;D. Archambault;M. Lieser;A. Dalpke;S. Scheithauer;M. Marschollek;V. M. Eichel",
                "AuthorAffiliation": "TU Darmstadt, Darmstadt, Germany;University Hospital Heidelberg, Heidelberg, Germany;TU Darmstadt, Darmstadt, Germany;TU Darmstadt, Darmstadt, Germany;Swansea University, Swansea, United Kingdom;University Hospital Heidelberg, Heidelberg, Germany;TU Dresden, Dresden, Germany;University Medicine Gottingen, Universitat Gottingen, Germany;L. Reichertz Institute for Medical Informatics, Hannover, Germany;University Hospital Heidelberg, Heidelberg, Germany;University Hospital Heidelberg, Heidelberg, Germany;L. Reichertz Institute for Medical Informatics, Hannover, Germany;TU Darmstadt, Darmstadt, Germany and Universitat Rostock, Rostock, Germany",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2015.2467851;10.1109/tvcg.2011.185;10.1109/vast.2015.7347626;10.1109/tvcg.2011.239;10.1109/tvcg.2006.156;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598588;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.196;10.1109/tvcg.2013.200;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2015.2468078;10.1109/vast.2012.6400553;10.1109/vast.2009.5333893;10.1109/tvcg.2015.2467751",
                "AuthorKeywords": "dynamic networks,visualization applications,health,medicine,outbreak,Klebsiella,infection control",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 990,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 476,
                "i": [
                    476
                ]
            }
        },
        {
            "name": "John Thompson 0002",
            "value": 53,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 481,
            "x": -33.439238494380454,
            "y": -216.86820266907722,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Critical Reflections on Visualization Authoring Systems",
                "DOI": "10.1109/tvcg.2019.2934281",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934281",
                "FirstPage": 461,
                "LastPage": 471,
                "PaperType": "J",
                "Abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed —Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John T. Stasko;John Thompson 0002;Matthew Brehmer;Zhicheng Liu 0001",
                "AuthorNames": "Arvind Satyanarayan;Bongshin Lee;Donghao Ren;Jeffrey Heer;John Stasko;John Thompson;Matthew Brehmer;Zhicheng Liu",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Microsoft Research;University of California, Santa Barbara;University of Washington;Georgia Institute of Technology;Georgia Institute of Technology;Microsoft Research;Adobe Research",
                "InternalReferences": "0.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Critical reflection,visualization authoring,expressivity,learnability,reusability",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 529,
                "i": [
                    529
                ]
            }
        },
        {
            "name": "Jun Wang",
            "value": 102,
            "numPapers": 9,
            "cluster": "3",
            "visible": 1,
            "index": 482,
            "x": 171.32705288233913,
            "y": 137.4665084689801,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "The Visual Causality Analyst: An Interactive Interface for Causal Reasoning",
                "DOI": "10.1109/tvcg.2015.2467931",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467931",
                "FirstPage": 230,
                "LastPage": 239,
                "PaperType": "J",
                "Abstract": "Uncovering the causal relations that exist among variables in multivariate datasets is one of the ultimate goals in data analytics. Causation is related to correlation but correlation does not imply causation. While a number of casual discovery algorithms have been devised that eliminate spurious correlations from a network, there are no guarantees that all of the inferred causations are indeed true. Hence, bringing a domain expert into the casual reasoning loop can be of great benefit in identifying erroneous casual relationships suggested by the discovery algorithm. To address this need we present the Visual Causal Analyst - a novel visual causal reasoning framework that allows users to apply their expertise, verify and edit causal links, and collaborate with the causal discovery algorithm to identify a valid causal network. Its interface consists of both an interactive 2D graph view and a numerical presentation of salient statistical parameters, such as regression coefficients, p-values, and others. Both help users in gaining a good understanding of the landscape of causal structures particularly when the number of variables is large. Our framework is also novel in that it can handle both numerical and categorical variables within one unified model and return plausible results. We demonstrate its use via a set of case studies using multiple practical datasets.",
                "AuthorNamesDeduped": "Jun Wang;Klaus Mueller 0001",
                "AuthorNames": "Jun Wang;Klaus Mueller",
                "AuthorAffiliation": "Computer Science Department, Stony Brook University, Stony Brook, NY;Computer Science Dept., SUNY, Korea and Computer Science Department, Stony Brook University, Stony Brook, NY",
                "InternalReferences": "0.1109/infvis.2003.1249025;10.1109/tvcg.2007.70528;10.1109/tvcg.2012.225;10.1109/vast.2007.4388999",
                "AuthorKeywords": "Visual knowledge discovery, Causality, Hypothesis testing, Visual evidence, High-dimensional data",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 1808,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1114,
                "i": [
                    1114
                ]
            }
        },
        {
            "name": "Filip Sadlo",
            "value": 187,
            "numPapers": 69,
            "cluster": "11",
            "visible": 1,
            "index": 483,
            "x": -219.41554983070114,
            "y": 14.381115829139173,
            "vy": 0,
            "vx": 0,
            "r": 1.2153137593552101,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visualization of Discontinuous Vector Field Topology",
                "DOI": "10.1109/tvcg.2023.3326519",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326519",
                "FirstPage": 45,
                "LastPage": 54,
                "PaperType": "J",
                "Abstract": "This paper extends the concept and the visualization of vector field topology to vector fields with discontinuities. We address the non-uniqueness of flow in such fields by introduction of a time-reversible concept of equivalence. This concept generalizes streamlines to streamsets and thus vector field topology to discontinuous vector fields in terms of invariant streamsets. We identify respective novel critical structures as well as their manifolds, investigate their interplay with traditional vector field topology, and detail the application and interpretation of our approach using specifically designed synthetic cases and a simulated case from physics.",
                "AuthorNamesDeduped": "Egzon Miftari;Daniel Durstewitz;Filip Sadlo",
                "AuthorNames": "Egzon Miftari;Daniel Durstewitz;Filip Sadlo",
                "AuthorAffiliation": "Heidelberg University, Germany;Heidelberg University, Germany;Heidelberg University, Germany",
                "InternalReferences": "10.1109/visual.1997.663858;10.1109/visual.2003.1250376",
                "AuthorKeywords": "Discontinuous vector field topology,equivalence in non-unique flow,non-smooth dynamical systems",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 289,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 79,
                "i": [
                    79
                ]
            }
        },
        {
            "name": "Gerik Scheuermann",
            "value": 596,
            "numPapers": 132,
            "cluster": "11",
            "visible": 1,
            "index": 484,
            "x": 152.2330780765265,
            "y": -158.98141381729562,
            "vy": 0,
            "vx": 0,
            "r": 1.6862406447898675,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "Visualization of higher order singularities in vector fields",
                "DOI": "10.1109/visual.1997.663858",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663858",
                "FirstPage": 67,
                "LastPage": 74,
                "PaperType": "C",
                "Abstract": "Presents an algorithm for the visualization of vector field topology based on Clifford algebra. It allows the detection of higher-order singularities. This is accomplished by first analysing the possible critical points and then choosing a suitable polynomial approximation, because conventional methods based on piecewise linear or bilinear approximation do not allow higher-order critical points and destroy the topology in such cases. The algorithm is still very fast, because of using linear approximation outside the areas with several critical points.",
                "AuthorNamesDeduped": "Gerik Scheuermann;Hans Hagen;Heinz Krüger;Martin Menzel;Alyn P. Rockwood",
                "AuthorNames": "G. Scheuermann;H. Hagen;H. Kruger;M. Menzel;A. Rockwood",
                "AuthorAffiliation": "Department of Computer Science, University of Kaiserslautern, Germany;Department of Computer Science, University of Kaiserslautern, Germany;Department of Physics, University of Kaiserslautern, Germany;Department of Physics, University of Kaiserslautern, Germany;Department of Computer Science, Arizona State University, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 110,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 7,
                "DownloadsXplore": 150,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3254,
                "i": [
                    3254
                ]
            }
        },
        {
            "name": "Hans Hagen",
            "value": 454,
            "numPapers": 98,
            "cluster": "11",
            "visible": 1,
            "index": 485,
            "x": -4.866546460898656,
            "y": 220.28689639999908,
            "vy": 0,
            "vx": 0,
            "r": 1.522740356937248,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "Visualization of higher order singularities in vector fields",
                "DOI": "10.1109/visual.1997.663858",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663858",
                "FirstPage": 67,
                "LastPage": 74,
                "PaperType": "C",
                "Abstract": "Presents an algorithm for the visualization of vector field topology based on Clifford algebra. It allows the detection of higher-order singularities. This is accomplished by first analysing the possible critical points and then choosing a suitable polynomial approximation, because conventional methods based on piecewise linear or bilinear approximation do not allow higher-order critical points and destroy the topology in such cases. The algorithm is still very fast, because of using linear approximation outside the areas with several critical points.",
                "AuthorNamesDeduped": "Gerik Scheuermann;Hans Hagen;Heinz Krüger;Martin Menzel;Alyn P. Rockwood",
                "AuthorNames": "G. Scheuermann;H. Hagen;H. Kruger;M. Menzel;A. Rockwood",
                "AuthorAffiliation": "Department of Computer Science, University of Kaiserslautern, Germany;Department of Computer Science, University of Kaiserslautern, Germany;Department of Physics, University of Kaiserslautern, Germany;Department of Physics, University of Kaiserslautern, Germany;Department of Computer Science, Arizona State University, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 110,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 7,
                "DownloadsXplore": 150,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3254,
                "i": [
                    3254
                ]
            }
        },
        {
            "name": "Holger Theisel",
            "value": 747,
            "numPapers": 149,
            "cluster": "11",
            "visible": 1,
            "index": 486,
            "x": -145.36269381374228,
            "y": -165.89058818152466,
            "vy": 0,
            "vx": 0,
            "r": 1.8601036269430051,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Saddle connectors - an approach to visualizing the topological skeleton of complex 3D vector fields",
                "DOI": "10.1109/visual.2003.1250376",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250376",
                "FirstPage": 225,
                "LastPage": 232,
                "PaperType": "C",
                "Abstract": "One of the reasons that topological methods have a limited popularity for the visualization of complex 3D flow fields is the fact that such topological structures contain a number of separating stream surfaces. Since these stream surfaces tend to hide each other as well as other topological features, for complex 3D topologies the visualizations become cluttered and hardly interpretable. This paper proposes to use particular stream lines called saddle connectors instead of separating stream surfaces and to depict single surfaces only on user demand. We discuss properties and computational issues of saddle connectors and apply these methods to complex flow data. We show that the use of saddle connectors makes topological skeletons available as a valuable visualization tool even for topologically complex 3D flow data.",
                "AuthorNamesDeduped": "Holger Theisel;Tino Weinkauf;Hans-Christian Hege;Hans-Peter Seidel",
                "AuthorNames": "H. Theisel;T. Weinkauf;H.-C. Hege;H.-P. Seidel",
                "AuthorAffiliation": "MPI Informatik Saarbrücken, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;MPI Informatik Saarbrücken, Germany",
                "InternalReferences": "0.1109/visual.2000.885714;10.1109/visual.1999.809874;10.1109/visual.1998.745284;10.1109/visual.1998.745291;10.1109/visual.1999.809907;10.1109/visual.1992.235211;10.1109/visual.1993.398875;10.1109/visual.2001.964506;10.1109/visual.2000.885716;10.1109/visual.2001.964507;10.1109/visual.1991.175773",
                "AuthorKeywords": "3D flow visualization, vector field topology, critical points, separatrices",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 516,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2660,
                "i": [
                    2660
                ]
            }
        },
        {
            "name": "Hans-Peter Seidel",
            "value": 347,
            "numPapers": 66,
            "cluster": "11",
            "visible": 1,
            "index": 487,
            "x": 219.46862025100953,
            "y": 24.15625643840878,
            "vy": 0,
            "vx": 0,
            "r": 1.3995394358088658,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Saddle connectors - an approach to visualizing the topological skeleton of complex 3D vector fields",
                "DOI": "10.1109/visual.2003.1250376",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250376",
                "FirstPage": 225,
                "LastPage": 232,
                "PaperType": "C",
                "Abstract": "One of the reasons that topological methods have a limited popularity for the visualization of complex 3D flow fields is the fact that such topological structures contain a number of separating stream surfaces. Since these stream surfaces tend to hide each other as well as other topological features, for complex 3D topologies the visualizations become cluttered and hardly interpretable. This paper proposes to use particular stream lines called saddle connectors instead of separating stream surfaces and to depict single surfaces only on user demand. We discuss properties and computational issues of saddle connectors and apply these methods to complex flow data. We show that the use of saddle connectors makes topological skeletons available as a valuable visualization tool even for topologically complex 3D flow data.",
                "AuthorNamesDeduped": "Holger Theisel;Tino Weinkauf;Hans-Christian Hege;Hans-Peter Seidel",
                "AuthorNames": "H. Theisel;T. Weinkauf;H.-C. Hege;H.-P. Seidel",
                "AuthorAffiliation": "MPI Informatik Saarbrücken, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;MPI Informatik Saarbrücken, Germany",
                "InternalReferences": "0.1109/visual.2000.885714;10.1109/visual.1999.809874;10.1109/visual.1998.745284;10.1109/visual.1998.745291;10.1109/visual.1999.809907;10.1109/visual.1992.235211;10.1109/visual.1993.398875;10.1109/visual.2001.964506;10.1109/visual.2000.885716;10.1109/visual.2001.964507;10.1109/visual.1991.175773",
                "AuthorKeywords": "3D flow visualization, vector field topology, critical points, separatrices",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 516,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2660,
                "i": [
                    2660
                ]
            }
        },
        {
            "name": "Aimen Gaba",
            "value": 12,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 488,
            "x": -178.329267963755,
            "y": 130.57056401621023,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Comparison Conundrum and the Chamber of Visualizations: An Exploration of How Language Influences Visual Design",
                "DOI": "10.1109/tvcg.2022.3209456",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209456",
                "FirstPage": 1211,
                "LastPage": 1221,
                "PaperType": "J",
                "Abstract": "The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly). We conducted a user study with sixteen data visualization experts and non-experts to investigate how they designed visualizations for comparisons in our design space. Based on the rich set of visualization techniques observed, we extracted key design features from the visualizations and synthesized them into a subset of sixteen representative visualization designs. We then conducted a follow-up study to validate user preferences for the sixteen representative visualizations corresponding to utterances in our design space. Findings from these studies suggest guidelines and future directions for designing natural language interfaces and recommendation tools to better support natural language comparisons in visual analytics.",
                "AuthorNamesDeduped": "Aimen Gaba;Vidya Setlur;Arjun Srinivasan;Jane Hoffswell;Cindy Xiong",
                "AuthorNames": "Aimen Gaba;Vidya Setlur;Arjun Srinivasan;Jane Hoffswell;Cindy Xiong",
                "AuthorAffiliation": "UMass Amherst, USA;Tableau Research, USA;Tableau Research, USA;Adobe Research, USA;UMass Amherst, USA",
                "InternalReferences": "0.1109/tvcg.2017.2744199;10.1109/tvcg.2013.183;10.1109/tvcg.2007.70556;10.1109/tvcg.2019.2934786;10.1109/tvcg.2011.194;10.1109/tvcg.2019.2934801;10.1109/tvcg.2016.2599030;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2021.3114814;10.1109/tvcg.2016.2598920",
                "AuthorKeywords": "Comparative constructions,cardinality,explicit and implicit comparisons,natural language,intent,visual analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 408,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 183,
                "i": [
                    183
                ]
            }
        },
        {
            "name": "Zhanna Kaufman",
            "value": 0,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 489,
            "x": 43.3395954199691,
            "y": -216.96008727144581,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning",
                "DOI": "10.1109/tvcg.2023.3327192",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327192",
                "FirstPage": 327,
                "LastPage": 337,
                "PaperType": "J",
                "Abstract": "Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.",
                "AuthorNamesDeduped": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorNames": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorAffiliation": "University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;Global Compliance, TD Bank, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA",
                "InternalReferences": "10.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467732;10.1109/vast47406.2019.8986948;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209484;10.1109/tvcg.2021.3114805;10.1109/tvcg.2022.3209377;10.1109/tvcg.2018.2864884;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591;10.1109/tvcg.2020.3030434;10.1109/tvcg.2022.3209383;10.1109/tvcg.2014.2346320;10.1109/tvcg.2020.3030471;10.1109/tvcg.2019.2934619;10.1109/tvcg.2021.3114850;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "machine learning,fairness,bias,trust,visual design,gender,human-subjects studies",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 80,
                "i": [
                    80
                ]
            }
        },
        {
            "name": "Jason Cheung",
            "value": 0,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 490,
            "x": 114.71415994687564,
            "y": 189.44830827347772,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning",
                "DOI": "10.1109/tvcg.2023.3327192",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327192",
                "FirstPage": 327,
                "LastPage": 337,
                "PaperType": "J",
                "Abstract": "Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.",
                "AuthorNamesDeduped": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorNames": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorAffiliation": "University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;Global Compliance, TD Bank, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA",
                "InternalReferences": "10.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467732;10.1109/vast47406.2019.8986948;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209484;10.1109/tvcg.2021.3114805;10.1109/tvcg.2022.3209377;10.1109/tvcg.2018.2864884;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591;10.1109/tvcg.2020.3030434;10.1109/tvcg.2022.3209383;10.1109/tvcg.2014.2346320;10.1109/tvcg.2020.3030471;10.1109/tvcg.2019.2934619;10.1109/tvcg.2021.3114850;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "machine learning,fairness,bias,trust,visual design,gender,human-subjects studies",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 80,
                "i": [
                    80
                ]
            }
        },
        {
            "name": "Marie Shvakel",
            "value": 0,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 491,
            "x": -212.77370854988226,
            "y": -62.26836235143621,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning",
                "DOI": "10.1109/tvcg.2023.3327192",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327192",
                "FirstPage": 327,
                "LastPage": 337,
                "PaperType": "J",
                "Abstract": "Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.",
                "AuthorNamesDeduped": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorNames": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorAffiliation": "University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;Global Compliance, TD Bank, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA",
                "InternalReferences": "10.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467732;10.1109/vast47406.2019.8986948;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209484;10.1109/tvcg.2021.3114805;10.1109/tvcg.2022.3209377;10.1109/tvcg.2018.2864884;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591;10.1109/tvcg.2020.3030434;10.1109/tvcg.2022.3209383;10.1109/tvcg.2014.2346320;10.1109/tvcg.2020.3030471;10.1109/tvcg.2019.2934619;10.1109/tvcg.2021.3114850;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "machine learning,fairness,bias,trust,visual design,gender,human-subjects studies",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 80,
                "i": [
                    80
                ]
            }
        },
        {
            "name": "Kyle Wm. Hall",
            "value": 70,
            "numPapers": 38,
            "cluster": "5",
            "visible": 1,
            "index": 492,
            "x": 199.15667742477723,
            "y": -97.91127533192099,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations",
                "DOI": "10.1109/tvcg.2019.2934790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934790",
                "FirstPage": 109,
                "LastPage": 118,
                "PaperType": "J",
                "Abstract": "While previous work exists on how to conduct and disseminate insights from problem-driven visualization projects and design studies, the literature does not address how to accomplish these goals in transdisciplinary teams in ways that advance all disciplines involved. In this paper we introduce and define a new methodological paradigm we call design by immersion, which provides an alternative perspective on problem-driven visualization work. Design by immersion embeds transdisciplinary experiences at the center of the visualization process by having visualization researchers participate in the work of the target domain (or domain experts participate in visualization research). Based on our own combined experiences of working on cross-disciplinary, problem-driven visualization projects, we present six case studies that expose the opportunities that design by immersion enables, including (1) exploring new domain-inspired visualization design spaces, (2) enriching domain understanding through personal experiences, and (3) building strong transdisciplinary relationships. Furthermore, we illustrate how the process of design by immersion opens up a diverse set of design activities that can be combined in different ways depending on the type of collaboration, project, and goals. Finally, we discuss the challenges and potential pitfalls of design by immersion.",
                "AuthorNamesDeduped": "Kyle Wm. Hall;Adam James Bradley;Uta Hinrichs;Samuel Huron;Jo Wood;Christopher Collins 0001;Sheelagh Carpendale",
                "AuthorNames": "Kyle Wm. Hall;Adam J. Bradley;Uta Hinrichs;Samuel Huron;Jo Wood;Christopher Collins;Sheelagh Carpendale",
                "AuthorAffiliation": "Temple University, Philadelphia, USA;Ontario Tech University, Oshawa, Canada;University of St Andrews, Fife, United Kingdom;Télécom Paristech, Université Paris-Saclay, Paris, France;University of London, London, United Kingdom;Ontario Tech University, Oshawa, Canada;University of Calgary, Calgary, Canada and Simon Fraser University, Burnaby, Canada",
                "InternalReferences": "0.1109/tvcg.2009.122;10.1109/tvcg.2006.160;10.1109/tvcg.2015.2467452;10.1109/tvcg.2018.2865241;10.1109/tvcg.2014.2346325;10.1109/tvcg.2011.209;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467271;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2346323",
                "AuthorKeywords": "Visualization,problem-driven,design studies,collaboration,methodology,framework",
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1323,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 534,
                "i": [
                    534
                ]
            }
        },
        {
            "name": "Yuriy Brun",
            "value": 0,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 493,
            "x": -80.79572101986605,
            "y": 206.93489668221738,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning",
                "DOI": "10.1109/tvcg.2023.3327192",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327192",
                "FirstPage": 327,
                "LastPage": 337,
                "PaperType": "J",
                "Abstract": "Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.",
                "AuthorNamesDeduped": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorNames": "Aimen Gaba;Zhanna Kaufman;Jason Cheung;Marie Shvakel;Kyle Wm. Hall;Yuriy Brun;Cindy Xiong Bearfield",
                "AuthorAffiliation": "University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA;Global Compliance, TD Bank, USA;University of Massachusetts, Amherst, USA;University of Massachusetts, Amherst, USA",
                "InternalReferences": "10.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467732;10.1109/vast47406.2019.8986948;10.1109/tvcg.2022.3209456;10.1109/tvcg.2022.3209484;10.1109/tvcg.2021.3114805;10.1109/tvcg.2022.3209377;10.1109/tvcg.2018.2864884;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591;10.1109/tvcg.2020.3030434;10.1109/tvcg.2022.3209383;10.1109/tvcg.2014.2346320;10.1109/tvcg.2020.3030471;10.1109/tvcg.2019.2934619;10.1109/tvcg.2021.3114850;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209465",
                "AuthorKeywords": "machine learning,fairness,bias,trust,visual design,gender,human-subjects studies",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 80,
                "i": [
                    80
                ]
            }
        },
        {
            "name": "Johanna Schmidt",
            "value": 74,
            "numPapers": 39,
            "cluster": "6",
            "visible": 1,
            "index": 494,
            "x": -80.28748564496982,
            "y": -207.37386443042615,
            "vy": 0,
            "vx": 0,
            "r": 1.0852043753598157,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Data Type Agnostic Visual Sensitivity Analysis",
                "DOI": "10.1109/tvcg.2023.3327203",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327203",
                "FirstPage": 1106,
                "LastPage": 1116,
                "PaperType": "J",
                "Abstract": "Modern science and industry rely on computational models for simulation, prediction, and data analysis. Spatial blind source separation (SBSS) is a model used to analyze spatial data. Designed explicitly for spatial data analysis, it is superior to popular non-spatial methods, like PCA. However, a challenge to its practical use is setting two complex tuning parameters, which requires parameter space analysis. In this paper, we focus on sensitivity analysis (SA). SBSS parameters and outputs are spatial data, which makes SA difficult as few SA approaches in the literature assume such complex data on both sides of the model. Based on the requirements in our design study with statistics experts, we developed a visual analytics prototype for data type agnostic visual sensitivity analysis that fits SBSS and other contexts. The main advantage of our approach is that it requires only dissimilarity measures for parameter settings and outputs (Fig. 1). We evaluated the prototype heuristically with visualization experts and through interviews with two SBSS experts. In addition, we show the transferability of our approach by applying it to microclimate simulations. Study participants could confirm suspected and known parameter-output relations, find surprising associations, and identify parameter subspaces to examine in the future. During our design study and evaluation, we identified challenging future research opportunities.",
                "AuthorNamesDeduped": "Nikolaus Piccolotto;Markus Bögl;Christoph Muehlmann;Klaus Nordhausen;Peter Filzmoser;Johanna Schmidt;Silvia Miksch",
                "AuthorNames": "Nikolaus Piccolotto;Markus Bögl;Christoph Muehlmann;Klaus Nordhausen;Peter Filzmoser;Johanna Schmidt;Silvia Miksch",
                "AuthorAffiliation": "TU Wien, Austria;TU Wien, Austria;TU Wien, Austria;University of Jyväskylä, Finland;TU Wien, Austria;VRVis GmbH, Austria;TU Wien, Austria",
                "InternalReferences": "10.1109/tvcg.2014.2346626;10.1109/tvcg.2010.190;10.1109/tvcg.2011.188;10.1109/tvcg.2018.2864477;10.1109/tvcg.2016.2598468;10.1109/vast.2011.6102450;10.1109/tvcg.2019.2934591;10.1109/tvcg.2019.2934312;10.1109/visual.2000.885678;10.1109/tvcg.2021.3114833;10.1109/tvcg.2020.3030420;10.1109/tvcg.2017.2745085;10.1109/tvcg.2018.2865051;10.1109/tvcg.2016.2598589;10.1109/tvcg.2014.2346321;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865146;10.1109/tvcg.2016.2598830;10.1109/vast.2016.7883516;10.1109/tvcg.2007.70589;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Visual analytics,parameter space analysis,sensitivity analysis,spatial blind source separation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 280,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 81,
                "i": [
                    81
                ]
            }
        },
        {
            "name": "Markus Bögl",
            "value": 31,
            "numPapers": 31,
            "cluster": "6",
            "visible": 1,
            "index": 495,
            "x": 199.4819211466756,
            "y": 98.77734120551888,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Visual Encodings of Temporal Uncertainty: A Comparative User Study",
                "DOI": "10.1109/tvcg.2015.2467752",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467752",
                "FirstPage": 539,
                "LastPage": 548,
                "PaperType": "J",
                "Abstract": "A number of studies have investigated different ways of visualizing uncertainty. However, in the temporal dimension, it is still an open question how to best represent uncertainty, since the special characteristics of time require special visual encodings and may provoke different interpretations. Thus, we have conducted a comprehensive study comparing alternative visual encodings of intervals with uncertain start and end times: gradient plots, violin plots, accumulated probability plots, error bars, centered error bars, and ambiguation. Our results reveal significant differences in error rates and completion time for these different visualization types and different tasks. We recommend using ambiguation - using a lighter color value to represent uncertain regions - or error bars for judging durations and temporal bounds, and gradient plots - using fading color or transparency - for judging probability values.",
                "AuthorNamesDeduped": "Theresia Gschwandtner;Markus Bögl;Paolo Federico 0001;Silvia Miksch",
                "AuthorNames": "Theresia Gschwandtnei;Markus Bögl;Paolo Federico;Silvia Miksch",
                "AuthorAffiliation": "Vienna University of Technology;Vienna University of Technology;Vienna University of Technology;Vienna University of Technology",
                "InternalReferences": "0.1109/tvcg.2014.2346298;10.1109/tvcg.2012.279;10.1109/infvis.2002.1173145;10.1109/tvcg.2009.114",
                "AuthorKeywords": "Uncertainty, temporal intervals, visualization",
                "AminerCitationCount": 72,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1275,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1018,
                "i": [
                    1018
                ]
            }
        },
        {
            "name": "Peter Filzmoser",
            "value": 79,
            "numPapers": 32,
            "cluster": "6",
            "visible": 1,
            "index": 496,
            "x": -214.03054362777604,
            "y": 61.97520790121367,
            "vy": 0,
            "vx": 0,
            "r": 1.0909614277489925,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Data Type Agnostic Visual Sensitivity Analysis",
                "DOI": "10.1109/tvcg.2023.3327203",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327203",
                "FirstPage": 1106,
                "LastPage": 1116,
                "PaperType": "J",
                "Abstract": "Modern science and industry rely on computational models for simulation, prediction, and data analysis. Spatial blind source separation (SBSS) is a model used to analyze spatial data. Designed explicitly for spatial data analysis, it is superior to popular non-spatial methods, like PCA. However, a challenge to its practical use is setting two complex tuning parameters, which requires parameter space analysis. In this paper, we focus on sensitivity analysis (SA). SBSS parameters and outputs are spatial data, which makes SA difficult as few SA approaches in the literature assume such complex data on both sides of the model. Based on the requirements in our design study with statistics experts, we developed a visual analytics prototype for data type agnostic visual sensitivity analysis that fits SBSS and other contexts. The main advantage of our approach is that it requires only dissimilarity measures for parameter settings and outputs (Fig. 1). We evaluated the prototype heuristically with visualization experts and through interviews with two SBSS experts. In addition, we show the transferability of our approach by applying it to microclimate simulations. Study participants could confirm suspected and known parameter-output relations, find surprising associations, and identify parameter subspaces to examine in the future. During our design study and evaluation, we identified challenging future research opportunities.",
                "AuthorNamesDeduped": "Nikolaus Piccolotto;Markus Bögl;Christoph Muehlmann;Klaus Nordhausen;Peter Filzmoser;Johanna Schmidt;Silvia Miksch",
                "AuthorNames": "Nikolaus Piccolotto;Markus Bögl;Christoph Muehlmann;Klaus Nordhausen;Peter Filzmoser;Johanna Schmidt;Silvia Miksch",
                "AuthorAffiliation": "TU Wien, Austria;TU Wien, Austria;TU Wien, Austria;University of Jyväskylä, Finland;TU Wien, Austria;VRVis GmbH, Austria;TU Wien, Austria",
                "InternalReferences": "10.1109/tvcg.2014.2346626;10.1109/tvcg.2010.190;10.1109/tvcg.2011.188;10.1109/tvcg.2018.2864477;10.1109/tvcg.2016.2598468;10.1109/vast.2011.6102450;10.1109/tvcg.2019.2934591;10.1109/tvcg.2019.2934312;10.1109/visual.2000.885678;10.1109/tvcg.2021.3114833;10.1109/tvcg.2020.3030420;10.1109/tvcg.2017.2745085;10.1109/tvcg.2018.2865051;10.1109/tvcg.2016.2598589;10.1109/tvcg.2014.2346321;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865146;10.1109/tvcg.2016.2598830;10.1109/vast.2016.7883516;10.1109/tvcg.2007.70589;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Visual analytics,parameter space analysis,sensitivity analysis,spatial blind source separation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 280,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 81,
                "i": [
                    81
                ]
            }
        },
        {
            "name": "Hanghang Tong",
            "value": 52,
            "numPapers": 28,
            "cluster": "6",
            "visible": 1,
            "index": 497,
            "x": 116.07252489110392,
            "y": -190.46566348138464,
            "vy": 0,
            "vx": 0,
            "r": 1.059873344847438,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "FairRankVis: A Visual Analytics Framework for Exploring Algorithmic Fairness in Graph Mining Models",
                "DOI": "10.1109/tvcg.2021.3114850",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114850",
                "FirstPage": 368,
                "LastPage": 377,
                "PaperType": "J",
                "Abstract": "Graph mining is an essential component of recommender systems and search engines. Outputs of graph mining models typically provide a ranked list sorted by each item's relevance or utility. However, recent research has identified issues of algorithmic bias in such models, and new graph mining algorithms have been proposed to correct for bias. As such, algorithm developers need tools that can help them uncover potential biases in their models while also exploring the impacts of correcting for biases when employing fairness-aware algorithms. In this paper, we present FairRankVis, a visual analytics framework designed to enable the exploration of multi-class bias in graph mining algorithms. We support both group and individual fairness levels of comparison. Our framework is designed to enable model developers to compare multi-class fairness between algorithms (for example, comparing PageRank with a debiased PageRank algorithm) to assess the impacts of algorithmic debiasing with respect to group and individual fairness. We demonstrate our framework through two usage scenarios inspecting algorithmic fairness.",
                "AuthorNamesDeduped": "Tiankai Xie;Yuxin Ma;Jian Kang;Hanghang Tong;Ross Maciejewski",
                "AuthorNames": "Tiankai Xie;Yuxin Ma;Jian Kang;Hanghang Tong;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University, United States;Southern University of Science and Technology, China;University of Illinois at Urbana-Champaign, United States;University of Illinois at Urbana-Champaign, United States;Arizona State University, United States",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2011.185;10.1109/vast47406.2019.8986948;10.1109/tvcg.2020.3030471;10.1109/tvcg.2020.3028958",
                "AuthorKeywords": "Graph ranking,fairness,visual analytics",
                "AminerCitationCount": 4,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 1154,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 297,
                "i": [
                    297
                ]
            }
        },
        {
            "name": "Qing Chen 0001",
            "value": 49,
            "numPapers": 54,
            "cluster": "5",
            "visible": 1,
            "index": 498,
            "x": 43.1127037561878,
            "y": 219.06915523375537,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VizLinter: A Linter and Fixer Framework for Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114804",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114804",
                "FirstPage": 206,
                "LastPage": 216,
                "PaperType": "J",
                "Abstract": "Despite the rising popularity of automated visualization tools, existing systems tend to provide direct results which do not always fit the input data or meet visualization requirements. Therefore, additional specification adjustments are still required in real-world use cases. However, manual adjustments are difficult since most users do not necessarily possess adequate skills or visualization knowledge. Even experienced users might create imperfect visualizations that involve chart construction errors. We present a framework, VizLinter, to help users detect flaws and rectify already-built but defective visualizations. The framework consists of two components, (1) a visualization linter, which applies well-recognized principles to inspect the legitimacy of rendered visualizations, and (2) a visualization fixer, which automatically corrects the detected violations according to the linter. We implement the framework into an online editor prototype based on Vega-Lite specifications. To further evaluate the system, we conduct an in-lab user study. The results prove its effectiveness and efficiency in identifying and fixing errors for data visualizations.",
                "AuthorNamesDeduped": "Qing Chen 0001;Fuling Sun;Xinyue Xu;Zui Chen;Jiazhe Wang;Nan Cao 0001",
                "AuthorNames": "Qing Chen;Fuling Sun;Xinyue Xu;Zui Chen;Jiazhe Wang;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Ant Group, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2006.138;10.1109/tvcg.2006.163;10.1109/tvcg.2013.126;10.1109/tvcg.2012.219;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745140;10.1109/infvis.2000.885086;10.1109/tvcg.2020.3030467;10.1109/vast.2009.5332628;10.1109/infvis.2003.1249018;10.1109/tvcg.2018.2864912;10.1109/tvcg.2017.2745919;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Visualization Linting,Automated Visualization Design,Visualization Optimization",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 64,
                "DownloadsXplore": 1653,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 271,
                "i": [
                    271
                ]
            }
        },
        {
            "name": "Qianwen Wang",
            "value": 218,
            "numPapers": 117,
            "cluster": "1",
            "visible": 1,
            "index": 499,
            "x": -179.949273780494,
            "y": -132.54530872827155,
            "vy": 0,
            "vx": 0,
            "r": 1.251007484168106,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Gosling: A Grammar-based Toolkit for Scalable and Interactive Genomics Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114876",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114876",
                "FirstPage": 140,
                "LastPage": 150,
                "PaperType": "J",
                "Abstract": "The combination of diverse data types and analysis tasks in genomics has resulted in the development of a wide range of visualization techniques and tools. However, most existing tools are tailored to a specific problem or data type and offer limited customization, making it challenging to optimize visualizations for new analysis tasks or datasets. To address this challenge, we designed Gosling-a grammar for interactive and scalable genomics data visualization. Gosling balances expressiveness for comprehensive multi-scale genomics data visualizations with accessibility for domain scientists. Our accompanying JavaScript toolkit called Gosling.js provides scalable and interactive rendering. Gosling.js is built on top of an existing platform for web-based genomics data visualization to further simplify the visualization of common genomics data formats. We demonstrate the expressiveness of the grammar through a variety of real-world examples. Furthermore, we show how Gosling supports the design of novel genomics visualizations. An online editor and examples of Gosling.js, its source code, and documentation are available at <uri>https://gosling.js.org</uri>.",
                "AuthorNamesDeduped": "Sehi L'Yi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorNames": "Sehi LYi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard Medical School, Boston, MA, USA;Harvard Medical School, Boston, MA, USA;Harvard School of Engineering and Applied Sciences, Boston, MA, USA;Harvard Medical School, Boston, MA, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2013.214;10.1109/tvcg.2018.2865141;10.1109/tvcg.2017.2745978;10.1109/tvcg.2013.179;10.1109/tvcg.2009.167;10.1109/tvcg.2010.163;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2016.2598796;10.1109/tvcg.2020.3030372;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934555",
                "AuthorKeywords": "Genomics,declarative specification,visualization grammar",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 90,
                "DownloadsXplore": 1426,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 266,
                "i": [
                    266
                ]
            }
        },
        {
            "name": "Luke S. Snyder",
            "value": 0,
            "numPapers": 23,
            "cluster": "5",
            "visible": 1,
            "index": 500,
            "x": 222.44439700031666,
            "y": -23.843033430449132,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "DIVI: Dynamically Interactive Visualization",
                "DOI": "10.1109/tvcg.2023.3327172",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327172",
                "FirstPage": 403,
                "LastPage": 413,
                "PaperType": "J",
                "Abstract": "Dynamically Interactive Visualization (DIVI) is a novel approach for orchestrating interactions within and across static visualizations. DIVI deconstructs Scalable Vector Graphics charts at runtime to infer content and coordinate user input, decoupling interaction from specification logic. This decoupling allows interactions to extend and compose freely across different tools, chart types, and analysis goals. DIVI exploits positional relations of marks to detect chart components such as axes and legends, reconstruct scales and view encodings, and infer data fields. DIVI then enumerates candidate transformations across inferred data to perform linking between views. To support dynamic interaction without prior specification, we introduce a taxonomy that formalizes the space of standard interactions by chart element, interaction type, and input event. We demonstrate DIVI's usefulness for rapid data exploration and analysis through a usability study with 13 participants and a diverse gallery of dynamically interactive visualizations, including single chart, multi-view, and cross-tool configurations.",
                "AuthorNamesDeduped": "Luke S. Snyder;Jeffrey Heer",
                "AuthorNames": "Luke S. Snyder;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington, USA;University of Washington, USA",
                "InternalReferences": "10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2012.229;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2598839;10.1109/tvcg.2016.2599030;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Interaction,Visualization Tools,Charts,SVG,Exploratory Data Analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 269,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 84,
                "i": [
                    84
                ]
            }
        },
        {
            "name": "Maneesh Agrawala",
            "value": 641,
            "numPapers": 42,
            "cluster": "5",
            "visible": 1,
            "index": 501,
            "x": -148.06553411304887,
            "y": 168.0077307977748,
            "vy": 0,
            "vx": 0,
            "r": 1.7380541162924583,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Graphical Overlays: Using Layered Elements to Aid Chart Reading",
                "DOI": "10.1109/tvcg.2012.229",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.229",
                "FirstPage": 2631,
                "LastPage": 2638,
                "PaperType": "J",
                "Abstract": "Reading a visualization can involve a number of tasks such as extracting, comparing or aggregating numerical values. Yet, most of the charts that are published in newspapers, reports, books, and on the Web only support a subset of these tasks. In this paper we introduce graphical overlays-visual elements that are layered onto charts to facilitate a larger set of chart reading tasks. These overlays directly support the lower-level perceptual and cognitive processes that viewers must perform to read a chart. We identify five main types of overlays that support these processes; the overlays can provide (1) reference structures such as gridlines, (2) highlights such as outlines around important marks, (3) redundant encodings such as numerical data labels, (4) summary statistics such as the mean or max and (5) annotations such as descriptive text for context. We then present an automated system that applies user-chosen graphical overlays to existing chart bitmaps. Our approach is based on the insight that generating most of these graphical overlays only requires knowing the properties of the visual marks and axes that encode the data, but does not require access to the underlying data values. Thus, our system analyzes the chart bitmap to extract only the properties necessary to generate the desired overlay. We also discuss techniques for generating interactive overlays that provide additional controls to viewers. We demonstrate several examples of each overlay type for bar, pie and line charts.",
                "AuthorNamesDeduped": "Nicholas Kong;Maneesh Agrawala",
                "AuthorNames": "Nicholas Kong;Maneesh Agrawala",
                "AuthorAffiliation": "Computer Science Division, University of California, Berkeley, USA;Computer Science Division, University of California, Berkeley, USA",
                "InternalReferences": "0.1109/tvcg.2011.242;10.1109/visual.1991.175820;10.1109/tvcg.2009.122;10.1109/tvcg.2011.183",
                "AuthorKeywords": "Visualization, overlays, graphical perception, graph comprehension",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 1829,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1416,
                "i": [
                    1416
                ]
            }
        },
        {
            "name": "Takanori Fujiwara",
            "value": 127,
            "numPapers": 82,
            "cluster": "4",
            "visible": 1,
            "index": 502,
            "x": -4.312968391838015,
            "y": -224.12362281484522,
            "vy": 0,
            "vx": 0,
            "r": 1.1462291306850891,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "A Visual Analytics Framework for Reviewing Multivariate Time-Series Data with Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2020.3028889",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028889",
                "FirstPage": 1601,
                "LastPage": 1611,
                "PaperType": "J",
                "Abstract": "Data-driven problem solving in many real-world applications involves analysis of time-dependent multivariate data, for which dimensionality reduction (DR) methods are often used to uncover the intrinsic structure and features of the data. However, DR is usually applied to a subset of data that is either single-time-point multivariate or univariate time-series, resulting in the need to manually examine and correlate the DR results out of different data subsets. When the number of dimensions is large either in terms of the number of time points or attributes, this manual task becomes too tedious and infeasible. In this paper, we present MulTiDR, a new DR framework that enables processing of time-dependent multivariate data as a whole to provide a comprehensive overview of the data. With the framework, we employ DR in two steps. When treating the instances, time points, and attributes of the data as a 3D array, the first DR step reduces the three axes of the array to two, and the second DR step visualizes the data in a lower-dimensional space. In addition, by coupling with a contrastive learning method and interactive visualizations, our framework enhances analysts' ability to interpret DR results. We demonstrate the effectiveness of our framework with four case studies using real-world datasets.",
                "AuthorNamesDeduped": "Takanori Fujiwara;Shilpika;Naohisa Sakamoto;Jorji Nonaka;Keiji Yamamoto;Kwan-Liu Ma",
                "AuthorNames": "Takanori Fujiwara;Shilpika;Naohisa Sakamoto;Jorji Nonaka;Keiji Yamamoto;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis;Kobe University;RIKEN R-CCS;RIKEN R-CCS;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2744419;10.1109/tvcg.2019.2934433;10.1109/tvcg.2019.2934251;10.1109/tvcg.2015.2467553;10.1109/tvcg.2018.2865018;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598470;10.1109/tvcg.2015.2468078;10.1109/tvcg.2015.2468111;10.1109/tvcg.2016.2598664",
                "AuthorKeywords": "Multivariate time-series,tensor,data cube,dimensionality reduction,interpretability,visual analytics",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1749,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 471,
                "i": [
                    471
                ]
            }
        },
        {
            "name": "Shilpika",
            "value": 81,
            "numPapers": 30,
            "cluster": "4",
            "visible": 1,
            "index": 503,
            "x": 154.7273087046004,
            "y": 162.50987644149905,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "A Visual Analytics Framework for Reviewing Multivariate Time-Series Data with Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2020.3028889",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028889",
                "FirstPage": 1601,
                "LastPage": 1611,
                "PaperType": "J",
                "Abstract": "Data-driven problem solving in many real-world applications involves analysis of time-dependent multivariate data, for which dimensionality reduction (DR) methods are often used to uncover the intrinsic structure and features of the data. However, DR is usually applied to a subset of data that is either single-time-point multivariate or univariate time-series, resulting in the need to manually examine and correlate the DR results out of different data subsets. When the number of dimensions is large either in terms of the number of time points or attributes, this manual task becomes too tedious and infeasible. In this paper, we present MulTiDR, a new DR framework that enables processing of time-dependent multivariate data as a whole to provide a comprehensive overview of the data. With the framework, we employ DR in two steps. When treating the instances, time points, and attributes of the data as a 3D array, the first DR step reduces the three axes of the array to two, and the second DR step visualizes the data in a lower-dimensional space. In addition, by coupling with a contrastive learning method and interactive visualizations, our framework enhances analysts' ability to interpret DR results. We demonstrate the effectiveness of our framework with four case studies using real-world datasets.",
                "AuthorNamesDeduped": "Takanori Fujiwara;Shilpika;Naohisa Sakamoto;Jorji Nonaka;Keiji Yamamoto;Kwan-Liu Ma",
                "AuthorNames": "Takanori Fujiwara;Shilpika;Naohisa Sakamoto;Jorji Nonaka;Keiji Yamamoto;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis;Kobe University;RIKEN R-CCS;RIKEN R-CCS;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2744419;10.1109/tvcg.2019.2934433;10.1109/tvcg.2019.2934251;10.1109/tvcg.2015.2467553;10.1109/tvcg.2018.2865018;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598470;10.1109/tvcg.2015.2468078;10.1109/tvcg.2015.2468111;10.1109/tvcg.2016.2598664",
                "AuthorKeywords": "Multivariate time-series,tensor,data cube,dimensionality reduction,interpretability,visual analytics",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1749,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 471,
                "i": [
                    471
                ]
            }
        },
        {
            "name": "Anders Ynnerman",
            "value": 236,
            "numPapers": 69,
            "cluster": "6",
            "visible": 1,
            "index": 504,
            "x": -224.0871447980112,
            "y": -15.32812892283891,
            "vy": 0,
            "vx": 0,
            "r": 1.2717328727691422,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Intuitive Exploration of Volumetric Data Using Dynamic Galleries",
                "DOI": "10.1109/tvcg.2015.2467294",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467294",
                "FirstPage": 896,
                "LastPage": 905,
                "PaperType": "J",
                "Abstract": "In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.",
                "AuthorNamesDeduped": "Daniel Jönsson;Martin Falk;Anders Ynnerman",
                "AuthorNames": "Daniel Jönsson;Martin Falk;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University, Sweden;Linköping University, Sweden;Linköping University, Sweden",
                "InternalReferences": "0.1109/tvcg.2008.162;10.1109/tvcg.2011.261;10.1109/visual.1996.568113;10.1109/tvcg.2012.231;10.1109/tvcg.2010.195;10.1109/tvcg.2011.224;10.1109/tvcg.2006.148;10.1109/tvcg.2011.218",
                "AuthorKeywords": "Transfer function, scalar fields, volume rendering, touch interaction, visualization, user interfaces",
                "AminerCitationCount": 25,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 665,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1062,
                "i": [
                    1062
                ]
            }
        },
        {
            "name": "Stephen G. Kobourov",
            "value": 45,
            "numPapers": 33,
            "cluster": "2",
            "visible": 1,
            "index": 505,
            "x": 175.7628254028915,
            "y": -140.2049542861902,
            "vy": 0,
            "vx": 0,
            "r": 1.0518134715025906,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "MetroSets: Visualizing Sets as Metro Maps",
                "DOI": "10.1109/tvcg.2020.3030475",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030475",
                "FirstPage": 1257,
                "LastPage": 1267,
                "PaperType": "J",
                "Abstract": "We propose MetroSets, a new, flexible online tool for visualizing set systems using the metro map metaphor. We model a given set system as a hypergraph $\\mathcal{H}=(V,\\ \\mathcal{S})$, consisting of a set $V$ of vertices and a set $\\mathcal{S}$, which contains subsets of $V$ called hyperedges. Our system then computes a metro map representation of $\\mathcal{H}$, where each hyperedge $E$ in $\\mathcal{S}$ corresponds to a metro line and each vertex corresponds to a metro station. Vertices that appear in two or more hyperedges are drawn as interchanges in the metro map, connecting the different sets. MetroSets is based on a modular 4-step pipeline which constructs and optimizes a path-based hypergraph support, which is then drawn and schematized using metro map layout algorithms. We propose and implement multiple algorithms for each step of the MetroSet pipeline and provide a functional prototype with easy-to-use preset configurations. Furthermore, using several real-world datasets, we perform an extensive quantitative evaluation of the impact of different pipeline stages on desirable properties of the generated maps, such as octolinearity, monotonicity, and edge uniformity.",
                "AuthorNamesDeduped": "Ben Jacobsen;Markus Wallinger;Stephen G. Kobourov;Martin Nöllenburg",
                "AuthorNames": "Ben Jacobsen;Markus Wallinger;Stephen Kobourov;Martin Nöllenburg",
                "AuthorAffiliation": "University of Arizona;TU Wien;TU Wien;University of Arizona",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/tvcg.2011.186;10.1109/tvcg.2013.184;10.1109/tvcg.2009.122;10.1109/tvcg.2012.252;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346248;10.1109/tvcg.2013.196;10.1109/tvcg.2014.2346249;10.1109/tvcg.2014.2346422;10.1109/tvcg.2015.2467992;10.1109/vast.2007.4389006;10.1109/tvcg.2012.212;10.1109/tvcg.2011.205",
                "AuthorKeywords": "Set visualization,metro map metaphor,hypergraphs",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 382,
                "i": [
                    382
                ]
            }
        },
        {
            "name": "Falk Schreiber",
            "value": 35,
            "numPapers": 15,
            "cluster": "2",
            "visible": 1,
            "index": 506,
            "x": -34.929449777924056,
            "y": 222.3284361889218,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "2D, 2.5D, or 3D? An Exploratory Study on Multilayer Network Visualisations in Virtual Reality",
                "DOI": "10.1109/tvcg.2023.3327402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327402",
                "FirstPage": 469,
                "LastPage": 479,
                "PaperType": "J",
                "Abstract": "Relational information between different types of entities is often modelled by a multilayer network (MLN) – a network with subnetworks represented by layers. The layers of an MLN can be arranged in different ways in a visual representation, however, the impact of the arrangement on the readability of the network is an open question. Therefore, we studied this impact for several commonly occurring tasks related to MLN analysis. Additionally, layer arrangements with a dimensionality beyond 2D, which are common in this scenario, motivate the use of stereoscopic displays. We ran a human subject study utilising a Virtual Reality headset to evaluate 2D, 2.5D, and 3D layer arrangements. The study employs six analysis tasks that cover the spectrum of an MLN task taxonomy, from path finding and pattern identification to comparisons between and across layers. We found no clear overall winner. However, we explore the task-to-arrangement space and derive empirical-based recommendations on the effective use of 2D, 2.5D, and 3D layer arrangements for MLNs.",
                "AuthorNamesDeduped": "Stefan P. Feyer;Bruno Pinaud;Stephen G. Kobourov;Nicolas Brich;Michael Krone;Andreas Kerren;Michael Behrisch 0001;Falk Schreiber;Karsten Klein 0001",
                "AuthorNames": "Stefan P. Feyer;Bruno Pinaud;Stephen Kobourov;Nicolas Brich;Michael Krone;Andreas Kerren;Michael Behrisch;Falk Schreiber;Karsten Klein",
                "AuthorAffiliation": "Life Science Informatics, University of Konstanz, Germany;Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, France;University of Arizona, USA;University of Tübingen, Germany;University of Tübingen, Germany;Linköping University, Sweden;Utrecht University, NL;University of Konstanz, Germany;Life Science Informatics, University of Konstanz, Germany",
                "InternalReferences": "10.1109/infvis.2005.1532136;10.1109/tvcg.2016.2599107;10.1109/tvcg.2020.3030371;10.1109/tvcg.2021.3114863;10.1109/tvcg.2014.2346441;10.1109/tvcg.2020.3030427;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Network,Guidelines,VisDesign,HumanQuant,CompSystems",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 259,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 86,
                "i": [
                    86
                ]
            }
        },
        {
            "name": "Karsten Klein 0001",
            "value": 86,
            "numPapers": 18,
            "cluster": "2",
            "visible": 1,
            "index": 507,
            "x": -124.54757907018367,
            "y": -187.7176085181045,
            "vy": 0,
            "vx": 0,
            "r": 1.0990213010938399,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Immersive Collaborative Analysis of Network Connectivity: CAVE-style or Head-Mounted Display?",
                "DOI": "10.1109/tvcg.2016.2599107",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599107",
                "FirstPage": 441,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "High-quality immersive display technologies are becoming mainstream with the release of head-mounted displays (HMDs) such as the Oculus Rift. These devices potentially represent an affordable alternative to the more traditional, centralised CAVE-style immersive environments. One driver for the development of CAVE-style immersive environments has been collaborative sense-making. Despite this, there has been little research on the effectiveness of collaborative visualisation in CAVE-style facilities, especially with respect to abstract data visualisation tasks. Indeed, very few studies have focused on the use of these displays to explore and analyse abstract data such as networks and there have been no formal user studies investigating collaborative visualisation of abstract data in immersive environments. In this paper we present the results of the first such study. It explores the relative merits of HMD and CAVE-style immersive environments for collaborative analysis of network connectivity, a common and important task involving abstract data. We find significant differences between the two conditions in task completion time and the physical movements of the participants within the space: participants using the HMD were faster while the CAVE2 condition introduced an asymmetry in movement between collaborators. Otherwise, affordances for collaborative data analysis offered by the low-cost HMD condition were not found to be different for accuracy and communication with the CAVE2. These results are notable, given that the latest HMDs will soon be accessible (in terms of cost and potentially ubiquity) to a massive audience.",
                "AuthorNamesDeduped": "Maxime Cordeil;Tim Dwyer;Karsten Klein 0001;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorNames": "Maxime Cordeil;Tim Dwyer;Karsten Klein;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Stanford University, USA;Monash University;University of South Australia",
                "InternalReferences": "0.1109/visual.2001.964545;10.1109/tvcg.2014.2346573;10.1109/vast.2007.4389011;10.1109/tvcg.2006.156;10.1109/tvcg.2011.234;10.1109/tvcg.2016.2598446",
                "AuthorKeywords": "3D Network;Oculus Rift;CAVE;Immersive Analytics;Collaboration",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 132,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 3680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 890,
                "i": [
                    890
                ]
            }
        },
        {
            "name": "Steven L. Franconeri",
            "value": 8,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 508,
            "x": 218.85423306855935,
            "y": 54.33989941077092,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Average Estimates in Line Graphs Are Biased Toward Areas of Higher Variability",
                "DOI": "10.1109/tvcg.2023.3326589",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326589",
                "FirstPage": 306,
                "LastPage": 315,
                "PaperType": "J",
                "Abstract": "We investigate variability overweighting, a previously undocumented bias in line graphs, where estimates of average value are biased toward areas of higher variability in that line. We found this effect across two preregistered experiments with 140 and 420 participants. These experiments also show that the bias is reduced when using a dot encoding of the same series. We can model the bias with the average of the data series and the average of the points drawn along the line. This bias might arise because higher variability leads to stronger weighting in the average calculation, either due to the longer line segments (even though those segments contain the same number of data values) or line segments with higher variability being otherwise more visually salient. Understanding and predicting this bias is important for visualization design guidelines, recommendation systems, and tool builders, as the bias can adversely affect estimates of averages and trends.",
                "AuthorNamesDeduped": "Dominik Moritz;Lace M. K. Padilla;Francis Nguyen;Steven L. Franconeri",
                "AuthorNames": "Dominik Moritz;Lace M. Padilla;Francis Nguyen;Steven L. Franconeri",
                "AuthorAffiliation": "Carnegie Mellon University, USA;Northeastern University, USA;Northwestern University, USA;UBC, Canada",
                "InternalReferences": "10.1109/infvis.2005.1532136;10.1109/tvcg.2018.2865077;10.1109/tvcg.2009.131;10.1109/tvcg.2021.3114783;10.1109/tvcg.2010.162;10.1109/tvcg.2021.3114684;10.1109/tvcg.2019.2934784;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114865",
                "AuthorKeywords": "bias,lines graph,ensemble perception,average",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 256,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 87,
                "i": [
                    87
                ]
            }
        },
        {
            "name": "Jakob Troidl",
            "value": 9,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 509,
            "x": -198.2770505433344,
            "y": 107.87127155937307,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ViMO - Visual Analysis of Neuronal Connectivity Motifs",
                "DOI": "10.1109/tvcg.2023.3327388",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327388",
                "FirstPage": 748,
                "LastPage": 758,
                "PaperType": "J",
                "Abstract": "Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2–6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2–4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.",
                "AuthorNamesDeduped": "Jakob Troidl;Simon Warchol;Jinhan Choi;Jordan Matelsky;Nagaraju Dhanyasi;Xueying Wang;Brock A. Wester;Donglai Wei 0001;Jeff W. Lichtman;Hanspeter Pfister;Johanna Beyer",
                "AuthorNames": "Jakob Troidl;Simon Warchol;Jinhan Choi;Jordan Matelsky;Nagaraju Dhanyasi;Xueying Wang;Brock Wester;Donglai Wei;Jeff W. Lichtman;Hanspeter Pfister;Johanna Beyer",
                "AuthorAffiliation": "School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Department of Computer Science, Boston College, United States;Applied Physics Laboratory, Johns Hopkins University, USA;Department of Cellular & Molecular Biology, Harvard University, USA;Department of Cellular & Molecular Biology, Harvard University, USA;Applied Physics Laboratory, Johns Hopkins University, USA;Department of Computer Science, Boston College, United States;Department of Cellular & Molecular Biology, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA",
                "InternalReferences": "10.1109/tvcg.2014.2346312;10.1109/tvcg.2013.142;10.1109/tvcg.2017.2744278;10.1109/tvcg.2017.2744898;10.1109/tvcg.2012.213;10.1109/tvcg.2011.183",
                "AuthorKeywords": "Visual motif analysis,Focus&Context,Scientific visualization,Neuroscience,Connectomics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 254,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 88,
                "i": [
                    88
                ]
            }
        },
        {
            "name": "Simon Warchol",
            "value": 7,
            "numPapers": 26,
            "cluster": "6",
            "visible": 1,
            "index": 510,
            "x": 73.40926396165648,
            "y": -213.68453375152785,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "ViMO - Visual Analysis of Neuronal Connectivity Motifs",
                "DOI": "10.1109/tvcg.2023.3327388",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327388",
                "FirstPage": 748,
                "LastPage": 758,
                "PaperType": "J",
                "Abstract": "Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2–6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2–4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.",
                "AuthorNamesDeduped": "Jakob Troidl;Simon Warchol;Jinhan Choi;Jordan Matelsky;Nagaraju Dhanyasi;Xueying Wang;Brock A. Wester;Donglai Wei 0001;Jeff W. Lichtman;Hanspeter Pfister;Johanna Beyer",
                "AuthorNames": "Jakob Troidl;Simon Warchol;Jinhan Choi;Jordan Matelsky;Nagaraju Dhanyasi;Xueying Wang;Brock Wester;Donglai Wei;Jeff W. Lichtman;Hanspeter Pfister;Johanna Beyer",
                "AuthorAffiliation": "School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Department of Computer Science, Boston College, United States;Applied Physics Laboratory, Johns Hopkins University, USA;Department of Cellular & Molecular Biology, Harvard University, USA;Department of Cellular & Molecular Biology, Harvard University, USA;Applied Physics Laboratory, Johns Hopkins University, USA;Department of Computer Science, Boston College, United States;Department of Cellular & Molecular Biology, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA",
                "InternalReferences": "10.1109/tvcg.2014.2346312;10.1109/tvcg.2013.142;10.1109/tvcg.2017.2744278;10.1109/tvcg.2017.2744898;10.1109/tvcg.2012.213;10.1109/tvcg.2011.183",
                "AuthorKeywords": "Visual motif analysis,Focus&Context,Scientific visualization,Neuroscience,Connectomics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 254,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 88,
                "i": [
                    88
                ]
            }
        },
        {
            "name": "Jeff W. Lichtman",
            "value": 110,
            "numPapers": 31,
            "cluster": "6",
            "visible": 1,
            "index": 511,
            "x": 90.30043538035655,
            "y": 207.35436183046176,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects",
                "DOI": "10.1109/tvcg.2015.2467441",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467441",
                "FirstPage": 738,
                "LastPage": 746,
                "PaperType": "J",
                "Abstract": "In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.",
                "AuthorNamesDeduped": "Ali K. Al-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorNames": "Ali K. Ai-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST);School of Engineering and Applied Sciences, Harvard University;School of Engineering and Applied Sciences, Harvard University;School of Medicine, Boston University;Center for Brain Science, Harvard University;School of Engineering and Applied Sciences, Harvard University;King Abdullah University of Science and Technology (KAUST)",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/visual.2005.1532788;10.1109/tvcg.2013.142;10.1109/tvcg.2009.121;10.1109/tvcg.2012.240;10.1109/tvcg.2014.2346371;10.1109/tvcg.2013.174;10.1109/tvcg.2014.2346249;10.1109/tvcg.2007.70584",
                "AuthorKeywords": "Neuroscience, Segmentation, Proofreading, Data and Provenance Tracking",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1052,
                "i": [
                    1052
                ]
            }
        },
        {
            "name": "Narayanan Kasthuri",
            "value": 110,
            "numPapers": 25,
            "cluster": "6",
            "visible": 1,
            "index": 512,
            "x": -206.85249546706737,
            "y": -91.98937503346174,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects",
                "DOI": "10.1109/tvcg.2015.2467441",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467441",
                "FirstPage": 738,
                "LastPage": 746,
                "PaperType": "J",
                "Abstract": "In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.",
                "AuthorNamesDeduped": "Ali K. Al-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorNames": "Ali K. Ai-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST);School of Engineering and Applied Sciences, Harvard University;School of Engineering and Applied Sciences, Harvard University;School of Medicine, Boston University;Center for Brain Science, Harvard University;School of Engineering and Applied Sciences, Harvard University;King Abdullah University of Science and Technology (KAUST)",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/visual.2005.1532788;10.1109/tvcg.2013.142;10.1109/tvcg.2009.121;10.1109/tvcg.2012.240;10.1109/tvcg.2014.2346371;10.1109/tvcg.2013.174;10.1109/tvcg.2014.2346249;10.1109/tvcg.2007.70584",
                "AuthorKeywords": "Neuroscience, Segmentation, Proofreading, Data and Provenance Tracking",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1052,
                "i": [
                    1052
                ]
            }
        },
        {
            "name": "Markus Steinberger",
            "value": 114,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 513,
            "x": 214.873849262154,
            "y": -71.96685975687089,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Context-Preserving Visual Links",
                "DOI": "10.1109/tvcg.2011.183",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.183",
                "FirstPage": 2249,
                "LastPage": 2258,
                "PaperType": "J",
                "Abstract": "Evaluating, comparing, and interpreting related pieces of information are tasks that are commonly performed during visual data analysis and in many kinds of information-intensive work. Synchronized visual highlighting of related elements is a well-known technique used to assist this task. An alternative approach, which is more invasive but also more expressive is visual linking in which line connections are rendered between related elements. In this work, we present context-preserving visual links as a new method for generating visual links. The method specifically aims to fulfill the following two goals: first, visual links should minimize the occlusion of important information; second, links should visually stand out from surrounding information by minimizing visual interference. We employ an image-based analysis of visual saliency to determine the important regions in the original representation. A consequence of the image-based approach is that our technique is application-independent and can be employed in a large number of visual data analysis scenarios in which the underlying content cannot or should not be altered. We conducted a controlled experiment that indicates that users can find linked elements in complex visualizations more quickly and with greater subjective satisfaction than in complex visualizations in which plain highlighting is used. Context-preserving visual links were perceived as visually more attractive than traditional visual links that do not account for the context information.",
                "AuthorNamesDeduped": "Markus Steinberger;Manuela Waldner;Marc Streit;Alexander Lex;Dieter Schmalstieg",
                "AuthorNames": "Markus Steinberger;Manuela Waldner;Marc Streit;Alexander Lex;Dieter Schmalstieg",
                "AuthorAffiliation": "Graz Univeristy of Technology, Austria;Graz Univeristy of Technology, Austria;Graz Univeristy of Technology, Austria;Graz Univeristy of Technology, Austria;Graz Univeristy of Technology, Austria",
                "InternalReferences": "0.1109/tvcg.2010.138;10.1109/infvis.2001.963286;10.1109/tvcg.2006.147;10.1109/tvcg.2009.122;10.1109/visual.1995.485139;10.1109/tvcg.2010.174;10.1109/tvcg.2006.166;10.1109/tvcg.2007.70521",
                "AuthorKeywords": "Visual links, highlighting, connectedness, routing, image-based, saliency",
                "AminerCitationCount": 127,
                "CitationCountCrossRef": 78,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1114,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1554,
                "i": [
                    1554
                ]
            }
        },
        {
            "name": "Manuela Waldner",
            "value": 132,
            "numPapers": 30,
            "cluster": "6",
            "visible": 1,
            "index": 514,
            "x": -109.93526283821187,
            "y": 198.40422874700346,
            "vy": 0,
            "vx": 0,
            "r": 1.151986183074266,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "sMolBoxes: Dataflow Model for Molecular Dynamics Exploration",
                "DOI": "10.1109/tvcg.2022.3209411",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209411",
                "FirstPage": 581,
                "LastPage": 590,
                "PaperType": "J",
                "Abstract": "We present sMolBoxes, a dataflow representation for the exploration and analysis of long molecular dynamics (MD) simulations. When MD simulations reach millions of snapshots, a frame-by-frame observation is not feasible anymore. Thus, biochemists rely to a large extent only on quantitative analysis of geometric and physico-chemical properties. However, the usage of abstract methods to study inherently spatial data hinders the exploration and poses a considerable workload. sMolBoxes link quantitative analysis of a user-defined set of properties with interactive 3D visualizations. They enable visual explanations of molecular behaviors, which lead to an efficient discovery of biochemically significant parts of the MD simulation. sMolBoxes follow a node-based model for flexible definition, combination, and immediate evaluation of properties to be investigated. Progressive analytics enable fluid switching between multiple properties, which facilitates hypothesis generation. Each sMolBox provides quick insight to an observed property or function, available in more detail in the bigBox View. The case studies illustrate that even with relatively few sMolBoxes, it is possible to express complex analytical tasks, and their use in exploratory analysis is perceived as more efficient than traditional scripting-based methods.",
                "AuthorNamesDeduped": "Pavol Ulbrich;Manuela Waldner;Katarína Furmanová;Sérgio M. Marques;David Bednár;Barbora Kozlíková;Jan Byska",
                "AuthorNames": "Pavol Ulbrich;Manuela Waldner;Katarína Furmanová;Sérgio M. Marques;David Bednář;Barbora Kozlíková;Jan Byška",
                "AuthorAffiliation": "Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;TU Wien, Vienna, Austria;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;Department of Experimental Biology, Loschmidt Laboratories, Faculty of Science, RECETOX, Masaryk University, Brno, Czech Republic;Department of Experimental Biology, Loschmidt Laboratories, Faculty of Science, RECETOX, Masaryk University, Brno, Czech Republic;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic",
                "InternalReferences": "0.1109/tvcg.2018.2864851;10.1109/vast.2007.4389013;10.1109/tvcg.2012.213;10.1109/tvcg.2011.225;10.1109/tvcg.2016.2598497;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Molecular dynamics,structure,node-based visualization,progressive analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 407,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 203,
                "i": [
                    203
                ]
            }
        },
        {
            "name": "Dieter Schmalstieg",
            "value": 291,
            "numPapers": 46,
            "cluster": "4",
            "visible": 1,
            "index": 515,
            "x": -53.00872926581296,
            "y": -220.77154395805576,
            "vy": 0,
            "vx": 0,
            "r": 1.3350604490500864,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Design Patterns for Situated Visualization in Augmented Reality",
                "DOI": "10.1109/tvcg.2023.3327398",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327398",
                "FirstPage": 1324,
                "LastPage": 1335,
                "PaperType": "J",
                "Abstract": "Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns which summarize common approaches of visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.",
                "AuthorNamesDeduped": "Benjamin Lee;Michael Sedlmair;Dieter Schmalstieg",
                "AuthorNames": "Benjamin Lee;Michael Sedlmair;Dieter Schmalstieg",
                "AuthorAffiliation": "University of Stuttgart, Germany;University of Stuttgart, Germany;Graz University of Technology and University of Stuttgart, Austria",
                "InternalReferences": "10.1109/tvcg.2021.3114835;10.1109/tvcg.2020.3030334;10.1109/tvcg.2020.3030450;10.1109/tvcg.2020.3030460;10.1109/tvcg.2022.3209386;10.1109/tvcg.2016.2598608;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Augmented reality,immersive analytics,situated visualization,design patterns,design space",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 124,
                "DownloadsXplore": 736,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 0,
                "i": [
                    0
                ]
            }
        },
        {
            "name": "Lars Grammel",
            "value": 85,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 516,
            "x": 188.3984907345968,
            "y": 127.10628894325428,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "How Information Visualization Novices Construct Visualizations",
                "DOI": "10.1109/tvcg.2010.164",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.164",
                "FirstPage": 943,
                "LastPage": 952,
                "PaperType": "J",
                "Abstract": "It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which information visualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.",
                "AuthorNamesDeduped": "Lars Grammel;Melanie Tory;Margaret-Anne D. Storey",
                "AuthorNames": "Lars Grammel;Melanie Tory;Margaret-Anne Storey",
                "AuthorAffiliation": "University of Victoria, Canada;University of Victoria, Canada;University of Victoria, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70515;10.1109/tvcg.2006.163;10.1109/tvcg.2007.70541;10.1109/vast.2009.5333878;10.1109/tvcg.2008.109;10.1109/vast.2006.261428;10.1109/tvcg.2007.70577;10.1109/vast.2008.4677358;10.1109/vast.2008.4677365;10.1109/tvcg.2007.70535;10.1109/infvis.2005.1532136;10.1109/infvis.1998.729560;10.1109/tvcg.2007.70594;10.1109/infvis.2000.885086;10.1109/infvis.2001.963289;10.1109/infvis.2000.885092;10.1109/tvcg.2008.137",
                "AuthorKeywords": "Empirical study, visualization, visualization construction, visual analytics, visual mapping, novices",
                "AminerCitationCount": 283,
                "CitationCountCrossRef": 171,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 3970,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1686,
                "i": [
                    1686
                ]
            }
        },
        {
            "name": "Margaret-Anne D. Storey",
            "value": 86,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 517,
            "x": -224.99574057390285,
            "y": 33.569580331023,
            "vy": 0,
            "vx": 0,
            "r": 1.0990213010938399,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "How Information Visualization Novices Construct Visualizations",
                "DOI": "10.1109/tvcg.2010.164",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.164",
                "FirstPage": 943,
                "LastPage": 952,
                "PaperType": "J",
                "Abstract": "It remains challenging for information visualization novices to rapidly construct visualizations during exploratory data analysis. We conducted an exploratory laboratory study in which information visualization novices explored fictitious sales data by communicating visualization specifications to a human mediator, who rapidly constructed the visualizations using commercial visualization software. We found that three activities were central to the iterative visualization construction process: data attribute selection, visual template selection, and visual mapping specification. The major barriers faced by the participants were translating questions into data attributes, designing visual mappings, and interpreting the visualizations. Partial specification was common, and the participants used simple heuristics and preferred visualizations they were already familiar with, such as bar, line and pie charts. We derived abstract models from our observations that describe barriers in the data exploration process and uncovered how information visualization novices think about visualization specifications. Our findings support the need for tools that suggest potential visualizations and support iterative refinement, that provide explanations and help with learning, and that are tightly integrated into tool support for the overall visual analytics process.",
                "AuthorNamesDeduped": "Lars Grammel;Melanie Tory;Margaret-Anne D. Storey",
                "AuthorNames": "Lars Grammel;Melanie Tory;Margaret-Anne Storey",
                "AuthorAffiliation": "University of Victoria, Canada;University of Victoria, Canada;University of Victoria, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70515;10.1109/tvcg.2006.163;10.1109/tvcg.2007.70541;10.1109/vast.2009.5333878;10.1109/tvcg.2008.109;10.1109/vast.2006.261428;10.1109/tvcg.2007.70577;10.1109/vast.2008.4677358;10.1109/vast.2008.4677365;10.1109/tvcg.2007.70535;10.1109/infvis.2005.1532136;10.1109/infvis.1998.729560;10.1109/tvcg.2007.70594;10.1109/infvis.2000.885086;10.1109/infvis.2001.963289;10.1109/infvis.2000.885092;10.1109/tvcg.2008.137",
                "AuthorKeywords": "Empirical study, visualization, visualization construction, visual analytics, visual mapping, novices",
                "AminerCitationCount": 283,
                "CitationCountCrossRef": 171,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 3970,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1686,
                "i": [
                    1686
                ]
            }
        },
        {
            "name": "Zachary Pousman",
            "value": 87,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 518,
            "x": 143.36724975566514,
            "y": -176.9062794179357,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Casual Information Visualization: Depictions of Data in Everyday Life",
                "DOI": "10.1109/tvcg.2007.70541",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70541",
                "FirstPage": 1145,
                "LastPage": 1152,
                "PaperType": "J",
                "Abstract": "Information visualization has often focused on providing deep insight for expert user populations and on techniques for amplifying cognition through complicated interactive visual models. This paper proposes a new subdomain for infovis research that complements the focus on analytic tasks and expert use. Instead of work-related and analytically driven infovis, we propose casual information visualization (or casual infovis) as a complement to more traditional infovis domains. Traditional infovis systems, techniques, and methods do not easily lend themselves to the broad range of user populations, from expert to novices, or from work tasks to more everyday situations. We propose definitions, perspectives, and research directions for further investigations of this emerging subfield. These perspectives build from ambient information visualization (Skog et al., 2003), social visualization, and also from artistic work that visualizes information (Viegas and Wattenberg, 2007). We seek to provide a perspective on infovis that integrates these research agendas under a coherent vocabulary and framework for design. We enumerate the following contributions. First, we demonstrate how blurry the boundary of infovis is by examining systems that exhibit many of the putative properties of infovis systems, but perhaps would not be considered so. Second, we explore the notion of insight and how, instead of a monolithic definition of insight, there may be multiple types, each with particular characteristics. Third, we discuss design challenges for systems intended for casual audiences. Finally we conclude with challenges for system evaluation in this emerging subfield.",
                "AuthorNamesDeduped": "Zachary Pousman;John T. Stasko;Michael Mateas",
                "AuthorNames": "Zachary Pousman;John Stasko;Michael Mateas",
                "AuthorAffiliation": "School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;University of California, Santa Cruz, USA",
                "InternalReferences": "0.1109/infvis.2005.1532126;10.1109/infvis.2004.8;10.1109/infvis.2003.1249031;10.1109/infvis.2004.59;10.1109/visual.1990.146375",
                "AuthorKeywords": "Casual information visualization, ambient infovis, social infovis, editorial, design, evaluation",
                "AminerCitationCount": 501,
                "CitationCountCrossRef": 256,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 4748,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2091,
                "i": [
                    2091
                ]
            }
        },
        {
            "name": "Michael Mateas",
            "value": 87,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 519,
            "x": 13.79721235072684,
            "y": 227.5074436833858,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Casual Information Visualization: Depictions of Data in Everyday Life",
                "DOI": "10.1109/tvcg.2007.70541",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70541",
                "FirstPage": 1145,
                "LastPage": 1152,
                "PaperType": "J",
                "Abstract": "Information visualization has often focused on providing deep insight for expert user populations and on techniques for amplifying cognition through complicated interactive visual models. This paper proposes a new subdomain for infovis research that complements the focus on analytic tasks and expert use. Instead of work-related and analytically driven infovis, we propose casual information visualization (or casual infovis) as a complement to more traditional infovis domains. Traditional infovis systems, techniques, and methods do not easily lend themselves to the broad range of user populations, from expert to novices, or from work tasks to more everyday situations. We propose definitions, perspectives, and research directions for further investigations of this emerging subfield. These perspectives build from ambient information visualization (Skog et al., 2003), social visualization, and also from artistic work that visualizes information (Viegas and Wattenberg, 2007). We seek to provide a perspective on infovis that integrates these research agendas under a coherent vocabulary and framework for design. We enumerate the following contributions. First, we demonstrate how blurry the boundary of infovis is by examining systems that exhibit many of the putative properties of infovis systems, but perhaps would not be considered so. Second, we explore the notion of insight and how, instead of a monolithic definition of insight, there may be multiple types, each with particular characteristics. Third, we discuss design challenges for systems intended for casual audiences. Finally we conclude with challenges for system evaluation in this emerging subfield.",
                "AuthorNamesDeduped": "Zachary Pousman;John T. Stasko;Michael Mateas",
                "AuthorNames": "Zachary Pousman;John Stasko;Michael Mateas",
                "AuthorAffiliation": "School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing and the GVU Center, Georgia Institute of Technology, USA;University of California, Santa Cruz, USA",
                "InternalReferences": "0.1109/infvis.2005.1532126;10.1109/infvis.2004.8;10.1109/infvis.2003.1249031;10.1109/infvis.2004.59;10.1109/visual.1990.146375",
                "AuthorKeywords": "Casual information visualization, ambient infovis, social infovis, editorial, design, evaluation",
                "AminerCitationCount": 501,
                "CitationCountCrossRef": 256,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 4748,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2091,
                "i": [
                    2091
                ]
            }
        },
        {
            "name": "Jesse Kriss",
            "value": 253,
            "numPapers": 3,
            "cluster": "1",
            "visible": 1,
            "index": 520,
            "x": -164.01033156839767,
            "y": -158.5894420786713,
            "vy": 0,
            "vx": 0,
            "r": 1.291306850892343,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "ManyEyes: a Site for Visualization at Internet Scale",
                "DOI": "10.1109/tvcg.2007.70577",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70577",
                "FirstPage": 1121,
                "LastPage": 1128,
                "PaperType": "J",
                "Abstract": "We describe the design and deployment of Many Eyes, a public Web site where users may upload data, create interactive visualizations, and carry on discussions. The goal of the site is to support collaboration around visualizations at a large scale by fostering a social style of data analysis in which visualizations not only serve as a discovery tool for individuals but also as a medium to spur discussion among users. To support this goal, the site includes novel mechanisms for end-user creation of visualizations and asynchronous collaboration around those visualizations. In addition to describing these technologies, we provide a preliminary report on the activity of our users.",
                "AuthorNamesDeduped": "Fernanda B. Viégas;Martin Wattenberg;Frank van Ham;Jesse Kriss;Matthew M. McKeon",
                "AuthorNames": "Fernanda B. Viegas;Martin Wattenberg;Frank van Ham;Jesse Kriss;Matt McKeon",
                "AuthorAffiliation": "IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland",
                "InternalReferences": "0.1109/infvis.2005.1532122;10.1109/visual.1991.175820;10.1109/infvis.2003.1249007;10.1109/infvis.2004.13",
                "AuthorKeywords": "Visualization, World Wide Web, Social Software, Social Data Analysis, Communication-Minded Visualization",
                "AminerCitationCount": 1011,
                "CitationCountCrossRef": 452,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 3516,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2086,
                "i": [
                    2086
                ]
            }
        },
        {
            "name": "Matthew M. McKeon",
            "value": 253,
            "numPapers": 3,
            "cluster": "1",
            "visible": 1,
            "index": 521,
            "x": 228.2807174380235,
            "y": 6.157438264510778,
            "vy": 0,
            "vx": 0,
            "r": 1.291306850892343,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "ManyEyes: a Site for Visualization at Internet Scale",
                "DOI": "10.1109/tvcg.2007.70577",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70577",
                "FirstPage": 1121,
                "LastPage": 1128,
                "PaperType": "J",
                "Abstract": "We describe the design and deployment of Many Eyes, a public Web site where users may upload data, create interactive visualizations, and carry on discussions. The goal of the site is to support collaboration around visualizations at a large scale by fostering a social style of data analysis in which visualizations not only serve as a discovery tool for individuals but also as a medium to spur discussion among users. To support this goal, the site includes novel mechanisms for end-user creation of visualizations and asynchronous collaboration around those visualizations. In addition to describing these technologies, we provide a preliminary report on the activity of our users.",
                "AuthorNamesDeduped": "Fernanda B. Viégas;Martin Wattenberg;Frank van Ham;Jesse Kriss;Matthew M. McKeon",
                "AuthorNames": "Fernanda B. Viegas;Martin Wattenberg;Frank van Ham;Jesse Kriss;Matt McKeon",
                "AuthorAffiliation": "IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland;IBM Research GmbH, Switzerland",
                "InternalReferences": "0.1109/infvis.2005.1532122;10.1109/visual.1991.175820;10.1109/infvis.2003.1249007;10.1109/infvis.2004.13",
                "AuthorKeywords": "Visualization, World Wide Web, Social Software, Social Data Analysis, Communication-Minded Visualization",
                "AminerCitationCount": 1011,
                "CitationCountCrossRef": 452,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 3516,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2086,
                "i": [
                    2086
                ]
            }
        },
        {
            "name": "Eric Newburger",
            "value": 0,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 522,
            "x": -172.65168233717398,
            "y": 149.80452792270185,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visualization According to Statisticians: An Interview Study on the Role of Visualization for Inferential Statistics",
                "DOI": "10.1109/tvcg.2023.3326521",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326521",
                "FirstPage": 230,
                "LastPage": 239,
                "PaperType": "J",
                "Abstract": "Statisticians are not only one of the earliest professional adopters of data visualization, but also some of its most prolific users. Understanding how these professionals utilize visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (19.7 years average in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how to best represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (and not just when reporting results); (2) their mental models of inferential methods tend to be mostly visually based; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help to balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking.",
                "AuthorNamesDeduped": "Eric Newburger;Niklas Elmqvist",
                "AuthorNames": "Eric Newburger;Niklas Elmqvist",
                "AuthorAffiliation": "U.S. Naval Academy, Annapolis, MD, USA;Aarhus University, Aarhus, Denmark",
                "InternalReferences": "10.1109/tvcg.2021.3114830;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346298;10.1109/tvcg.2018.2864907;10.1109/tvcg.2013.183;10.1109/tvcg.2010.164;10.1109/tvcg.2014.2346292;10.1109/tvcg.2007.70541;10.1109/tvcg.2010.161",
                "AuthorKeywords": "Inferential statistics,qualitative interview study,thematic coding,statistical visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 240,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 91,
                "i": [
                    91
                ]
            }
        },
        {
            "name": "Ryan Russell",
            "value": 136,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 523,
            "x": 26.14145254855483,
            "y": -227.30293543782415,
            "vy": 0,
            "vx": 0,
            "r": 1.1565918249856073,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Reactive Vega: A Streaming Dataflow Architecture for Declarative Interactive Visualization",
                "DOI": "10.1109/tvcg.2015.2467091",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467091",
                "FirstPage": 659,
                "LastPage": 668,
                "PaperType": "J",
                "Abstract": "We present Reactive Vega, a system architecture that provides the first robust and comprehensive treatment of declarative visual and interaction design for data visualization. Starting from a single declarative specification, Reactive Vega constructs a dataflow graph in which input data, scene graph elements, and interaction events are all treated as first-class streaming data sources. To support expressive interactive visualizations that may involve time-varying scalar, relational, or hierarchical data, Reactive Vega's dataflow graph can dynamically re-write itself at runtime by extending or pruning branches in a data-driven fashion. We discuss both compile- and run-time optimizations applied within Reactive Vega, and share the results of benchmark studies that indicate superior interactive performance to both D3 and the original, non-reactive Vega system.",
                "AuthorNamesDeduped": "Arvind Satyanarayan;Ryan Russell;Jane Hoffswell;Jeffrey Heer",
                "AuthorNames": "Arvind Satyanarayan;Ryan Russell;Jane Hoffswell;Jeffrey Heer",
                "AuthorAffiliation": "Stanford University;University of Washington;University of Washington;University of Washington",
                "InternalReferences": "0.1109/visual.1995.480821;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2010.144;10.1109/tvcg.2014.2346250;10.1109/tvcg.2013.179;10.1109/tvcg.2010.177;10.1109/visual.1996.567752;10.1109/infvis.2000.885086;10.1109/infvis.2004.12;10.1109/tvcg.2015.2467191;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Information visualization, systems, toolkits, declarative specification, optimization, interaction, streaming data",
                "AminerCitationCount": 267,
                "CitationCountCrossRef": 176,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 2313,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1005,
                "i": [
                    1005
                ]
            }
        },
        {
            "name": "Chris E. Weaver",
            "value": 232,
            "numPapers": 57,
            "cluster": "5",
            "visible": 1,
            "index": 524,
            "x": 134.39320988020904,
            "y": 185.4412713990445,
            "vy": 0,
            "vx": 0,
            "r": 1.2671272308578008,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Building Highly-Coordinated Visualizations in Improvise",
                "DOI": "10.1109/infvis.2004.12",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.12",
                "FirstPage": 159,
                "LastPage": 166,
                "PaperType": "C",
                "Abstract": "Improvise is a fully-implemented system in which users build and browse multiview visualizations interactively using a simple shared-object coordination mechanism coupled with a flexible, expression-based visual abstraction language. By coupling visual abstraction with coordination, users gain precise control over how navigation and selection in the visualization affects the appearance of data in individual views. As a result, it is practical to build visualizations with more views and richer coordination in Improvise than in other visualization systems. Building and browsing activities are integrated in a single, live user interface that lets users alter visualizations quickly and incrementally during data exploration",
                "AuthorNamesDeduped": "Chris E. Weaver",
                "AuthorNames": "C. Weaver",
                "AuthorAffiliation": "Computer Science Department, University of Wisconsin, Madison, USA",
                "InternalReferences": "0.1109/infvis.2002.1173141;10.1109/infvis.2000.885086",
                "AuthorKeywords": "coordinated queries, coordination, exploratory visualization, multiple views, visual abstraction language",
                "AminerCitationCount": 320,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 988,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2453,
                "i": [
                    2453
                ]
            }
        },
        {
            "name": "Lukas Herzberger",
            "value": 0,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 525,
            "x": -224.574928509469,
            "y": -46.00110308424025,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering",
                "DOI": "10.1109/tvcg.2023.3327193",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327193",
                "FirstPage": 1380,
                "LastPage": 1390,
                "PaperType": "J",
                "Abstract": "We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.",
                "AuthorNamesDeduped": "Lukas Herzberger;Markus Hadwiger;Robert Krüger;Peter K. Sorger;Hanspeter Pfister;Eduard Gröller;Johanna Beyer",
                "AuthorNames": "Lukas Herzberger;Markus Hadwiger;Robert Krüger;Peter Sorger;Hanspeter Pfister;Eduard Gröller;Johanna Beyer",
                "AuthorAffiliation": "TU Wien, Austria;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA;Harvard Medical School, USA;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA;TU Wien, Austria;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2744238;10.1109/tvcg.2012.240;10.1109/tvcg.2021.3114786;10.1109/tvcg.2019.2934547;10.1109/visual.2003.1250384;10.1109/tvcg.2014.2346458;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Volume rendering,ray-guided rendering,large-scale data,out-of-core rendering,multi-resolution,multi-channel,web-based visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 228,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 94,
                "i": [
                    94
                ]
            }
        },
        {
            "name": "Robert Krüger",
            "value": 111,
            "numPapers": 80,
            "cluster": "1",
            "visible": 1,
            "index": 526,
            "x": 196.85489731239318,
            "y": -117.89041268961245,
            "vy": 0,
            "vx": 0,
            "r": 1.1278065630397236,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data",
                "DOI": "10.1109/tvcg.2019.2934547",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934547",
                "FirstPage": 227,
                "LastPage": 237,
                "PaperType": "J",
                "Abstract": "Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 109 or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.",
                "AuthorNamesDeduped": "Robert Krüger;Johanna Beyer;Won-Dong Jang;Nam Wook Kim;Artem Sokolov;Peter K. Sorger;Hanspeter Pfister",
                "AuthorNames": "Robert Krueger;Johanna Beyer;Won-Dong Jang;Nam Wook Kim;Artem Sokolov;Peter K. Sorger;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering and Applied Sciences, Harvard University, Cambridge, USA and Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA;Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA",
                "InternalReferences": "0.1109/tvcg.2013.186;10.1109/tvcg.2016.2598468;10.1109/vast.2010.5652443;10.1109/tvcg.2016.2598587;10.1109/vast.2007.4389013;10.1109/tvcg.2012.277;10.1109/tvcg.2012.258;10.1109/vast.2014.7042495;10.1109/tvcg.2013.125;10.1109/tvcg.2007.70569;10.1109/tvcg.2015.2467551;10.1109/tvcg.2017.2744805;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Clustering,Classification,Visual Analysis,Multiplex Tissue Imaging,Digital Pathology,Cancer Systems Biology",
                "AminerCitationCount": 32,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 1226,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 608,
                "i": [
                    608
                ]
            }
        },
        {
            "name": "Peter K. Sorger",
            "value": 67,
            "numPapers": 40,
            "cluster": "6",
            "visible": 1,
            "index": 527,
            "x": -65.58303872447084,
            "y": 220.11102887330424,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data",
                "DOI": "10.1109/tvcg.2019.2934547",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934547",
                "FirstPage": 227,
                "LastPage": 237,
                "PaperType": "J",
                "Abstract": "Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 109 or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.",
                "AuthorNamesDeduped": "Robert Krüger;Johanna Beyer;Won-Dong Jang;Nam Wook Kim;Artem Sokolov;Peter K. Sorger;Hanspeter Pfister",
                "AuthorNames": "Robert Krueger;Johanna Beyer;Won-Dong Jang;Nam Wook Kim;Artem Sokolov;Peter K. Sorger;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering and Applied Sciences, Harvard University, Cambridge, USA and Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA;Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA",
                "InternalReferences": "0.1109/tvcg.2013.186;10.1109/tvcg.2016.2598468;10.1109/vast.2010.5652443;10.1109/tvcg.2016.2598587;10.1109/vast.2007.4389013;10.1109/tvcg.2012.277;10.1109/tvcg.2012.258;10.1109/vast.2014.7042495;10.1109/tvcg.2013.125;10.1109/tvcg.2007.70569;10.1109/tvcg.2015.2467551;10.1109/tvcg.2017.2744805;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744158",
                "AuthorKeywords": "Clustering,Classification,Visual Analysis,Multiplex Tissue Imaging,Digital Pathology,Cancer Systems Biology",
                "AminerCitationCount": 32,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 1226,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 608,
                "i": [
                    608
                ]
            }
        },
        {
            "name": "Eduard Gröller",
            "value": 0,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 528,
            "x": -100.41902076541712,
            "y": -206.79946873363753,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering",
                "DOI": "10.1109/tvcg.2023.3327193",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327193",
                "FirstPage": 1380,
                "LastPage": 1390,
                "PaperType": "J",
                "Abstract": "We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.",
                "AuthorNamesDeduped": "Lukas Herzberger;Markus Hadwiger;Robert Krüger;Peter K. Sorger;Hanspeter Pfister;Eduard Gröller;Johanna Beyer",
                "AuthorNames": "Lukas Herzberger;Markus Hadwiger;Robert Krüger;Peter Sorger;Hanspeter Pfister;Eduard Gröller;Johanna Beyer",
                "AuthorAffiliation": "TU Wien, Austria;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA;Harvard Medical School, USA;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA;TU Wien, Austria;John A. Paulson School of Engineering and Applied Sciences at Harvard University, USA",
                "InternalReferences": "10.1109/tvcg.2017.2744238;10.1109/tvcg.2012.240;10.1109/tvcg.2021.3114786;10.1109/tvcg.2019.2934547;10.1109/visual.2003.1250384;10.1109/tvcg.2014.2346458;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Volume rendering,ray-guided rendering,large-scale data,out-of-core rendering,multi-resolution,multi-channel,web-based visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 228,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 94,
                "i": [
                    94
                ]
            }
        },
        {
            "name": "Won-Ki Jeong",
            "value": 219,
            "numPapers": 31,
            "cluster": "6",
            "visible": 1,
            "index": 529,
            "x": 213.93900997951764,
            "y": 84.73547078398664,
            "vy": 0,
            "vx": 0,
            "r": 1.2521588946459412,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "DXR: A Toolkit for Building Immersive Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865152",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865152",
                "FirstPage": 715,
                "LastPage": 725,
                "PaperType": "J",
                "Abstract": "This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging, and often require complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.",
                "AuthorNamesDeduped": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorNames": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorAffiliation": "Harvard University, Cambridge, MA, US;Harvard University, Cambridge, MA, US;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;Monash University, Clayton, VIC, AU;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;The University of Edinburgh, Edinburgh, Edinburgh, GB;Harvard University, Cambridge, MA, US",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346322;10.1109/tvcg.2016.2599107;10.1109/infvis.2004.64;10.1109/tvcg.2010.144;10.1109/tvcg.2016.2598620;10.1109/tvcg.2015.2467449;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2017.2744079;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Augmented Reality,Virtual Reality,Immersive Visualization,Immersive Analytics,Visualization Toolkit",
                "AminerCitationCount": 137,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 4869,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 649,
                "i": [
                    649
                ]
            }
        },
        {
            "name": "Jens H. Krüger",
            "value": 249,
            "numPapers": 44,
            "cluster": "6",
            "visible": 1,
            "index": 530,
            "x": -215.1928723335894,
            "y": 82.10985139932656,
            "vy": 0,
            "vx": 0,
            "r": 1.2867012089810017,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Acceleration techniques for GPU-based volume rendering",
                "DOI": "10.1109/visual.2003.1250384",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250384",
                "FirstPage": 287,
                "LastPage": 292,
                "PaperType": "C",
                "Abstract": "Nowadays, direct volume rendering via 3D textures has positioned itself as an efficient tool for the display and visual analysis of volumetric scalar fields. It is commonly accepted, that for reasonably sized data sets appropriate quality at interactive rates can be achieved by means of this technique. However, despite these benefits one important issue has received little attention throughout the ongoing discussion of texture based volume rendering: the integration of acceleration techniques to reduce per-fragment operations. In this paper, we address the integration of early ray termination and empty-space skipping into texture based volume rendering on graphical processing units (GPU). Therefore, we describe volume ray-casting on programmable graphics hardware as an alternative to object-order approaches. We exploit the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight. We demonstrate performance gains up to a factor of 3 for typical renditions of volumetric data sets on the ATI 9700 graphics card.",
                "AuthorNamesDeduped": "Jens H. Krüger;Rüdiger Westermann",
                "AuthorNames": "J. Kruger;R. Westermann",
                "AuthorAffiliation": "Computer Graphics and Visualization Group, Technical University Munich, Germany;Computer Graphics and Visualization Group, Technical University Munich, Germany",
                "InternalReferences": "0.1109/visual.1999.809889;10.1109/visual.1997.663880;10.1109/visual.1993.398852;10.1109/visual.2002.1183764",
                "AuthorKeywords": "Volume Rendering, Programmable Graphics Hardware, Ray-Casting",
                "AminerCitationCount": 1310,
                "CitationCountCrossRef": 251,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 1723,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2657,
                "i": [
                    2657
                ]
            }
        },
        {
            "name": "Lace M. K. Padilla",
            "value": 107,
            "numPapers": 53,
            "cluster": "5",
            "visible": 1,
            "index": 531,
            "x": 103.30935151194662,
            "y": -206.09992210134638,
            "vy": 0,
            "vx": 0,
            "r": 1.1232009211283822,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Visualizing Uncertain Tropical Cyclone Predictions using Representative Samples from Ensembles of Forecast Tracks",
                "DOI": "10.1109/tvcg.2018.2865193",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865193",
                "FirstPage": 882,
                "LastPage": 891,
                "PaperType": "J",
                "Abstract": "A common approach to sampling the space of a prediction is the generation of an ensemble of potential outcomes, where the ensemble's distribution reveals the statistical structure of the prediction space. For example, the US National Hurricane Center generates multiple day predictions for a storm's path, size, and wind speed, and then uses a Monte Carlo approach to sample this prediction into a large ensemble of potential storm outcomes. Various forms of summary visualizations are generated from such an ensemble, often using spatial spread to indicate its statistical characteristics. However, studies have shown that changes in the size of such summary glyphs, representing changes in the uncertainty of the prediction, are frequently confounded with other attributes of the phenomenon, such as its size or strength. In addition, simulation ensembles typically encode multivariate information, which can be difficult or confusing to include in a summary display. This problem can be overcome by directly displaying the ensemble as a set of annotated trajectories, however this solution will not be effective if ensembles are densely overdrawn or structurally disorganized. We propose to overcome these difficulties by selectively sampling the original ensemble, constructing a smaller representative and spatially well organized ensemble. This can be drawn directly as a set of paths that implicitly reveals the underlying spatial uncertainty distribution of the prediction. Since this approach does not use a visual channel to encode uncertainty, additional information can more easily be encoded in the display without leading to visual confusion. To demonstrate our argument, we describe the development of a visualization for ensembles of tropical cyclone forecast tracks, explaining how their spatial and temporal predictions, as well as other crucial storm characteristics such as size and intensity, can be clearly revealed. We verify the effectiveness of this visualization approach through a cognitive study exploring how storm damage estimates are affected by the density of tracks drawn, and by the presence or absence of annotating information on storm size and intensity.",
                "AuthorNamesDeduped": "Le Liu 0007;Lace M. K. Padilla;Sarah H. Creem-Regehr;Donald H. House",
                "AuthorNames": "Le Liu;Lace Padilla;Sarah H. Creem-Regehr;Donald H. House",
                "AuthorAffiliation": "Magic Weaver Inc., Santa Clara, CA;Northwestern University, Evanston, IL, US;University of Utah, Salt Lake City, UT, US;Clemson University, Clemson, SC, US",
                "InternalReferences": "0.1109/tvcg.2017.2743898;10.1109/tvcg.2010.181;10.1109/tvcg.2014.2346455",
                "AuthorKeywords": "uncertainty visualization,hurricane forecasts,ensemble visualization,ensemble sampling,implicit uncertainty",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 1056,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 663,
                "i": [
                    663
                ]
            }
        },
        {
            "name": "Chris Bryan",
            "value": 115,
            "numPapers": 49,
            "cluster": "6",
            "visible": 1,
            "index": 532,
            "x": 63.10067370983174,
            "y": 221.964648035144,
            "vy": 0,
            "vx": 0,
            "r": 1.132412204951065,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Temporal Summary Images: An Approach to Narrative Visualization via Interactive Annotation Generation and Placement",
                "DOI": "10.1109/tvcg.2016.2598876",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598876",
                "FirstPage": 511,
                "LastPage": 520,
                "PaperType": "J",
                "Abstract": "Visualization is a powerful technique for analysis and communication of complex, multidimensional, and time-varying data. However, it can be difficult to manually synthesize a coherent narrative in a chart or graph due to the quantity of visualized attributes, a variety of salient features, and the awareness required to interpret points of interest (POls). We present Temporal Summary Images (TSIs) as an approach for both exploring this data and creating stories from it. As a visualization, a TSI is composed of three common components: (1) a temporal layout, (2) comic strip-style data snapshots, and (3) textual annotations. To augment user analysis and exploration, we have developed a number of interactive techniques that recommend relevant data features and design choices, including an automatic annotations workflow. As the analysis and visual design processes converge, the resultant image becomes appropriate for data storytelling. For validation, we use a prototype implementation for TSIs to conduct two case studies with large-scale, scientific simulation datasets.",
                "AuthorNamesDeduped": "Chris Bryan;Kwan-Liu Ma;Jonathan Woodring",
                "AuthorNames": "Chris Bryan;Kwan-Liu Ma;Jonathan Woodring",
                "AuthorAffiliation": "University of California, Davis;University of California, Davis;Los Alamos National Laboratory",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2007.70594;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/vast.2010.5652890;10.1109/tvcg.2012.229;10.1109/tvcg.2012.212;10.1109/tvcg.2011.195;10.1109/vast.2012.6400487",
                "AuthorKeywords": "Narrative visualization;storytelling;annotations;comic strip visualization;time-varying data",
                "AminerCitationCount": 81,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2775,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 897,
                "i": [
                    897
                ]
            }
        },
        {
            "name": "Rita Borgo",
            "value": 176,
            "numPapers": 69,
            "cluster": "5",
            "visible": 1,
            "index": 533,
            "x": -196.64782458274732,
            "y": -121.15953568280561,
            "vy": 0,
            "vx": 0,
            "r": 1.2026482440990214,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "An Empirical Study on Using Visual Embellishments in Visualization",
                "DOI": "10.1109/tvcg.2012.197",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.197",
                "FirstPage": 2759,
                "LastPage": 2768,
                "PaperType": "J",
                "Abstract": "In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces “divided attention”, and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.",
                "AuthorNamesDeduped": "Rita Borgo;Alfie Abdul-Rahman;Farhan Mohamed;Philip W. Grant;Irene Reppa;Luciano Floridi;Min Chen 0001",
                "AuthorNames": "Rita Borgo;Alfie Abdul-Rahman;Farhan Mohamed;Philip W. Grant;Irene Reppa;Luciano Floridi;Min Chen",
                "AuthorAffiliation": "Computer Science, Swansea University, UK;Oxford e-Research Centre, University of Oxford, UK;Universiti Teknologi Malaysia, Malaysia;Computer Science, Swansea University, UK;Psychology Department, Swansea University, UK;St. Cross College, University of Oxford, UK;Oxford e-Research Centre, University of Oxford, UK",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/visual.1996.568118;10.1109/tvcg.2008.171;10.1109/tvcg.2011.175",
                "AuthorKeywords": "Visual embellishments, metaphors, icons, cognition, working memory, long-term memory, visual search, evaluation",
                "AminerCitationCount": 156,
                "CitationCountCrossRef": 97,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 2702,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1406,
                "i": [
                    1406
                ]
            }
        },
        {
            "name": "Tatiana von Landesberger",
            "value": 201,
            "numPapers": 62,
            "cluster": "6",
            "visible": 1,
            "index": 534,
            "x": 227.0565765522618,
            "y": -43.53517020027462,
            "vy": 0,
            "vx": 0,
            "r": 1.231433506044905,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "In Search of Patient Zero: Visual Analytics of Pathogen Transmission Pathways in Hospitals",
                "DOI": "10.1109/tvcg.2020.3030437",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030437",
                "FirstPage": 711,
                "LastPage": 721,
                "PaperType": "J",
                "Abstract": "Pathogen outbreaks (i.e., outbreaks of bacteria and viruses) in hospitals can cause high mortality rates and increase costs for hospitals significantly. An outbreak is generally noticed when the number of infected patients rises above an endemic level or the usual prevalence of a pathogen in a defined population. Reconstructing transmission pathways back to the source of an outbreak - the patient zero or index patient - requires the analysis of microbiological data and patient contacts. This is often manually completed by infection control experts. We present a novel visual analytics approach to support the analysis of transmission pathways, patient contacts, the progression of the outbreak, and patient timelines during hospitalization. Infection control experts applied our solution to a real outbreak of Klebsiella pneumoniae in a large German hospital. Using our system, our experts were able to scale the analysis of transmission pathways to longer time intervals (i.e., several years of data instead of days) and across a larger number of wards. Also, the system is able to reduce the analysis time from days to hours. In our final study, feedback from twenty-five experts from seven German hospitals provides evidence that our solution brings significant benefits for analyzing outbreaks.",
                "AuthorNamesDeduped": "Tom Baumgartl;Markus Petzold;Marcel Wunderlich;Markus Höhn;Daniel Archambault;M. Lieser;A. Dalpke;Simone Scheithauer;Michael Marschollek;Vanessa Eichel;Nico T. Mutters;Highmed Consortium;Tatiana von Landesberger",
                "AuthorNames": "T. Baumgartl;N. T. Mutters;Highmed Consortium;T. Von Landesberger;M. Petzold;M. Wunderlich;M. Hohn;D. Archambault;M. Lieser;A. Dalpke;S. Scheithauer;M. Marschollek;V. M. Eichel",
                "AuthorAffiliation": "TU Darmstadt, Darmstadt, Germany;University Hospital Heidelberg, Heidelberg, Germany;TU Darmstadt, Darmstadt, Germany;TU Darmstadt, Darmstadt, Germany;Swansea University, Swansea, United Kingdom;University Hospital Heidelberg, Heidelberg, Germany;TU Dresden, Dresden, Germany;University Medicine Gottingen, Universitat Gottingen, Germany;L. Reichertz Institute for Medical Informatics, Hannover, Germany;University Hospital Heidelberg, Heidelberg, Germany;University Hospital Heidelberg, Heidelberg, Germany;L. Reichertz Institute for Medical Informatics, Hannover, Germany;TU Darmstadt, Darmstadt, Germany and Universitat Rostock, Rostock, Germany",
                "InternalReferences": "0.1109/vast.2017.8585487;10.1109/tvcg.2015.2467851;10.1109/tvcg.2011.185;10.1109/vast.2015.7347626;10.1109/tvcg.2011.239;10.1109/tvcg.2006.156;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598588;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.196;10.1109/tvcg.2013.200;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2012.212;10.1109/tvcg.2018.2864899;10.1109/tvcg.2015.2468078;10.1109/vast.2012.6400553;10.1109/vast.2009.5333893;10.1109/tvcg.2015.2467751",
                "AuthorKeywords": "dynamic networks,visualization applications,health,medicine,outbreak,Klebsiella,infection control",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 990,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 476,
                "i": [
                    476
                ]
            }
        },
        {
            "name": "Ivan Viola",
            "value": 217,
            "numPapers": 77,
            "cluster": "6",
            "visible": 1,
            "index": 535,
            "x": -138.14591624162077,
            "y": 185.64941644336804,
            "vy": 0,
            "vx": 0,
            "r": 1.2498560736902706,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Modeling in the Time of COVID-19: Statistical and Rule-based Mesoscale Models",
                "DOI": "10.1109/tvcg.2020.3030415",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030415",
                "FirstPage": 722,
                "LastPage": 732,
                "PaperType": "J",
                "Abstract": "We present a new technique for the rapid modeling and construction of scientifically accurate mesoscale biological models. The resulting 3D models are based on a few 2D microscopy scans and the latest knowledge available about the biological entity, represented as a set of geometric relationships. Our new visual-programming technique is based on statistical and rule-based modeling approaches that are rapid to author, fast to construct, and easy to revise. From a few 2D microscopy scans, we determine the statistical properties of various structural aspects, such as the outer membrane shape, the spatial properties, and the distribution characteristics of the macromolecular elements on the membrane. This information is utilized in the construction of the 3D model. Once all the imaging evidence is incorporated into the model, additional information can be incorporated by interactively defining the rules that spatially characterize the rest of the biological entity, such as mutual interactions among macromolecules, and their distances and orientations relative to other structures. These rules are defined through an intuitive 3D interactive visualization as a visual-programming feedback loop. We demonstrate the applicability of our approach on a use case of the modeling procedure of the SARS-CoV-2 virion ultrastructure. This atomistic model, which we present here, can steer biological research to new promising directions in our efforts to fight the spread of the virus.",
                "AuthorNamesDeduped": "Ngan V. T. Nguyen;Ondrej Strnad;Tobias Klein;Deng Luo;Ruwayda Alharbi;Peter Wonka;Martina Maritan;Peter Mindek;Ludovic Autin;David S. Goodsell;Ivan Viola",
                "AuthorNames": "Ngan Nguyen;Ivan Viola;Ondřej Strnad;Tobias Klein;Deng Luo;Ruwayda Alharbi;Peter Wonka;Martina Maritan;Peter Mindek;Ludovic Autin;David S. Goodsell",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;TU Wien and Nanographics GmbH;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;Scripps Research Institute, US;TU Wien and Nanographics GmbH;Scripps Research Institute, US;Scripps Research Institute, US;King Abdullah University of Science and Technology (KAUST), Saudi Arabia",
                "InternalReferences": "0.1109/tvcg.2019.2934334;10.1109/tvcg.2017.2744258;10.1109/tvcg.2009.157;10.1109/tvcg.2017.2744518;10.1109/tvcg.2013.158;10.1109/tvcg.2006.115",
                "AuthorKeywords": "molecular visualization,mesoscale modeling",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 2739,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 428,
                "i": [
                    428
                ]
            }
        },
        {
            "name": "Peter Mindek",
            "value": 33,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 536,
            "x": -23.561848767161536,
            "y": -230.4231743611597,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Modeling in the Time of COVID-19: Statistical and Rule-based Mesoscale Models",
                "DOI": "10.1109/tvcg.2020.3030415",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030415",
                "FirstPage": 722,
                "LastPage": 732,
                "PaperType": "J",
                "Abstract": "We present a new technique for the rapid modeling and construction of scientifically accurate mesoscale biological models. The resulting 3D models are based on a few 2D microscopy scans and the latest knowledge available about the biological entity, represented as a set of geometric relationships. Our new visual-programming technique is based on statistical and rule-based modeling approaches that are rapid to author, fast to construct, and easy to revise. From a few 2D microscopy scans, we determine the statistical properties of various structural aspects, such as the outer membrane shape, the spatial properties, and the distribution characteristics of the macromolecular elements on the membrane. This information is utilized in the construction of the 3D model. Once all the imaging evidence is incorporated into the model, additional information can be incorporated by interactively defining the rules that spatially characterize the rest of the biological entity, such as mutual interactions among macromolecules, and their distances and orientations relative to other structures. These rules are defined through an intuitive 3D interactive visualization as a visual-programming feedback loop. We demonstrate the applicability of our approach on a use case of the modeling procedure of the SARS-CoV-2 virion ultrastructure. This atomistic model, which we present here, can steer biological research to new promising directions in our efforts to fight the spread of the virus.",
                "AuthorNamesDeduped": "Ngan V. T. Nguyen;Ondrej Strnad;Tobias Klein;Deng Luo;Ruwayda Alharbi;Peter Wonka;Martina Maritan;Peter Mindek;Ludovic Autin;David S. Goodsell;Ivan Viola",
                "AuthorNames": "Ngan Nguyen;Ivan Viola;Ondřej Strnad;Tobias Klein;Deng Luo;Ruwayda Alharbi;Peter Wonka;Martina Maritan;Peter Mindek;Ludovic Autin;David S. Goodsell",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;TU Wien and Nanographics GmbH;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;Scripps Research Institute, US;TU Wien and Nanographics GmbH;Scripps Research Institute, US;Scripps Research Institute, US;King Abdullah University of Science and Technology (KAUST), Saudi Arabia",
                "InternalReferences": "0.1109/tvcg.2019.2934334;10.1109/tvcg.2017.2744258;10.1109/tvcg.2009.157;10.1109/tvcg.2017.2744518;10.1109/tvcg.2013.158;10.1109/tvcg.2006.115",
                "AuthorKeywords": "molecular visualization,mesoscale modeling",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 2739,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 428,
                "i": [
                    428
                ]
            }
        },
        {
            "name": "Peter Lindstrom 0001",
            "value": 233,
            "numPapers": 104,
            "cluster": "11",
            "visible": 1,
            "index": 537,
            "x": 173.18356783860375,
            "y": 154.13452510937236,
            "vy": 0,
            "vx": 0,
            "r": 1.2682786413356362,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Fast and Efficient Compression of Floating-Point Data",
                "DOI": "10.1109/tvcg.2006.143",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.143",
                "FirstPage": 1245,
                "LastPage": 1250,
                "PaperType": "J",
                "Abstract": "Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data needed to be transfered. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data",
                "AuthorNamesDeduped": "Peter Lindstrom 0001;Martin Isenburg",
                "AuthorNames": "Peter Lindstrom;Martin Isenburg",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, USA;University of California, Berkeley, USA",
                "InternalReferences": "0.1109/visual.1999.809868;10.1109/visual.2000.885711;10.1109/visual.2002.1183768;10.1109/visual.1996.568138",
                "AuthorKeywords": "High throughput, lossless compression, file compaction for I/O efficiency, fast entropy coding, range coder, predictive coding, large scale simulation and visualization",
                "AminerCitationCount": 475,
                "CitationCountCrossRef": 306,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 3732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2268,
                "i": [
                    2268
                ]
            }
        },
        {
            "name": "Nathaniel Fout",
            "value": 60,
            "numPapers": 11,
            "cluster": "11",
            "visible": 1,
            "index": 538,
            "x": -232.03209193597257,
            "y": 3.332913412674174,
            "vy": 0,
            "vx": 0,
            "r": 1.0690846286701208,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Transform Coding for Hardware-accelerated Volume Rendering",
                "DOI": "10.1109/tvcg.2007.70516",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70516",
                "FirstPage": 1600,
                "LastPage": 1607,
                "PaperType": "J",
                "Abstract": "Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by offline compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.",
                "AuthorNamesDeduped": "Nathaniel Fout;Kwan-Liu Ma",
                "AuthorNames": "Nathaniel Fout;Kwan-Liu Ma",
                "AuthorAffiliation": "Department of Computer Science, University of California, Davis, USA;Department of Computer Science, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.2002.1183757;10.1109/visual.2001.964520;10.1109/visual.2004.95;10.1109/visual.1993.398845;10.1109/visual.2003.1250357;10.1109/visual.1995.480812;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Volume Compression, Compressed Volume Rendering, Transform Coding, Hardware-accelerated Volume Rendering",
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 492,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2175,
                "i": [
                    2175
                ]
            }
        },
        {
            "name": "Victor Antonio Paludetto Magri",
            "value": 0,
            "numPapers": 6,
            "cluster": "11",
            "visible": 1,
            "index": 539,
            "x": 168.9985904930462,
            "y": -159.3407556507866,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A General Framework for Progressive Data Compression and Retrieval",
                "DOI": "10.1109/tvcg.2023.3327186",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327186",
                "FirstPage": 1358,
                "LastPage": 1368,
                "PaperType": "J",
                "Abstract": "In scientific simulations, observations, and experiments, the transfer of data to and from disk and across networks has become a major bottleneck for data analysis and visualization. Compression techniques have been employed to tackle this challenge, but traditional lossy methods often demand conservative error tolerances to meet the numerical accuracy requirements of both anticipated and unknown data analysis tasks. Progressive data compression and retrieval has emerged as a promising solution, where each analysis task dictates its own accuracy needs. However, few analysis algorithms inherently support progressive data processing, and adapting compression techniques, file formats, client/server frameworks, and APIs to support progressivity can be challenging. This paper presents a framework that enables progressive-precision data queries for any data compressor or numerical representation. Our strategy hinges on a multi-component representation that successively reduces the error between the original and compressed field, allowing each field in the progressive sequence to be expressed as a partial sum of components. We have implemented this approach with four established scientific data compressors and assessed its effectiveness using real-world data sets from the SDRBench collection. The results show that our framework competes in accuracy with the standalone compressors it is based upon. Additionally, (de)compression time is proportional to the number of components requested by the user. Finally, our framework allows for fully lossless compression using lossy compressors when a sufficient number of components are employed.",
                "AuthorNamesDeduped": "Victor Antonio Paludetto Magri;Peter Lindstrom 0001",
                "AuthorNames": "Victor A. P. Magri;Peter Lindstrom",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory., U.S.;Lawrence Livermore National Laboratory., U.S.",
                "InternalReferences": "10.1109/tvcg.2007.70516;10.1109/tvcg.2018.2864853;10.1109/tvcg.2020.3030381;10.1109/tvcg.2014.2346458;10.1109/tvcg.2006.143;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Lossy to lossless compression,progressive precision,multi-component expansion,floating-point data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 224,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 98,
                "i": [
                    98
                ]
            }
        },
        {
            "name": "Peer-Timo Bremer",
            "value": 679,
            "numPapers": 128,
            "cluster": "11",
            "visible": 1,
            "index": 540,
            "x": -16.996897735236875,
            "y": 231.86441181729012,
            "vy": 0,
            "vx": 0,
            "r": 1.7818077144502014,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Visual Exploration of High Dimensional Scalar Functions",
                "DOI": "10.1109/tvcg.2010.213",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.213",
                "FirstPage": 1271,
                "LastPage": 1280,
                "PaperType": "J",
                "Abstract": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.",
                "AuthorNamesDeduped": "Samuel Gerber;Peer-Timo Bremer;Valerio Pascucci;Ross T. Whitaker",
                "AuthorNames": "Samuel Gerber;Peer-Timo Bremer;Valerio Pascucci;Ross Whitaker",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Center of Applied Scientific Computing CASC, Lawrence Livemore National Laboratory, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.2004.96;10.1109/tvcg.2007.70603;10.1109/tvcg.2006.186;10.1109/tvcg.2007.70552;10.1109/tvcg.2007.70601;10.1109/visual.2005.1532839",
                "AuthorKeywords": "Morse theory, High-dimensional visualization, Morse-Smale complex",
                "AminerCitationCount": 127,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1445,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1773,
                "i": [
                    1773
                ]
            }
        },
        {
            "name": "Valerio Pascucci",
            "value": 1085,
            "numPapers": 227,
            "cluster": "11",
            "visible": 1,
            "index": 541,
            "x": -144.22240724302435,
            "y": -182.61954235247453,
            "vy": 0,
            "vx": 0,
            "r": 2.2492803684513527,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Martin Isenburg",
            "value": 68,
            "numPapers": 13,
            "cluster": "11",
            "visible": 1,
            "index": 542,
            "x": 229.91484384730276,
            "y": 37.27149820801912,
            "vy": 0,
            "vx": 0,
            "r": 1.0782959124928038,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Fast and Efficient Compression of Floating-Point Data",
                "DOI": "10.1109/tvcg.2006.143",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.143",
                "FirstPage": 1245,
                "LastPage": 1250,
                "PaperType": "J",
                "Abstract": "Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data needed to be transfered. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data",
                "AuthorNamesDeduped": "Peter Lindstrom 0001;Martin Isenburg",
                "AuthorNames": "Peter Lindstrom;Martin Isenburg",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, USA;University of California, Berkeley, USA",
                "InternalReferences": "0.1109/visual.1999.809868;10.1109/visual.2000.885711;10.1109/visual.2002.1183768;10.1109/visual.1996.568138",
                "AuthorKeywords": "High throughput, lossless compression, file compaction for I/O efficiency, fast entropy coding, range coder, predictive coding, large scale simulation and visualization",
                "AminerCitationCount": 475,
                "CitationCountCrossRef": 306,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 3732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2268,
                "i": [
                    2268
                ]
            }
        },
        {
            "name": "Jens Schneider 0002",
            "value": 140,
            "numPapers": 45,
            "cluster": "6",
            "visible": 1,
            "index": 543,
            "x": -194.88795804909873,
            "y": 127.94015713392236,
            "vy": 0,
            "vx": 0,
            "r": 1.1611974668969487,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "The Mixture Graph-A Data Structure for Compressing, Rendering, and Querying Segmentation Histograms",
                "DOI": "10.1109/tvcg.2020.3030451",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030451",
                "FirstPage": 645,
                "LastPage": 655,
                "PaperType": "J",
                "Abstract": "In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leafs) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178 x speed-up over naive parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions.",
                "AuthorNamesDeduped": "Khaled A. Al-Thelaya;Marco Agus;Jens Schneider 0002",
                "AuthorNames": "Khaled Ai- Thelaya;Marco Agus;Jens Schneider",
                "AuthorAffiliation": "Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar;Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar;Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2014.2346312;10.1109/tvcg.2013.142;10.1109/tvcg.2018.2864847;10.1109/tvcg.2007.70516;10.1109/tvcg.2017.2744238;10.1109/visual.2003.1250386;10.1109/tvcg.2014.2346371;10.1109/tvcg.2009.178;10.1109/tvcg.2010.168;10.1109/tvcg.2012.240;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Segmented Volumes,Data Structures,Sparse Data",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 438,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 444,
                "i": [
                    444
                ]
            }
        },
        {
            "name": "Tushar M. Athawale",
            "value": 52,
            "numPapers": 70,
            "cluster": "11",
            "visible": 1,
            "index": 544,
            "x": 57.33465368410723,
            "y": -226.19181569394482,
            "vy": 0,
            "vx": 0,
            "r": 1.059873344847438,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models",
                "DOI": "10.1109/tvcg.2022.3209424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209424",
                "FirstPage": 613,
                "LastPage": 623,
                "PaperType": "J",
                "Abstract": "Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Christopher R. Johnson 0001;Sudhanshu Sane;David Pugmire",
                "AuthorNames": "Tushar M. Athawale;Chris R. Johnson;Sudhanshu Sane;David Pugmire",
                "AuthorAffiliation": "Oak Ridge National Laboratory, USA;Scientific Computing & Imaging (SCI) Institute, University of Utah, USA;Luminary Cloud, Inc., USA;Oak Ridge National Laboratory, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030394;10.1109/tvcg.2015.2467958;10.1109/tvcg.2018.2864432;10.1109/tvcg.2015.2467204;10.1109/tvcg.2012.227;10.1109/infvis.2002.1173157;10.1109/tvcg.2017.2744099;10.1109/tvcg.2009.131;10.1109/tvcg.2008.116;10.1109/visual.1996.568116;10.1109/tvcg.2007.70518;10.1109/tvcg.2020.3030365;10.1109/tvcg.2018.2864846;10.1109/tvcg.2006.165;10.1109/tvcg.2016.2599017;10.1109/tvcg.2013.143;10.1109/tvcg.2016.2599040;10.1109/vast.2006.261424;10.1109/tvcg.2019.2934242;10.1109/tvcg.2020.3030466;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864505;10.1109/tvcg.2008.119",
                "AuthorKeywords": "Uncertainty visualization,fiber surfaces,and probability",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 346,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 221,
                "i": [
                    221
                ]
            }
        },
        {
            "name": "Christopher R. Johnson 0001",
            "value": 122,
            "numPapers": 42,
            "cluster": "6",
            "visible": 1,
            "index": 545,
            "x": 110.61502204364388,
            "y": 205.7044406382229,
            "vy": 0,
            "vx": 0,
            "r": 1.1404720782959126,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models",
                "DOI": "10.1109/tvcg.2022.3209424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209424",
                "FirstPage": 613,
                "LastPage": 623,
                "PaperType": "J",
                "Abstract": "Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Christopher R. Johnson 0001;Sudhanshu Sane;David Pugmire",
                "AuthorNames": "Tushar M. Athawale;Chris R. Johnson;Sudhanshu Sane;David Pugmire",
                "AuthorAffiliation": "Oak Ridge National Laboratory, USA;Scientific Computing & Imaging (SCI) Institute, University of Utah, USA;Luminary Cloud, Inc., USA;Oak Ridge National Laboratory, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030394;10.1109/tvcg.2015.2467958;10.1109/tvcg.2018.2864432;10.1109/tvcg.2015.2467204;10.1109/tvcg.2012.227;10.1109/infvis.2002.1173157;10.1109/tvcg.2017.2744099;10.1109/tvcg.2009.131;10.1109/tvcg.2008.116;10.1109/visual.1996.568116;10.1109/tvcg.2007.70518;10.1109/tvcg.2020.3030365;10.1109/tvcg.2018.2864846;10.1109/tvcg.2006.165;10.1109/tvcg.2016.2599017;10.1109/tvcg.2013.143;10.1109/tvcg.2016.2599040;10.1109/vast.2006.261424;10.1109/tvcg.2019.2934242;10.1109/tvcg.2020.3030466;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864505;10.1109/tvcg.2008.119",
                "AuthorKeywords": "Uncertainty visualization,fiber surfaces,and probability",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 346,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 221,
                "i": [
                    221
                ]
            }
        },
        {
            "name": "Jun Han 0010",
            "value": 32,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 546,
            "x": -220.71745759234832,
            "y": -77.03118793040858,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "TSR-TVD: Temporal Super-Resolution for Time-Varying Data Analysis and Visualization",
                "DOI": "10.1109/tvcg.2019.2934255",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934255",
                "FirstPage": 205,
                "LastPage": 215,
                "PaperType": "J",
                "Abstract": "We present TSR-TVD, a novel deep learning framework that generates temporal super-resolution (TSR) of time-varying data (TVD) using adversarial learning. TSR-TVD is the first work that applies the recurrent generative network (RGN), a combination of the recurrent neural network (RNN) and generative adversarial network (GAN), to generate temporal high-resolution volume sequences from low-resolution ones. The design of TSR-TVD includes a generator and a discriminator. The generator takes a pair of volumes as input and outputs the synthesized intermediate volume sequence through forward and backward predictions. The discriminator takes the synthesized intermediate volumes as input and produces a score indicating the realness of the volumes. Our method handles multivariate data as well where the trained network from one variable is applied to generate TSR for another variable. To demonstrate the effectiveness of TSR-TVD, we show quantitative and qualitative results with several time-varying multivariate data sets and compare our method against standard linear interpolation and solutions solely based on RNN or CNN.",
                "AuthorNamesDeduped": "Jun Han 0010;Chaoli Wang 0001",
                "AuthorNames": "Jun Han;Chaoli Wang",
                "AuthorAffiliation": "Department of Computer Science and Engineering, University of Notre Dame, Notre Dame;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2008.184;10.1109/visual.2005.1532857;10.1109/tvcg.2015.2467431;10.1109/tvcg.2006.165;10.1109/visual.1999.809910;10.1109/visual.2005.1532792;10.1109/tvcg.2018.2864808;10.1109/visual.2003.1250413;10.1109/tvcg.2008.140;10.1109/visual.2003.1250402",
                "AuthorKeywords": "Time-varying data visualization,super-resolution,deep learning,recurrent generative network",
                "AminerCitationCount": 54,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 1550,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 575,
                "i": [
                    575
                ]
            }
        },
        {
            "name": "Chaoli Wang 0001",
            "value": 192,
            "numPapers": 81,
            "cluster": "6",
            "visible": 1,
            "index": 547,
            "x": 214.98042276278272,
            "y": -92.37650041398632,
            "vy": 0,
            "vx": 0,
            "r": 1.221070811744387,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "TSR-TVD: Temporal Super-Resolution for Time-Varying Data Analysis and Visualization",
                "DOI": "10.1109/tvcg.2019.2934255",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934255",
                "FirstPage": 205,
                "LastPage": 215,
                "PaperType": "J",
                "Abstract": "We present TSR-TVD, a novel deep learning framework that generates temporal super-resolution (TSR) of time-varying data (TVD) using adversarial learning. TSR-TVD is the first work that applies the recurrent generative network (RGN), a combination of the recurrent neural network (RNN) and generative adversarial network (GAN), to generate temporal high-resolution volume sequences from low-resolution ones. The design of TSR-TVD includes a generator and a discriminator. The generator takes a pair of volumes as input and outputs the synthesized intermediate volume sequence through forward and backward predictions. The discriminator takes the synthesized intermediate volumes as input and produces a score indicating the realness of the volumes. Our method handles multivariate data as well where the trained network from one variable is applied to generate TSR for another variable. To demonstrate the effectiveness of TSR-TVD, we show quantitative and qualitative results with several time-varying multivariate data sets and compare our method against standard linear interpolation and solutions solely based on RNN or CNN.",
                "AuthorNamesDeduped": "Jun Han 0010;Chaoli Wang 0001",
                "AuthorNames": "Jun Han;Chaoli Wang",
                "AuthorAffiliation": "Department of Computer Science and Engineering, University of Notre Dame, Notre Dame;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2008.184;10.1109/visual.2005.1532857;10.1109/tvcg.2015.2467431;10.1109/tvcg.2006.165;10.1109/visual.1999.809910;10.1109/visual.2005.1532792;10.1109/tvcg.2018.2864808;10.1109/visual.2003.1250413;10.1109/tvcg.2008.140;10.1109/visual.2003.1250402",
                "AuthorKeywords": "Time-varying data visualization,super-resolution,deep learning,recurrent generative network",
                "AminerCitationCount": 54,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 1550,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 575,
                "i": [
                    575
                ]
            }
        },
        {
            "name": "Peter Rautek",
            "value": 109,
            "numPapers": 64,
            "cluster": "11",
            "visible": 1,
            "index": 548,
            "x": -96.20818487483217,
            "y": 213.52748104890415,
            "vy": 0,
            "vx": 0,
            "r": 1.125503742084053,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "ViSlang: A System for Interpreted Domain-Specific Languages for Scientific Visualization",
                "DOI": "10.1109/tvcg.2014.2346318",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346318",
                "FirstPage": 2388,
                "LastPage": 2396,
                "PaperType": "J",
                "Abstract": "Researchers from many domains use scientific visualization in their daily practice. Existing implementations of algorithms usually come with a graphical user interface (high-level interface), or as software library or source code (low-level interface). In this paper we present a system that integrates domain-specific languages (DSLs) and facilitates the creation of new DSLs. DSLs provide an effective interface for domain scientists avoiding the difficulties involved with low-level interfaces and at the same time offering more flexibility than high-level interfaces. We describe the design and implementation of ViSlang, an interpreted language specifically tailored for scientific visualization. A major contribution of our design is the extensibility of the ViSlang language. Novel DSLs that are tailored to the problems of the domain can be created and integrated into ViSlang. We show that our approach can be added to existing user interfaces to increase the flexibility for expert users on demand, but at the same time does not interfere with the user experience of novice users. To demonstrate the flexibility of our approach we present new DSLs for volume processing, querying and visualization. We report the implementation effort for new DSLs and compare our approach with Matlab and Python implementations in terms of run-time performance.",
                "AuthorNamesDeduped": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorNames": "Peter Rautek;Stefan Bruckner;M. Eduard Gröller;Markus Hadwiger",
                "AuthorAffiliation": "KAUST;University of Bergen;Vienna University of Technology, VrVis Research Center;KAUST",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.1992.235219;10.1109/tvcg.2009.174;10.1109/tvcg.2014.2346322;10.1109/visual.2004.95;10.1109/tvcg.2011.185;10.1109/visual.2005.1532788;10.1109/visual.1992.235202;10.1109/tvcg.2008.184",
                "AuthorKeywords": "Domain-specific languages, Volume visualization, Volume visualization framework",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 767,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1220,
                "i": [
                    1220
                ]
            }
        },
        {
            "name": "Tobias Günther",
            "value": 61,
            "numPapers": 71,
            "cluster": "11",
            "visible": 1,
            "index": 549,
            "x": -73.36160311216716,
            "y": -222.63888965949513,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Vector Field Topology of Time-Dependent Flows in a Steady Reference Frame",
                "DOI": "10.1109/tvcg.2019.2934375",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934375",
                "FirstPage": 280,
                "LastPage": 290,
                "PaperType": "J",
                "Abstract": "The topological analysis of unsteady vector fields remains to this day one of the largest challenges in flow visualization. We build up on recent work on vortex extraction to define a time-dependent vector field topology for 2D and 3D flows. In our work, we split the vector field into two components: a vector field in which the flow becomes steady, and the remaining ambient flow that describes the motion of topological elements (such as sinks, sources and saddles) and feature curves (vortex corelines and bifurcation lines). To this end, we expand on recent local optimization approaches by modeling spatially-varying deformations through displacement transformations from continuum mechanics. We compare and discuss the relationships with existing local and integration-based topology extraction methods, showing for instance that separatrices seeded from saddles in the optimal frame align with the integration-based streakline vector field topology. In contrast to the streakline-based approach, our method gives a complete picture of the topology for every time slice, including the steps near the temporal domain boundaries. With our work it now becomes possible to extract topological information even when only few time slices are available. We demonstrate the method in several analytical and numerically-simulated flows and discuss practical aspects, limitations and opportunities for future work.",
                "AuthorNamesDeduped": "Irene Baeza Rojo;Tobias Günther",
                "AuthorNames": "Irene Baeza Rojo;Tobias Günther",
                "AuthorAffiliation": "Computer Graphics Laboratory, ETH Zürich;Computer Graphics Laboratory, ETH Zürich",
                "InternalReferences": "0.1109/visual.1999.809907;10.1109/visual.1991.175773;10.1109/tvcg.2015.2467200;10.1109/tvcg.2018.2864828;10.1109/tvcg.2018.2864839;10.1109/visual.1999.809896;10.1109/visual.1998.745296;10.1109/visual.2005.1532851;10.1109/visual.2004.99;10.1109/visual.2000.885716;10.1109/tvcg.2007.70545;10.1109/tvcg.2007.70557;10.1109/tvcg.2018.2864813",
                "AuthorKeywords": "Scientific visualization,unsteady flow,vector field topology,reference frame optimization",
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 803,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 573,
                "i": [
                    573
                ]
            }
        },
        {
            "name": "Thomas Theußl",
            "value": 59,
            "numPapers": 35,
            "cluster": "11",
            "visible": 1,
            "index": 550,
            "x": 204.67095207494324,
            "y": 114.71617748485293,
            "vy": 0,
            "vx": 0,
            "r": 1.0679332181922856,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Vortex Lens: Interactive Vortex Core Line Extraction using Observed Line Integral Convolution",
                "DOI": "10.1109/tvcg.2023.3326915",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326915",
                "FirstPage": 55,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "This paper describes a novel method for detecting and visualizing vortex structures in unsteady 2D fluid flows. The method is based on an interactive local reference frame estimation that minimizes the observed time derivative of the input flow field $\\mathrm{v}(x, t)$. A locally optimal reference frame $\\mathrm{w}(x, t)$ assists the user in the identification of physically observable vortex structures in Observed Line Integral Convolution (LIC) visualizations. The observed LIC visualizations are interactively computed and displayed in a user-steered vortex lens region, embedded in the context of a conventional LIC visualization outside the lens. The locally optimal reference frame is then used to detect observed critical points, where $\\mathrm{v}=\\mathrm{w}$, which are used to seed vortex core lines. Each vortex core line is computed as a solution of the ordinary differential equation (ODE) $\\dot{w}(t)=\\mathrm{w}(w(t), t)$, with an observed critical point as initial condition $(w(t_{0}), t_{0})$. During integration, we enforce a strict error bound on the difference between the extracted core line and the integration of a path line of the input vector field, i.e., a solution to the ODE $\\dot{v}(t)=\\mathrm{v}(v(t), t)$. We experimentally verify that this error depends on the step size of the core line integration. This ensures that our method extracts Lagrangian vortex core lines that are the simultaneous solution of both ODEs with a numerical error that is controllable by the integration step size. We show the usability of our method in the context of an interactive system using a lens metaphor, and evaluate the results in comparison to state-of-the-art vortex core line extraction methods.",
                "AuthorNamesDeduped": "Peter Rautek;Xingdi Zhang;Bernhard Woschizka;Thomas Theußl;Markus Hadwiger",
                "AuthorNames": "Peter Rautek;Xingdi Zhang;Bernhard Woschizka;Thomas Theußl;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah Univ. of Sci. & Technol. (KAUST), Vis. Comput. Ctr., Thuwal, Saudi Arabia;Core Labs, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia",
                "InternalReferences": "10.1109/tvcg.2019.2934375;10.1109/visual.1991.175773;10.1109/tvcg.2015.2467200;10.1109/tvcg.2018.2864839;10.1109/visual.2003.1250364;10.1109/visual.1999.809896;10.1109/tvcg.2020.3030454;10.1109/visual.1998.745296;10.1109/visual.2004.128;10.1109/visual.1997.663898;10.1109/visual.2005.1532851;10.1109/visual.2003.1250363;10.1109/tvcg.2007.70545;10.1109/tvcg.2021.3115565",
                "AuthorKeywords": "Flow visualization,vortex detection,objectivity,observers,reference frames,Lie algebras,visual lens metaphors",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 201,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 102,
                "i": [
                    102
                ]
            }
        },
        {
            "name": "Xingdi Zhang",
            "value": 5,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 551,
            "x": -228.61501539337138,
            "y": 53.71382258496004,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Vortex Lens: Interactive Vortex Core Line Extraction using Observed Line Integral Convolution",
                "DOI": "10.1109/tvcg.2023.3326915",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326915",
                "FirstPage": 55,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "This paper describes a novel method for detecting and visualizing vortex structures in unsteady 2D fluid flows. The method is based on an interactive local reference frame estimation that minimizes the observed time derivative of the input flow field $\\mathrm{v}(x, t)$. A locally optimal reference frame $\\mathrm{w}(x, t)$ assists the user in the identification of physically observable vortex structures in Observed Line Integral Convolution (LIC) visualizations. The observed LIC visualizations are interactively computed and displayed in a user-steered vortex lens region, embedded in the context of a conventional LIC visualization outside the lens. The locally optimal reference frame is then used to detect observed critical points, where $\\mathrm{v}=\\mathrm{w}$, which are used to seed vortex core lines. Each vortex core line is computed as a solution of the ordinary differential equation (ODE) $\\dot{w}(t)=\\mathrm{w}(w(t), t)$, with an observed critical point as initial condition $(w(t_{0}), t_{0})$. During integration, we enforce a strict error bound on the difference between the extracted core line and the integration of a path line of the input vector field, i.e., a solution to the ODE $\\dot{v}(t)=\\mathrm{v}(v(t), t)$. We experimentally verify that this error depends on the step size of the core line integration. This ensures that our method extracts Lagrangian vortex core lines that are the simultaneous solution of both ODEs with a numerical error that is controllable by the integration step size. We show the usability of our method in the context of an interactive system using a lens metaphor, and evaluate the results in comparison to state-of-the-art vortex core line extraction methods.",
                "AuthorNamesDeduped": "Peter Rautek;Xingdi Zhang;Bernhard Woschizka;Thomas Theußl;Markus Hadwiger",
                "AuthorNames": "Peter Rautek;Xingdi Zhang;Bernhard Woschizka;Thomas Theußl;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah Univ. of Sci. & Technol. (KAUST), Vis. Comput. Ctr., Thuwal, Saudi Arabia;Core Labs, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia",
                "InternalReferences": "10.1109/tvcg.2019.2934375;10.1109/visual.1991.175773;10.1109/tvcg.2015.2467200;10.1109/tvcg.2018.2864839;10.1109/visual.2003.1250364;10.1109/visual.1999.809896;10.1109/tvcg.2020.3030454;10.1109/visual.1998.745296;10.1109/visual.2004.128;10.1109/visual.1997.663898;10.1109/visual.2005.1532851;10.1109/visual.2003.1250363;10.1109/tvcg.2007.70545;10.1109/tvcg.2021.3115565",
                "AuthorKeywords": "Flow visualization,vortex detection,objectivity,observers,reference frames,Lie algebras,visual lens metaphors",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 201,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 102,
                "i": [
                    102
                ]
            }
        },
        {
            "name": "Matej Mlejnek",
            "value": 54,
            "numPapers": 18,
            "cluster": "11",
            "visible": 1,
            "index": 552,
            "x": 132.41031419737476,
            "y": -194.20996033713746,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Time-Dependent Flow seen through Approximate Observer Killing Fields",
                "DOI": "10.1109/tvcg.2018.2864839",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864839",
                "FirstPage": 1257,
                "LastPage": 1266,
                "PaperType": "J",
                "Abstract": "Flow fields are usually visualized relative to a global observer, i.e., a single frame of reference. However, often no global frame can depict all flow features equally well. Likewise, objective criteria for detecting features such as vortices often use either a global reference frame, or compute a separate frame for each point in space and time. We propose the first general framework that enables choosing a smooth trade-off between these two extremes. Using global optimization to minimize specific differential geometric properties, we compute a time-dependent observer velocity field that describes the motion of a continuous field of observers adapted to the input flow. This requires developing the novel notion of an observed time derivative. While individual observers are restricted to rigid motions, overall we compute an approximate Killing field, corresponding to almost-rigid motion. This enables continuous transitions between different observers. Instead of focusing only on flow features, we furthermore develop a novel general notion of visualizing how all observers jointly perceive the input field. This in fact requires introducing the concept of an observation time, with respect to which a visualization is computed. We develop the corresponding notions of observed stream, path, streak, and time lines. For efficiency, these characteristic curves can be computed using standard approaches, by first transforming the input field accordingly. Finally, we prove that the input flow perceived by the observer field is objective. This makes derived flow features, such as vortices, objective as well.",
                "AuthorNamesDeduped": "Markus Hadwiger;Matej Mlejnek;Thomas Theußl;Peter Rautek",
                "AuthorNames": "Markus Hadwiger;Matej Mlejnek;Thomas Theußl;Peter Rautek",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Saudi Arabia",
                "InternalReferences": "0.1109/tvcg.2015.2467200;10.1109/visual.1999.809896;10.1109/visual.1997.663898;10.1109/tvcg.2008.163;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.198;10.1109/tvcg.2007.70557",
                "AuthorKeywords": "Flow visualization,observer frames of reference,Killing vector fields,infinitesimal isometries,Lie derivatives,objectivity",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 602,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 695,
                "i": [
                    695
                ]
            }
        },
        {
            "name": "Ronald Peikert",
            "value": 343,
            "numPapers": 52,
            "cluster": "11",
            "visible": 1,
            "index": 553,
            "x": 33.58204823219375,
            "y": 232.85670708942575,
            "vy": 0,
            "vx": 0,
            "r": 1.3949337938975246,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "The \"Parallel Vectors\" operator-a vector field visualization primitive",
                "DOI": "10.1109/visual.1999.809896",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809896",
                "FirstPage": 263,
                "LastPage": 532,
                "PaperType": "C",
                "Abstract": "We propose an elementary operation on a pair of vector fields as a building block for defining and computing global line-type features of vector or scalar fields. While usual feature definitions often are procedural and therefore implicit, our operator allows precise mathematical definitions. It can serve as a basis for comparing feature definitions and for reuse of algorithms and implementations. Applications focus on vortex core methods.",
                "AuthorNamesDeduped": "Ronald Peikert;Martin Roth",
                "AuthorNames": "R. Peikert;M. Roth",
                "AuthorAffiliation": "Dept. of Comput. Sci., ETH Zürich, Switzerland;Dept. of Comput. Sci., ETH Zürich, Switzerland",
                "InternalReferences": "0.1109/visual.1998.745290;10.1109/visual.1996.568137;10.1109/visual.1998.745296;10.1109/visual.1995.480795;10.1109/visual.1994.346327;10.1109/visual.1997.663894;10.1109/visual.1998.745297;10.1109/visual.1996.567807",
                "AuthorKeywords": null,
                "AminerCitationCount": 271,
                "CitationCountCrossRef": 117,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 314,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3052,
                "i": [
                    3052
                ]
            }
        },
        {
            "name": "Martin Roth",
            "value": 199,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 554,
            "x": -182.21918642377878,
            "y": -149.15149378754526,
            "vy": 0,
            "vx": 0,
            "r": 1.2291306850892343,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "The \"Parallel Vectors\" operator-a vector field visualization primitive",
                "DOI": "10.1109/visual.1999.809896",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809896",
                "FirstPage": 263,
                "LastPage": 532,
                "PaperType": "C",
                "Abstract": "We propose an elementary operation on a pair of vector fields as a building block for defining and computing global line-type features of vector or scalar fields. While usual feature definitions often are procedural and therefore implicit, our operator allows precise mathematical definitions. It can serve as a basis for comparing feature definitions and for reuse of algorithms and implementations. Applications focus on vortex core methods.",
                "AuthorNamesDeduped": "Ronald Peikert;Martin Roth",
                "AuthorNames": "R. Peikert;M. Roth",
                "AuthorAffiliation": "Dept. of Comput. Sci., ETH Zürich, Switzerland;Dept. of Comput. Sci., ETH Zürich, Switzerland",
                "InternalReferences": "0.1109/visual.1998.745290;10.1109/visual.1996.568137;10.1109/visual.1998.745296;10.1109/visual.1995.480795;10.1109/visual.1994.346327;10.1109/visual.1997.663894;10.1109/visual.1998.745297;10.1109/visual.1996.567807",
                "AuthorKeywords": null,
                "AminerCitationCount": 271,
                "CitationCountCrossRef": 117,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 314,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3052,
                "i": [
                    3052
                ]
            }
        },
        {
            "name": "Jan Sahner",
            "value": 99,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 555,
            "x": 235.325052626086,
            "y": -13.119436212347358,
            "vy": 0,
            "vx": 0,
            "r": 1.1139896373056994,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "Extraction of parallel vector surfaces in 3D time-dependent fields and application to vortex core line tracking",
                "DOI": "10.1109/visual.2005.1532851",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532851",
                "FirstPage": 631,
                "LastPage": 638,
                "PaperType": "C",
                "Abstract": "We introduce an approach to tracking vortex core lines in time-dependent 3D flow fields which are defined by the parallel vectors approach. They build surface structures in the 4D space-time domain. To extract them, we introduce two 4D vector fields which act as feature flow fields, i.e., their integration gives the vortex core structures. As part of this approach, we extract and classify local bifurcations of vortex core lines in space-time. Based on a 4D stream surface integration, we provide an algorithm to extract the complete vortex core structure. We apply our technique to a number of test data sets.",
                "AuthorNamesDeduped": "Holger Theisel;Jan Sahner;Tino Weinkauf;Hans-Christian Hege;Hans-Peter Seidel",
                "AuthorNames": "H. Theisel;J. Sahner;T. Weinkauf;H.-C. Hege;H.-P. Seidel",
                "AuthorAffiliation": "MPI Saarbrücken;ZIB, Berlin, Germany;ZIB, Berlin, Germany;ZIB, Berlin, Germany;MPI Saarbrücken, Germany",
                "InternalReferences": "0.1109/visual.2004.99;10.1109/visual.1994.346327;10.1109/visual.1999.809896;10.1109/visual.1992.235211;10.1109/visual.1993.398875;10.1109/visual.2001.964506;10.1109/visual.1998.745290;10.1109/visual.1998.745296",
                "AuthorKeywords": "flow visualization, vortex core lines, bifurcations",
                "AminerCitationCount": 92,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 310,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2382,
                "i": [
                    2382
                ]
            }
        },
        {
            "name": "Tino Weinkauf",
            "value": 322,
            "numPapers": 82,
            "cluster": "11",
            "visible": 1,
            "index": 556,
            "x": -164.80745977781135,
            "y": 168.78537022379956,
            "vy": 0,
            "vx": 0,
            "r": 1.370754173862982,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Saddle connectors - an approach to visualizing the topological skeleton of complex 3D vector fields",
                "DOI": "10.1109/visual.2003.1250376",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250376",
                "FirstPage": 225,
                "LastPage": 232,
                "PaperType": "C",
                "Abstract": "One of the reasons that topological methods have a limited popularity for the visualization of complex 3D flow fields is the fact that such topological structures contain a number of separating stream surfaces. Since these stream surfaces tend to hide each other as well as other topological features, for complex 3D topologies the visualizations become cluttered and hardly interpretable. This paper proposes to use particular stream lines called saddle connectors instead of separating stream surfaces and to depict single surfaces only on user demand. We discuss properties and computational issues of saddle connectors and apply these methods to complex flow data. We show that the use of saddle connectors makes topological skeletons available as a valuable visualization tool even for topologically complex 3D flow data.",
                "AuthorNamesDeduped": "Holger Theisel;Tino Weinkauf;Hans-Christian Hege;Hans-Peter Seidel",
                "AuthorNames": "H. Theisel;T. Weinkauf;H.-C. Hege;H.-P. Seidel",
                "AuthorAffiliation": "MPI Informatik Saarbrücken, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;MPI Informatik Saarbrücken, Germany",
                "InternalReferences": "0.1109/visual.2000.885714;10.1109/visual.1999.809874;10.1109/visual.1998.745284;10.1109/visual.1998.745291;10.1109/visual.1999.809907;10.1109/visual.1992.235211;10.1109/visual.1993.398875;10.1109/visual.2001.964506;10.1109/visual.2000.885716;10.1109/visual.2001.964507;10.1109/visual.1991.175773",
                "AuthorKeywords": "3D flow visualization, vector field topology, critical points, separatrices",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 516,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2660,
                "i": [
                    2660
                ]
            }
        },
        {
            "name": "David Bauer",
            "value": 3,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 557,
            "x": 7.517757725603968,
            "y": -235.99466798802703,
            "vy": 0,
            "vx": 0,
            "r": 1.003454231433506,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Photon Field Networks for Dynamic Real-Time Volumetric Global Illumination",
                "DOI": "10.1109/tvcg.2023.3327107",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327107",
                "FirstPage": 975,
                "LastPage": 985,
                "PaperType": "J",
                "Abstract": "Volume data is commonly found in many scientific disciplines, like medicine, physics, and biology. Experts rely on robust scientific visualization techniques to extract valuable insights from the data. Recent years have shown path tracing to be the preferred approach for volumetric rendering, given its high levels of realism. However, real-time volumetric path tracing often suffers from stochastic noise and long convergence times, limiting interactive exploration. In this paper, we present a novel method to enable real-time global illumination for volume data visualization. We develop Photon Field Networks—a phase-function-aware, multi-light neural representation of indirect volumetric global illumination. The fields are trained on multi-phase photon caches that we compute a priori. Training can be done within seconds, after which the fields can be used in various rendering tasks. To showcase their potential, we develop a custom neural path tracer, with which our photon fields achieve interactive framerates even on large datasets. We conduct in-depth evaluations of the method's performance, including visual quality, stochastic noise, inference and rendering speeds, and accuracy regarding illumination and phase function awareness. Results are compared to ray marching, path tracing and photon mapping. Our findings show that Photon Field Networks can faithfully represent indirect global illumination within the boundaries of the trained phase spectrum while exhibiting less stochastic noise and rendering at a significantly faster rate than traditional methods.",
                "AuthorNamesDeduped": "David Bauer;Qi Wu 0015;Kwan-Liu Ma",
                "AuthorNames": "David Bauer;Qi Wu;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis, USA;University of California, Davis, USA;University of California, Davis, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209498;10.1109/tvcg.2020.3030344;10.1109/tvcg.2012.232;10.1109/tvcg.2016.2598430;10.1109/visual.2002.1183764;10.1109/tvcg.2011.211",
                "AuthorKeywords": "Volume data,volume rendering,volume visualization,deep learning,global illumination,neural rendering,path tracing",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 201,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 103,
                "i": [
                    103
                ]
            }
        },
        {
            "name": "Timo Ropinski",
            "value": 156,
            "numPapers": 60,
            "cluster": "6",
            "visible": 1,
            "index": 558,
            "x": 154.0066841924854,
            "y": 179.2538457719557,
            "vy": 0,
            "vx": 0,
            "r": 1.1796200345423145,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Decision Graph Embedding for High-Resolution Manometry Diagnosis",
                "DOI": "10.1109/tvcg.2017.2744299",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744299",
                "FirstPage": 873,
                "LastPage": 882,
                "PaperType": "J",
                "Abstract": "High-resolution manometry is an imaging modality which enables the categorization of esophageal motility disorders. Spatio-temporal pressure data along the esophagus is acquired using a tubular device and multiple test swallows are performed by the patient. Current approaches visualize these swallows as individual instances, despite the fact that aggregated metrics are relevant in the diagnostic process. Based on the current Chicago Classification, which serves as the gold standard in this area, we introduce a visualization supporting an efficient and correct diagnosis. To reach this goal, we propose a novel decision graph representing the Chicago Classification with workflow optimization in mind. Based on this graph, we are further able to prioritize the different metrics used during diagnosis and can exploit this prioritization in the actual data visualization. Thus, different disorders and their related parameters are directly represented and intuitively influence the appearance of our visualization. Within this paper, we introduce our novel visualization, justify the design decisions, and provide the results of a user study we performed with medical students as well as a domain expert. On top of the presented visualization, we further discuss how to derive a visual signature for individual patients that allows us for the first time to perform an intuitive comparison between subjects, in the form of small multiples.",
                "AuthorNamesDeduped": "Julian Kreiser;Alexander Hann;Eugen Zizer;Timo Ropinski",
                "AuthorNames": "Julian Kreiser;Alexander Hann;Eugen Zizer;Timo Ropinski",
                "AuthorAffiliation": "Visual Computing Group, Ulm University;Department of Internal Medicine I, Ulm University;Department of Internal Medicine I, Ulm University;Visual Computing Group, Ulm University",
                "InternalReferences": "0.1109/infvis.2001.963292;10.1109/tvcg.2013.122;10.1109/infvis.2003.1249006",
                "AuthorKeywords": "Small multiples,manometry,chicago classification",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 429,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 833,
                "i": [
                    833
                ]
            }
        },
        {
            "name": "Qi Wu 0015",
            "value": 9,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 559,
            "x": -234.85394124133725,
            "y": -28.171373473980655,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks",
                "DOI": "10.1109/tvcg.2022.3209498",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209498",
                "FirstPage": 515,
                "LastPage": 525,
                "PaperType": "J",
                "Abstract": "Volume data is found in many important scientific and engineering applications. Rendering this data for visualization at high quality and interactive rates for demanding applications such as virtual reality is still not easily achievable even using professional-grade hardware. We introduce FoVolNet—a method to significantly increase the performance of volume data visualization. We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full-frame using a deep neural network. Foveated rendering is a technique that prioritizes rendering computations around the user's focal point. This approach leverages properties of the human visual system, thereby saving computational resources when rendering data in the periphery of the user's field of vision. Our reconstruction network combines direct and kernel prediction methods to produce fast, stable, and perceptually convincing output. With a slim design and the use of quantization, our method outperforms state-of-the-art neural reconstruction techniques in both end-to-end frame times and visual quality. We conduct extensive evaluations of the system's rendering performance, inference speed, and perceptual properties, and we provide comparisons to competing neural image reconstruction techniques. Our test results show that FoVolNet consistently achieves significant time saving over conventional rendering while preserving perceptual quality.",
                "AuthorNamesDeduped": "David Bauer;Qi Wu 0015;Kwan-Liu Ma",
                "AuthorNames": "David Bauer;Qi Wu;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California at Davis, USA;University of California at Davis, USA;University of California at Davis, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030344;10.1109/tvcg.2012.240;10.1109/visual.2002.1183764;10.1109/tvcg.2011.211;10.1109/tvcg.2016.2599041",
                "AuthorKeywords": "Volume data,volume visualization,deep learning,foveated rendering,neural reconstruction",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 922,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 161,
                "i": [
                    161
                ]
            }
        },
        {
            "name": "Joe Kniss",
            "value": 243,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 560,
            "x": 192.37516345014942,
            "y": -137.9920160282047,
            "vy": 0,
            "vx": 0,
            "r": 1.2797927461139897,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Gaussian transfer functions for multi-field volume visualization",
                "DOI": "10.1109/visual.2003.1250412",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250412",
                "FirstPage": 497,
                "LastPage": 504,
                "PaperType": "C",
                "Abstract": "Volume rendering is a flexible technique for visualizing dense 3D volumetric datasets. A central element of volume rendering is the conversion between data values and observable quantities such as color and opacity. This process is usually realized through the use of transfer functions that are precomputed and stored in lookup tables. For multidimensional transfer functions applied to multivariate data, these lookup tables become prohibitively large. We propose the direct evaluation of a particular type of transfer functions based on a sum of Gaussians. Because of their simple form (in terms of number of parameters), these functions and their analytic integrals along line segments can be evaluated efficiently on current graphics hardware, obviating the need for precomputed lookup tables. We have adopted these transfer functions because they are well suited for classification based on a unique combination of multiple data values that localize features in the transfer function domain. We apply this technique to the visualization of several multivariate datasets (CT, cryosection) that are difficult to classify and render accurately at interactive rates using traditional approaches.",
                "AuthorNamesDeduped": "Joe Kniss;Simon Premoze;Milan Ikits;Aaron E. Lefohn;Charles D. Hansen;Emil Praun",
                "AuthorNames": "J. Kniss;S. premoze;M. Ikits;A. Lefohn;C. Hansen;E. Praun",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;School of Computing, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;School of Computing, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1999.809889;10.1109/visual.2000.885683;10.1109/visual.2001.964521",
                "AuthorKeywords": "Volume Rendering, Transfer Functions, Multi-field visualization",
                "AminerCitationCount": 167,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 558,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2673,
                "i": [
                    2673
                ]
            }
        },
        {
            "name": "Nate Morrical",
            "value": 40,
            "numPapers": 34,
            "cluster": "11",
            "visible": 1,
            "index": 561,
            "x": -48.682560703893486,
            "y": 231.90517088523862,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Stefan Zellmann",
            "value": 30,
            "numPapers": 18,
            "cluster": "11",
            "visible": 1,
            "index": 562,
            "x": -120.86016600221261,
            "y": -204.0657253776773,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Attribute-Aware RBFs: Interactive Visualization of Time Series Particle Volumes Using RT Core Range Queries",
                "DOI": "10.1109/tvcg.2023.3327366",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327366",
                "FirstPage": 1150,
                "LastPage": 1160,
                "PaperType": "J",
                "Abstract": "Smoothed-particle hydrodynamics (SPH) is a mesh-free method used to simulate volumetric media in fluids, astrophysics, and solid mechanics. Visualizing these simulations is problematic because these datasets often contain millions, if not billions of particles carrying physical attributes and moving over time. Radial basis functions (RBFs) are used to model particles, and overlapping particles are interpolated to reconstruct a high-quality volumetric field; however, this interpolation process is expensive and makes interactive visualization difficult. Existing RBF interpolation schemes do not account for color-mapped attributes and are instead constrained to visualizing just the density field. To address these challenges, we exploit ray tracing cores in modern GPU architectures to accelerate scalar field reconstruction. We use a novel RBF interpolation scheme to integrate per-particle colors and densities, and leverage GPU-parallel tree construction and refitting to quickly update the tree as the simulation animates over time or when the user manipulates particle radii. We also propose a Hilbert reordering scheme to cluster particles together at the leaves of the tree to reduce tree memory consumption. Finally, we reduce the noise of volumetric shadows by adopting a spatially temporal blue noise sampling scheme. Our method can provide a more detailed and interactive view of these large, volumetric, time-series particle datasets than traditional methods, leading to new insights into these physics simulations.",
                "AuthorNamesDeduped": "Nate Morrical;Stefan Zellmann;Alper Sahistan;Patrick C. Shriwise;Valerio Pascucci",
                "AuthorNames": "Nate Morrical;Stefan Zellmann;Alper Sahistan;Patrick Shriwise;Valerio Pascucci",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;University of Cologne, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Argonne National Laboratory, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "10.1109/tvcg.2010.148;10.1109/tvcg.2009.142;10.1109/tvcg.2011.161;10.1109/tvcg.2023.3327366;10.1109/tvcg.2022.3209418;10.1109/tvcg.2007.70526;10.1109/tvcg.2021.3114869;10.1109/tvcg.2020.3030470",
                "AuthorKeywords": "Ray Tracing,Volume Rendering,Particle Volumes,Radial Basis Functions,Scientific Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 200,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 104,
                "i": [
                    104
                ]
            }
        },
        {
            "name": "Alper Sahistan",
            "value": 10,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 563,
            "x": 227.1645971459492,
            "y": 68.89300257296551,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Attribute-Aware RBFs: Interactive Visualization of Time Series Particle Volumes Using RT Core Range Queries",
                "DOI": "10.1109/tvcg.2023.3327366",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327366",
                "FirstPage": 1150,
                "LastPage": 1160,
                "PaperType": "J",
                "Abstract": "Smoothed-particle hydrodynamics (SPH) is a mesh-free method used to simulate volumetric media in fluids, astrophysics, and solid mechanics. Visualizing these simulations is problematic because these datasets often contain millions, if not billions of particles carrying physical attributes and moving over time. Radial basis functions (RBFs) are used to model particles, and overlapping particles are interpolated to reconstruct a high-quality volumetric field; however, this interpolation process is expensive and makes interactive visualization difficult. Existing RBF interpolation schemes do not account for color-mapped attributes and are instead constrained to visualizing just the density field. To address these challenges, we exploit ray tracing cores in modern GPU architectures to accelerate scalar field reconstruction. We use a novel RBF interpolation scheme to integrate per-particle colors and densities, and leverage GPU-parallel tree construction and refitting to quickly update the tree as the simulation animates over time or when the user manipulates particle radii. We also propose a Hilbert reordering scheme to cluster particles together at the leaves of the tree to reduce tree memory consumption. Finally, we reduce the noise of volumetric shadows by adopting a spatially temporal blue noise sampling scheme. Our method can provide a more detailed and interactive view of these large, volumetric, time-series particle datasets than traditional methods, leading to new insights into these physics simulations.",
                "AuthorNamesDeduped": "Nate Morrical;Stefan Zellmann;Alper Sahistan;Patrick C. Shriwise;Valerio Pascucci",
                "AuthorNames": "Nate Morrical;Stefan Zellmann;Alper Sahistan;Patrick Shriwise;Valerio Pascucci",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;University of Cologne, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Argonne National Laboratory, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "10.1109/tvcg.2010.148;10.1109/tvcg.2009.142;10.1109/tvcg.2011.161;10.1109/tvcg.2023.3327366;10.1109/tvcg.2022.3209418;10.1109/tvcg.2007.70526;10.1109/tvcg.2021.3114869;10.1109/tvcg.2020.3030470",
                "AuthorKeywords": "Ray Tracing,Volume Rendering,Particle Volumes,Radial Basis Functions,Scientific Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 200,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 104,
                "i": [
                    104
                ]
            }
        },
        {
            "name": "Patrick C. Shriwise",
            "value": 5,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 564,
            "x": -214.2304952588927,
            "y": 102.73896486304291,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Attribute-Aware RBFs: Interactive Visualization of Time Series Particle Volumes Using RT Core Range Queries",
                "DOI": "10.1109/tvcg.2023.3327366",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327366",
                "FirstPage": 1150,
                "LastPage": 1160,
                "PaperType": "J",
                "Abstract": "Smoothed-particle hydrodynamics (SPH) is a mesh-free method used to simulate volumetric media in fluids, astrophysics, and solid mechanics. Visualizing these simulations is problematic because these datasets often contain millions, if not billions of particles carrying physical attributes and moving over time. Radial basis functions (RBFs) are used to model particles, and overlapping particles are interpolated to reconstruct a high-quality volumetric field; however, this interpolation process is expensive and makes interactive visualization difficult. Existing RBF interpolation schemes do not account for color-mapped attributes and are instead constrained to visualizing just the density field. To address these challenges, we exploit ray tracing cores in modern GPU architectures to accelerate scalar field reconstruction. We use a novel RBF interpolation scheme to integrate per-particle colors and densities, and leverage GPU-parallel tree construction and refitting to quickly update the tree as the simulation animates over time or when the user manipulates particle radii. We also propose a Hilbert reordering scheme to cluster particles together at the leaves of the tree to reduce tree memory consumption. Finally, we reduce the noise of volumetric shadows by adopting a spatially temporal blue noise sampling scheme. Our method can provide a more detailed and interactive view of these large, volumetric, time-series particle datasets than traditional methods, leading to new insights into these physics simulations.",
                "AuthorNamesDeduped": "Nate Morrical;Stefan Zellmann;Alper Sahistan;Patrick C. Shriwise;Valerio Pascucci",
                "AuthorNames": "Nate Morrical;Stefan Zellmann;Alper Sahistan;Patrick Shriwise;Valerio Pascucci",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;University of Cologne, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Argonne National Laboratory, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "10.1109/tvcg.2010.148;10.1109/tvcg.2009.142;10.1109/tvcg.2011.161;10.1109/tvcg.2023.3327366;10.1109/tvcg.2022.3209418;10.1109/tvcg.2007.70526;10.1109/tvcg.2021.3114869;10.1109/tvcg.2020.3030470",
                "AuthorKeywords": "Ray Tracing,Volume Rendering,Particle Volumes,Radial Basis Functions,Scientific Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 200,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 104,
                "i": [
                    104
                ]
            }
        },
        {
            "name": "Ingo Wald",
            "value": 92,
            "numPapers": 30,
            "cluster": "11",
            "visible": 1,
            "index": 565,
            "x": 88.64613956210131,
            "y": -220.6623256034805,
            "vy": 0,
            "vx": 0,
            "r": 1.105929763960852,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Quick Clusters: A GPU-Parallel Partitioning for Efficient Path Tracing of Unstructured Volumetric Grids",
                "DOI": "10.1109/tvcg.2022.3209418",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209418",
                "FirstPage": 537,
                "LastPage": 547,
                "PaperType": "J",
                "Abstract": "We propose a simple yet effective method for clustering finite elements to improve preprocessing times and rendering performance of unstructured volumetric grids without requiring auxiliary connectivity data. Rather than building bounding volume hierarchies (BVHs) over individual elements, we sort elements along with a Hilbert curve and aggregate neighboring elements together, improving BVH memory consumption by over an order of magnitude. Then to further reduce memory consumption, we cluster the mesh on the fly into sub-meshes with smaller indices using a series of efficient parallel mesh re-indexing operations. These clusters are then passed to a highly optimized ray tracing API for point containment queries and ray-cluster intersection testing. Each cluster is assigned a maximum extinction value for adaptive sampling, which we rasterize into non-overlapping view-aligned bins allocated along the ray. These maximum extinction bins are then used to guide the placement of samples along the ray during visualization, reducing the number of samples required by multiple orders of magnitude (depending on the dataset), thereby improving overall visualization interactivity. Using our approach, we improve rendering performance over a competitive baseline on the NASA Mars Lander dataset from 6× (1 frame per second (fps) and 1.0 M rays per second (rps) up to now 6 fps and 12.4 M rps, now including volumetric shadows) while simultaneously reducing memory consumption by 3×(33 GB down to 11 GB) and avoiding any offline preprocessing steps, enabling high-quality interactive visualization on consumer graphics cards. Then by utilizing the full 48 GB of an RTX 8000, we improve the performance of Lander by 17 × (1 fps up to 17 fps, 1.0 M rps up to 35.6 M rps).",
                "AuthorNamesDeduped": "Nate Morrical;Alper Sahistan;Ugur Güdükbay;Ingo Wald;Valerio Pascucci",
                "AuthorNames": "Nate Morrical;Alper Sahistan;Uğur Güdükbay;Ingo Wald;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah, USA;Bilkent University, Turkey;Bilkent University, Turkey;NVIDIA, USA;SCI Institute, University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346333;10.1109/tvcg.2011.252;10.1109/tvcg.2011.216;10.1109/tvcg.2021.3114869;10.1109/tvcg.2020.3030470",
                "AuthorKeywords": "Ray Tracing,Path Tracing,Volume Rendering,Scientific Visualization,Delta Tracking",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 711,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 224,
                "i": [
                    224
                ]
            }
        },
        {
            "name": "Will Usher 0001",
            "value": 63,
            "numPapers": 37,
            "cluster": "11",
            "visible": 1,
            "index": 566,
            "x": 83.76431887234412,
            "y": 222.7858588058323,
            "vy": 0,
            "vx": 0,
            "r": 1.072538860103627,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Xiaoli Qiao",
            "value": 47,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 567,
            "x": -212.44214337465607,
            "y": -107.78838396776382,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "EVM: Incorporating Model Checking into Exploratory Visual Analysis",
                "DOI": "10.1109/tvcg.2023.3326516",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326516",
                "FirstPage": 208,
                "LastPage": 218,
                "PaperType": "J",
                "Abstract": "Visual analytics (VA) tools support data exploration by helping analysts quickly and iteratively generate views of data which reveal interesting patterns. However, these tools seldom enable explicit checks of the resulting interpretations of data—e.g., whether patterns can be accounted for by a model that implies a particular structure in the relationships between variables. We present EVM, a data exploration tool that enables users to express and check provisional interpretations of data in the form of statistical models. EVM integrates support for visualization-based model checks by rendering distributions of model predictions alongside user-generated views of data. In a user study with data scientists practicing in the private and public sector, we evaluate how model checks influence analysts' thinking during data exploration. Our analysis characterizes how participants use model checks to scrutinize expectations about data generating process and surfaces further opportunities to scaffold model exploration in VA tools.",
                "AuthorNamesDeduped": "Alex Kale;Ziyang Guo;Xiaoli Qiao;Jeffrey Heer;Jessica Hullman",
                "AuthorNames": "Alex Kale;Ziyang Guo;Xiao Li Qiao;Jeffrey Heer;Jessica Hullman",
                "AuthorAffiliation": "University of Chicago, USA;Northwestern University, USA;Northwestern University, USA;University of Washington, USA;Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2013.119;10.1109/tvcg.2018.2864909;10.1109/tvcg.2021.3114824;10.1109/tvcg.2020.3028984;10.1109/tvcg.2022.3209460;10.1109/tvcg.2015.2467091;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2010.161;10.1109/tvcg.2020.3028957;10.1109/tvcg.2021.3114679",
                "AuthorKeywords": "Visualization,model checks,exploratory analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 199,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 105,
                "i": [
                    105
                ]
            }
        },
        {
            "name": "Zhen Wen",
            "value": 103,
            "numPapers": 50,
            "cluster": "1",
            "visible": 1,
            "index": 568,
            "x": 229.66030898289887,
            "y": -64.07918911689994,
            "vy": 0,
            "vx": 0,
            "r": 1.1185952792170408,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Effects of View Layout on Situated Analytics for Multiple-View Representations in Immersive Visualization",
                "DOI": "10.1109/tvcg.2022.3209475",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209475",
                "FirstPage": 440,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "Multiple-view (MV) representations enabling multi-perspective exploration of large and complex data are often employed on 2D displays. The technique also shows great potential in addressing complex analytic tasks in immersive visualization. However, although useful, the design space of MV representations in immersive visualization lacks in deep exploration. In this paper, we propose a new perspective to this line of research, by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics in perspectives of situatedness regarding spatial relationship between visual representations and physical referents, and analytics regarding cross-view data analysis including filtering, refocusing, and connecting tasks. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a desired layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method to fulfill the requirements. The method mainly includes a cylindrical paradigm for egocentric reference frame, and a force-directed method for proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study that compares layouts by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms high usability of the prototype system.",
                "AuthorNamesDeduped": "Zhen Wen;Wei Zeng 0004;Luoxuan Weng;Yihan Liu;Mingliang Xu;Wei Chen 0001",
                "AuthorNames": "Zhen Wen;Wei Zeng;Luoxuan Weng;Yihan Liu;Mingliang Xu;Wei Chen",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;The Hong Kong University of Science and Technology (Guangzhou), China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Zhengzhou University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2021.3114835;10.1109/tvcg.2020.3030338;10.1109/tvcg.2021.3114806;10.1109/tvcg.2019.2934332;10.1109/tvcg.2021.3114861;10.1109/vast.2015.7347628;10.1109/tvcg.2007.70521;10.1109/tvcg.2018.2865191;10.1109/tvcg.2020.3030419;10.1109/tvcg.2017.2744198;10.1109/tvcg.2021.3114801;10.1109/tvcg.2019.2934282;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Situated analytics,multiple-view representations,view layout,immersive visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1320,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 146,
                "i": [
                    146
                ]
            }
        },
        {
            "name": "Mohammad Ghoniem",
            "value": 186,
            "numPapers": 8,
            "cluster": "3",
            "visible": 1,
            "index": 569,
            "x": -126.17031561586971,
            "y": 202.56122890966037,
            "vy": 0,
            "vx": 0,
            "r": 1.2141623488773747,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "A Comparison of the Readability of Graphs Using Node-Link and Matrix-Based Representations",
                "DOI": "10.1109/infvis.2004.1",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.1",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "In this paper, we describe a taxonomy of generic graph related tasks and an evaluation aiming at assessing the readability of two representations of graphs: matrix-based representations and node-link diagrams. This evaluation bears on seven generic tasks and leads to important recommendations with regard to the representation of graphs according to their size and density. For instance, we show that when graphs are bigger than twenty vertices, the matrix-based visualization performs better than node-link diagrams on most tasks. Only path finding is consistently in favor of node-link diagrams throughout the evaluation",
                "AuthorNamesDeduped": "Mohammad Ghoniem;Jean-Daniel Fekete;Philippe Castagliola",
                "AuthorNames": "M. Ghoniem;J.-D. Fekete;P. Castagliola",
                "AuthorAffiliation": "Ecole des Mines de Nantes, Nantes, France;INRIA Futurs/LRI, Université Paris Sud, Orsay, France; IRCCyN, Ecole des Mines de Nantes, Nantes, France",
                "InternalReferences": "0.1109/infvis.2003.1249030",
                "AuthorKeywords": "Visualization of graphs, adjacency matrices, node-link representation, readability, evaluation",
                "AminerCitationCount": 532,
                "CitationCountCrossRef": 166,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 2200,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2450,
                "i": [
                    2450
                ]
            }
        },
        {
            "name": "Philippe Castagliola",
            "value": 121,
            "numPapers": 0,
            "cluster": "3",
            "visible": 1,
            "index": 570,
            "x": -43.832512773244105,
            "y": -234.7950400327558,
            "vy": 0,
            "vx": 0,
            "r": 1.1393206678180772,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "A Comparison of the Readability of Graphs Using Node-Link and Matrix-Based Representations",
                "DOI": "10.1109/infvis.2004.1",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.1",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "In this paper, we describe a taxonomy of generic graph related tasks and an evaluation aiming at assessing the readability of two representations of graphs: matrix-based representations and node-link diagrams. This evaluation bears on seven generic tasks and leads to important recommendations with regard to the representation of graphs according to their size and density. For instance, we show that when graphs are bigger than twenty vertices, the matrix-based visualization performs better than node-link diagrams on most tasks. Only path finding is consistently in favor of node-link diagrams throughout the evaluation",
                "AuthorNamesDeduped": "Mohammad Ghoniem;Jean-Daniel Fekete;Philippe Castagliola",
                "AuthorNames": "M. Ghoniem;J.-D. Fekete;P. Castagliola",
                "AuthorAffiliation": "Ecole des Mines de Nantes, Nantes, France;INRIA Futurs/LRI, Université Paris Sud, Orsay, France; IRCCyN, Ecole des Mines de Nantes, Nantes, France",
                "InternalReferences": "0.1109/infvis.2003.1249030",
                "AuthorKeywords": "Visualization of graphs, adjacency matrices, node-link representation, readability, evaluation",
                "AminerCitationCount": 532,
                "CitationCountCrossRef": 166,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 2200,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2450,
                "i": [
                    2450
                ]
            }
        },
        {
            "name": "Danny Holten",
            "value": 366,
            "numPapers": 13,
            "cluster": "4",
            "visible": 1,
            "index": 571,
            "x": 191.0897572869458,
            "y": 143.64784947926003,
            "vy": 0,
            "vx": 0,
            "r": 1.4214162348877375,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "Hierarchical Edge Bundles: Visualization of Adjacency Relations in Hierarchical Data",
                "DOI": "10.1109/tvcg.2006.147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.147",
                "FirstPage": 741,
                "LastPage": 748,
                "PaperType": "J",
                "Abstract": "A compound graph is a frequently encountered type of data set. Relations are given between items, and a hierarchy is defined on the items as well. We present a new method for visualizing such compound graphs. Our approach is based on visually bundling the adjacency edges, i.e., non-hierarchical edges, together. We realize this as follows. We assume that the hierarchy is shown via a standard tree visualization method. Next, we bend each adjacency edge, modeled as a B-spline curve, toward the polyline defined by the path via the inclusion edges from one node to another. This hierarchical bundling reduces visual clutter and also visualizes implicit adjacency edges between parent nodes that are the result of explicit adjacency edges between their respective child nodes. Furthermore, hierarchical edge bundling is a generic method which can be used in conjunction with existing tree visualization techniques. We illustrate our technique by providing example visualizations and discuss the results based on an informal evaluation provided by potential users of such visualizations",
                "AuthorNamesDeduped": "Danny Holten",
                "AuthorNames": "Danny Holten",
                "AuthorAffiliation": "Technische Universiteit Eindhoven, Netherlands",
                "InternalReferences": "0.1109/infvis.2004.1;10.1109/infvis.2003.1249008;10.1109/infvis.2005.1532150;10.1109/infvis.2003.1249030;10.1109/infvis.2005.1532129;10.1109/infvis.1997.636718;10.1109/infvis.2002.1173152",
                "AuthorKeywords": "Network visualization, edge bundling, edge aggregation, edge concentration, curves, graph visualization, tree visualization, node-link diagrams, hierarchies, treemaps",
                "AminerCitationCount": 1395,
                "CitationCountCrossRef": 661,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 7591,
                "Award": "TT;BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2216,
                "i": [
                    2216
                ]
            }
        },
        {
            "name": "Zhiguang Zhou",
            "value": 239,
            "numPapers": 63,
            "cluster": "1",
            "visible": 1,
            "index": 572,
            "x": -238.1444453760606,
            "y": 23.17807447844831,
            "vy": 0,
            "vx": 0,
            "r": 1.2751871042026481,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Preserving Minority Structures in Graph Sampling",
                "DOI": "10.1109/tvcg.2020.3030428",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030428",
                "FirstPage": 1698,
                "LastPage": 1708,
                "PaperType": "J",
                "Abstract": "Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but are very important in graph analysis. In this work, we initially conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate the performance of existing graph sampling algorithms regarding minority structure preservation. Results confirm our assumption and suggest key points for designing a new graph sampling approach named mino-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance the preservation between minority and majority structures and suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorNames": "Ying Zhao;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Software, Tsinghua University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Information, Zhejiang University of Finance and Economics China",
                "InternalReferences": "0.1109/tvcg.2018.2865139;10.1109/tvcg.2008.130;10.1109/tvcg.2011.233;10.1109/tvcg.2013.223;10.1109/tvcg.2016.2598831;10.1109/visual.2005.1532819;10.1109/tvcg.2019.2934208;10.1109/tvcg.2016.2598867;10.1109/tvcg.2017.2744098;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Graph sampling,graph visualization,node-link diagram",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 57,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1182,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 459,
                "i": [
                    459
                ]
            }
        },
        {
            "name": "Linhao Meng",
            "value": 116,
            "numPapers": 30,
            "cluster": "1",
            "visible": 1,
            "index": 573,
            "x": 160.0833660656704,
            "y": -178.11040370816235,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Class-Constrained t-SNE: Combining Data Features and Class Probabilities",
                "DOI": "10.1109/tvcg.2023.3326600",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326600",
                "FirstPage": 164,
                "LastPage": 174,
                "PaperType": "J",
                "Abstract": "Data features and class probabilities are two main perspectives when, e.g., evaluating model results and identifying problematic items. Class probabilities represent the likelihood that each instance belongs to a particular class, which can be produced by probabilistic classifiers or even human labeling with uncertainty. Since both perspectives are multi-dimensional data, dimensionality reduction (DR) techniques are commonly used to extract informative characteristics from them. However, existing methods either focus solely on the data feature perspective or rely on class probability estimates to guide the DR process. In contrast to previous work where separate views are linked to conduct the analysis, we propose a novel approach, class-constrained t-SNE, that combines data features and class probabilities in the same DR result. Specifically, we combine them by balancing two corresponding components in a cost function to optimize the positions of data points and iconic representation of classes – class landmarks. Furthermore, an interactive user-adjustable parameter balances these two components so that users can focus on the weighted perspectives of interest and also empowers a smooth visual transition between varying perspectives to preserve the mental map. We illustrate its application potential in model evaluation and visual-interactive labeling. A comparative analysis is performed to evaluate the DR results.",
                "AuthorNamesDeduped": "Linhao Meng;Stef van den Elzen;Nicola Pezzotti;Anna Vilanova",
                "AuthorNames": "Linhao Meng;Stef van den Elzen;Nicola Pezzotti;Anna Vilanova",
                "AuthorAffiliation": "Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands",
                "InternalReferences": "10.1109/tvcg.2014.2346660;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.212;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/visual.1997.663916;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598445;10.1109/tvcg.2018.2864843;10.1109/tvcg.2019.2934631;10.1109/tvcg.2011.212;10.1109/tvcg.2019.2934307;10.1109/tvcg.2016.2598828;10.1109/visual.2000.885740;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Dimensionality reduction,t-distributed stochastic neighbor embedding,constraint integration",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 346,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 35,
                "i": [
                    35
                ]
            }
        },
        {
            "name": "Cheng Tang",
            "value": 116,
            "numPapers": 14,
            "cluster": "1",
            "visible": 1,
            "index": 574,
            "x": 2.273336373454579,
            "y": 239.67651520692039,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Abstraction of Large Scale Geospatial Origin-Destination Movement Data",
                "DOI": "10.1109/tvcg.2018.2864503",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864503",
                "FirstPage": 43,
                "LastPage": 53,
                "PaperType": "J",
                "Abstract": "A variety of human movement datasets are represented in an Origin-Destination(OD) form, such as taxi trips, mobile phone locations, etc. As a commonly-used method to visualize OD data, flow map always fails to discover patterns of human mobility, due to massive intersections and occlusions of lines on a 2D geographical map. A large number of techniques have been proposed to reduce visual clutter of flow maps, such as filtering, clustering and edge bundling, but the correlations of OD flows are often neglected, which makes the simplified OD flow map present little semantic information. In this paper, a characterization of OD flows is established based on an analogy between OD flows and natural language processing (NPL) terms. Then, an iterative multi-objective sampling scheme is designed to select OD flows in a vectorized representation space. To enhance the readability of sampled OD flows, a set of meaningful visual encodings are designed to present the interactions of OD flows. We design and implement a visual exploration system that supports visual inspection and quantitative evaluation from a variety of perspectives. Case studies based on real-world datasets and interviews with domain experts have demonstrated the effectiveness of our system in reducing the visual clutter and enhancing correlations of OD flows.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Linhao Meng;Cheng Tang;Ying Zhao 0001;Zhiyong Guo;Miaoxin Hu;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Linhao Meng;Cheng Tang;Ying Zhao;Zhiyong Guo;Miaoxin Hu;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics;State Key Lab of CAD & CG, Zhejiang University;Information School, Zhejiang Sci-tech University;Central South University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2017.2744322;10.1109/tvcg.2016.2598667;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346594;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2013.226;10.1109/tvcg.2009.143;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532150;10.1109/tvcg.2015.2467691;10.1109/tvcg.2014.2346746;10.1109/tvcg.2016.2598885",
                "AuthorKeywords": "Visual abstraction,human mobility,origin-destination,flow map,representation learning",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 73,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2487,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 734,
                "i": [
                    734
                ]
            }
        },
        {
            "name": "Zhiyong Guo",
            "value": 116,
            "numPapers": 14,
            "cluster": "1",
            "visible": 1,
            "index": 575,
            "x": -163.71774869233917,
            "y": -175.3467957024367,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Abstraction of Large Scale Geospatial Origin-Destination Movement Data",
                "DOI": "10.1109/tvcg.2018.2864503",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864503",
                "FirstPage": 43,
                "LastPage": 53,
                "PaperType": "J",
                "Abstract": "A variety of human movement datasets are represented in an Origin-Destination(OD) form, such as taxi trips, mobile phone locations, etc. As a commonly-used method to visualize OD data, flow map always fails to discover patterns of human mobility, due to massive intersections and occlusions of lines on a 2D geographical map. A large number of techniques have been proposed to reduce visual clutter of flow maps, such as filtering, clustering and edge bundling, but the correlations of OD flows are often neglected, which makes the simplified OD flow map present little semantic information. In this paper, a characterization of OD flows is established based on an analogy between OD flows and natural language processing (NPL) terms. Then, an iterative multi-objective sampling scheme is designed to select OD flows in a vectorized representation space. To enhance the readability of sampled OD flows, a set of meaningful visual encodings are designed to present the interactions of OD flows. We design and implement a visual exploration system that supports visual inspection and quantitative evaluation from a variety of perspectives. Case studies based on real-world datasets and interviews with domain experts have demonstrated the effectiveness of our system in reducing the visual clutter and enhancing correlations of OD flows.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Linhao Meng;Cheng Tang;Ying Zhao 0001;Zhiyong Guo;Miaoxin Hu;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Linhao Meng;Cheng Tang;Ying Zhao;Zhiyong Guo;Miaoxin Hu;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics;State Key Lab of CAD & CG, Zhejiang University;Information School, Zhejiang Sci-tech University;Central South University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2017.2744322;10.1109/tvcg.2016.2598667;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346594;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2013.226;10.1109/tvcg.2009.143;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532150;10.1109/tvcg.2015.2467691;10.1109/tvcg.2014.2346746;10.1109/tvcg.2016.2598885",
                "AuthorKeywords": "Visual abstraction,human mobility,origin-destination,flow map,representation learning",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 73,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2487,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 734,
                "i": [
                    734
                ]
            }
        },
        {
            "name": "Miaoxin Hu",
            "value": 116,
            "numPapers": 14,
            "cluster": "1",
            "visible": 1,
            "index": 576,
            "x": 239.37313036926204,
            "y": 18.72176426569572,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Abstraction of Large Scale Geospatial Origin-Destination Movement Data",
                "DOI": "10.1109/tvcg.2018.2864503",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864503",
                "FirstPage": 43,
                "LastPage": 53,
                "PaperType": "J",
                "Abstract": "A variety of human movement datasets are represented in an Origin-Destination(OD) form, such as taxi trips, mobile phone locations, etc. As a commonly-used method to visualize OD data, flow map always fails to discover patterns of human mobility, due to massive intersections and occlusions of lines on a 2D geographical map. A large number of techniques have been proposed to reduce visual clutter of flow maps, such as filtering, clustering and edge bundling, but the correlations of OD flows are often neglected, which makes the simplified OD flow map present little semantic information. In this paper, a characterization of OD flows is established based on an analogy between OD flows and natural language processing (NPL) terms. Then, an iterative multi-objective sampling scheme is designed to select OD flows in a vectorized representation space. To enhance the readability of sampled OD flows, a set of meaningful visual encodings are designed to present the interactions of OD flows. We design and implement a visual exploration system that supports visual inspection and quantitative evaluation from a variety of perspectives. Case studies based on real-world datasets and interviews with domain experts have demonstrated the effectiveness of our system in reducing the visual clutter and enhancing correlations of OD flows.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Linhao Meng;Cheng Tang;Ying Zhao 0001;Zhiyong Guo;Miaoxin Hu;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Linhao Meng;Cheng Tang;Ying Zhao;Zhiyong Guo;Miaoxin Hu;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics;State Key Lab of CAD & CG, Zhejiang University;Information School, Zhejiang Sci-tech University;Central South University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2017.2744322;10.1109/tvcg.2016.2598667;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346594;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2013.226;10.1109/tvcg.2009.143;10.1109/tvcg.2014.2346271;10.1109/tvcg.2016.2598432;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532150;10.1109/tvcg.2015.2467691;10.1109/tvcg.2014.2346746;10.1109/tvcg.2016.2598885",
                "AuthorKeywords": "Visual abstraction,human mobility,origin-destination,flow map,representation learning",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 73,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2487,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 734,
                "i": [
                    734
                ]
            }
        },
        {
            "name": "Justin Talbot",
            "value": 216,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 577,
            "x": -189.316648171741,
            "y": 148.0175892420128,
            "vy": 0,
            "vx": 0,
            "r": 1.2487046632124352,
            "node": {
                "Conference": "InfoVis",
                "Year": 2008,
                "Title": "Vispedia: Interactive Visual Exploration of Wikipedia Data via Search-Based Integration",
                "DOI": "10.1109/tvcg.2008.178",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.178",
                "FirstPage": 1213,
                "LastPage": 1220,
                "PaperType": "J",
                "Abstract": "Wikipedia is an example of the collaborative, semi-structured data sets emerging on the Web. These data sets have large, non-uniform schema that require costly data integration into structured tables before visualization can begin. We present Vispedia, a Web-based visualization system that reduces the cost of this data integration. Users can browse Wikipedia, select an interesting data table, then use a search interface to discover, integrate, and visualize additional columns of data drawn from multiple Wikipedia articles. This interaction is supported by a fast path search algorithm over DBpedia, a semantic graph extracted from Wikipedia's hyperlink structure. Vispedia can also export the augmented data tables produced for use in traditional visualization systems. We believe that these techniques begin to address the \"long tail\" of visualization by allowing a wider audience to visualize a broader class of data. We evaluated this system in a first-use formative lab study. Study participants were able to quickly create effective visualizations for a diverse set of domains, performing data integration as needed.",
                "AuthorNamesDeduped": "Bryan Chan 0001;Leslie Wu;Justin Talbot;Mike Cammarano;Pat Hanrahan",
                "AuthorNames": "Bryan Chan;Leslie Wu;Justin Talbot;Mike Cammarano;Pat Hanrahan",
                "AuthorAffiliation": "University of Stanford, USA;University of Stanford, USA;University of Stanford, USA;University of Stanford, USA;University of Stanford, USA",
                "InternalReferences": "0.1109/tvcg.2007.70617;10.1109/tvcg.2007.70577;10.1109/vast.2007.4389010",
                "AuthorKeywords": "Information visualization, Data integration, Wikipedia, Semantic web, Search interfaces",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 30,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 636,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1984,
                "i": [
                    1984
                ]
            }
        },
        {
            "name": "Nicolas Kruchten",
            "value": 0,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 578,
            "x": 39.64604061282403,
            "y": -237.23024989180092,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Metrics-Based Evaluation and Comparison of Visualization Notations",
                "DOI": "10.1109/tvcg.2023.3326907",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326907",
                "FirstPage": 425,
                "LastPage": 435,
                "PaperType": "J",
                "Abstract": "A visualization notation is a recurring pattern of symbols used to author specifications of visualizations, from data transformation to visual mapping. Programmatic notations use symbols defined by grammars or domain-specific languages (e.g. ggplot2, dplyr, Vega-Lite) or libraries (e.g. Matplotlib, Pandas). Designers and prospective users of grammars and libraries often evaluate visualization notations by inspecting galleries of examples. While such collections demonstrate usage and expressiveness, their construction and evaluation are usually ad hoc, making comparisons of different notations difficult. More rarely, experts analyze notations via usability heuristics, such as the Cognitive Dimensions of Notations framework. These analyses, akin to structured close readings of text, can reveal design deficiencies, but place a burden on the expert to simultaneously consider many facets of often complex systems. To alleviate these issues, we introduce a metrics-based approach to usability evaluation and comparison of notations in which metrics are computed for a gallery of examples across a suite of notations. While applicable to any visualization domain, we explore the utility of our approach via a case study considering statistical graphics that explores 40 visualizations across 9 widely used notations. We facilitate the computation of appropriate metrics and analysis via a new tool called NotaScope. We gathered feedback via interviews with authors or maintainers of prominent charting libraries ($n=6$). We find that this approach is a promising way to formalize, externalize, and extend evaluations and comparisons of visualization notations.",
                "AuthorNamesDeduped": "Nicolas Kruchten;Andrew M. McNutt;Michael J. McGuffin",
                "AuthorNames": "Nicolas Kruchten;Andrew M. McNutt;Michael J. McGuffin",
                "AuthorAffiliation": "École de technologie supérieure, Canada;University of Chicago, USA;École de technologie supérieure, Canada",
                "InternalReferences": "10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/infvis.2000.885092;10.1109/tvcg.2021.3114782;10.1109/tvcg.2022.3209460;10.1109/tvcg.2022.3209367;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2018.2864836;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Notation,Usability,Evaluation,Language design,API design,Domain-specific languages",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 95,
                "DownloadsXplore": 192,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 109,
                "i": [
                    109
                ]
            }
        },
        {
            "name": "Andrew M. McNutt",
            "value": 8,
            "numPapers": 26,
            "cluster": "5",
            "visible": 1,
            "index": 579,
            "x": 131.12616099668443,
            "y": 201.88097954555204,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "No Grammar to Rule Them All: A Survey of JSON-style DSLs for Visualization",
                "DOI": "10.1109/tvcg.2022.3209460",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209460",
                "FirstPage": 160,
                "LastPage": 170,
                "PaperType": "J",
                "Abstract": "There has been substantial growth in the use of JSON-based grammars, as well as other standard data serialization languages, to create visualizations. Each of these grammars serves a purpose: some focus on particular computational tasks (such as animation), some are concerned with certain chart types (such as maps), and some target specific data domains (such as ML). Despite the prominence of this interface form, there has been little detailed analysis of the characteristics of these languages. In this study, we survey and analyze the design and implementation of 57 JSON-style DSLs for visualization. We analyze these languages supported by a collected corpus of examples for each DSL (consisting of 4395 instances) across a variety of axes organized into concerns related to domain, conceptual model, language relationships, affordances, and general practicalities. We identify tensions throughout these areas, such as between formal and colloquial specifications, among types of users, and within the composition of languages. Through this work, we seek to support language implementers by elucidating the choices, opportunities, and tradeoffs in visualization DSL design.",
                "AuthorNamesDeduped": "Andrew M. McNutt",
                "AuthorNames": "Andrew M. McNutt",
                "AuthorAffiliation": null,
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2021.3114804;10.1109/tvcg.2014.2346325;10.1109/tvcg.2020.3030453;10.1109/tvcg.2021.3114876;10.1109/tvcg.2020.3030378;10.1109/tvcg.2014.2346318;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2864841;10.1109/tvcg.2009.128;10.1109/tvcg.2020.3030476;10.1109/tvcg.2018.2864836;10.1109/tvcg.2020.3030367;10.1109/tvcg.2021.3114849",
                "AuthorKeywords": "Visualization grammar,Survey,Declarative specification,Domain-Specific Languages",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 506,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 160,
                "i": [
                    160
                ]
            }
        },
        {
            "name": "Michael J. McGuffin",
            "value": 439,
            "numPapers": 87,
            "cluster": "4",
            "visible": 1,
            "index": 580,
            "x": -233.25799037466126,
            "y": -60.338295686689975,
            "vy": 0,
            "vx": 0,
            "r": 1.5054691997697178,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Metrics-Based Evaluation and Comparison of Visualization Notations",
                "DOI": "10.1109/tvcg.2023.3326907",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326907",
                "FirstPage": 425,
                "LastPage": 435,
                "PaperType": "J",
                "Abstract": "A visualization notation is a recurring pattern of symbols used to author specifications of visualizations, from data transformation to visual mapping. Programmatic notations use symbols defined by grammars or domain-specific languages (e.g. ggplot2, dplyr, Vega-Lite) or libraries (e.g. Matplotlib, Pandas). Designers and prospective users of grammars and libraries often evaluate visualization notations by inspecting galleries of examples. While such collections demonstrate usage and expressiveness, their construction and evaluation are usually ad hoc, making comparisons of different notations difficult. More rarely, experts analyze notations via usability heuristics, such as the Cognitive Dimensions of Notations framework. These analyses, akin to structured close readings of text, can reveal design deficiencies, but place a burden on the expert to simultaneously consider many facets of often complex systems. To alleviate these issues, we introduce a metrics-based approach to usability evaluation and comparison of notations in which metrics are computed for a gallery of examples across a suite of notations. While applicable to any visualization domain, we explore the utility of our approach via a case study considering statistical graphics that explores 40 visualizations across 9 widely used notations. We facilitate the computation of appropriate metrics and analysis via a new tool called NotaScope. We gathered feedback via interviews with authors or maintainers of prominent charting libraries ($n=6$). We find that this approach is a promising way to formalize, externalize, and extend evaluations and comparisons of visualization notations.",
                "AuthorNamesDeduped": "Nicolas Kruchten;Andrew M. McNutt;Michael J. McGuffin",
                "AuthorNames": "Nicolas Kruchten;Andrew M. McNutt;Michael J. McGuffin",
                "AuthorAffiliation": "École de technologie supérieure, Canada;University of Chicago, USA;École de technologie supérieure, Canada",
                "InternalReferences": "10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/infvis.2000.885092;10.1109/tvcg.2021.3114782;10.1109/tvcg.2022.3209460;10.1109/tvcg.2022.3209367;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2018.2864836;10.1109/tvcg.2022.3209369",
                "AuthorKeywords": "Notation,Usability,Evaluation,Language design,API design,Domain-specific languages",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 95,
                "DownloadsXplore": 192,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 109,
                "i": [
                    109
                ]
            }
        },
        {
            "name": "Yu-Ru Lin",
            "value": 362,
            "numPapers": 60,
            "cluster": "1",
            "visible": 1,
            "index": 581,
            "x": 212.9382885292244,
            "y": -113.16927709517621,
            "vy": 0,
            "vx": 0,
            "r": 1.416810592976396,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "FairSight: Visual Analytics for Fairness in Decision Making",
                "DOI": "10.1109/tvcg.2019.2934262",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934262",
                "FirstPage": 1086,
                "LastPage": 1095,
                "PaperType": "J",
                "Abstract": "Data-driven decision making related to individuals has become increasingly pervasive, but the issue concerning the potential discrimination has been raised by recent studies. In response, researchers have made efforts to propose and implement fairness measures and algorithms, but those efforts have not been translated to the real-world practice of data-driven decision making. As such, there is still an urgent need to create a viable tool to facilitate fair decision making. We propose FairSight, a visual analytic system to address this need; it is designed to achieve different notions of fairness in ranking decisions through identifying the required actions – understanding, measuring, diagnosing and mitigating biases – that together lead to fairer decision making. Through a case study and user study, we demonstrate that the proposed visual analytic and diagnostic modules in the system are effective in understanding the fairness-aware decision pipeline and obtaining more fair outcomes.",
                "AuthorNamesDeduped": "Yongsu Ahn;Yu-Ru Lin",
                "AuthorNames": "Yongsu Ahn;Yu-Ru Lin",
                "AuthorAffiliation": "University of Pittsburgh;University of Pittsburgh",
                "InternalReferences": "0.1109/vast.2015.7347637;10.1109/tvcg.2018.2864812;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Fairness in Machine Learning,Visual Analytic",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 2050,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 609,
                "i": [
                    609
                ]
            }
        },
        {
            "name": "Gunther H. Weber",
            "value": 159,
            "numPapers": 28,
            "cluster": "11",
            "visible": 1,
            "index": 582,
            "x": -80.63856601223151,
            "y": 227.48059625271557,
            "vy": 0,
            "vx": 0,
            "r": 1.1830742659758204,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Augmented Topological Descriptors of Pore Networks for Material Science",
                "DOI": "10.1109/tvcg.2012.200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.200",
                "FirstPage": 2041,
                "LastPage": 2050,
                "PaperType": "J",
                "Abstract": "One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO&lt;sub&gt;2&lt;/sub&gt; in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity, and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction and making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.",
                "AuthorNamesDeduped": "Daniela Mayumi Ushizima;Dmitriy Morozov;Gunther H. Weber;Andrea Gomes Campos Bianchi;James A. Sethian;E. Wes Bethel",
                "AuthorNames": "Daniela Ushizima;Dmitriy Morozov;Gunther H. Weber;Andrea G.C. Bianchi;James A. Sethian;E. Wes Bethel",
                "AuthorAffiliation": "Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA;Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA;Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA and Department of Computer Science, University of California,슠Davis, Davis, CA, USA;Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA;Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA and Department of Mathematics, University of California, Berkeley, Berkeley, CA, USA;Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA",
                "InternalReferences": "0.1109/tvcg.2010.218;10.1109/tvcg.2007.70603;10.1109/visual.2005.1532795",
                "AuthorKeywords": "Reeb graph, persistent homology, topological data analysis, geometric algorithms, segmentation, microscopy",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 904,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1447,
                "i": [
                    1447
                ]
            }
        },
        {
            "name": "Steffen Frey",
            "value": 34,
            "numPapers": 36,
            "cluster": "6",
            "visible": 1,
            "index": 583,
            "x": -94.28139013252851,
            "y": -222.39833514367405,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visual Analysis of Displacement Processes in Porous Media using Spatio-Temporal Flow Graphs",
                "DOI": "10.1109/tvcg.2023.3326931",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326931",
                "FirstPage": 759,
                "LastPage": 769,
                "PaperType": "J",
                "Abstract": "We developed a new approach comprised of different visualizations for the comparative spatio-temporal analysis of displacement processes in porous media. We aim to analyze and compare ensemble datasets from experiments to gain insight into the influence of different parameters on fluid flow. To capture the displacement of a defending fluid by an invading fluid, we first condense an input image series to a single time map. From this map, we generate a spatio-temporal flow graph covering the whole process. This graph is further simplified to only reflect topological changes in the movement of the invading fluid. Our interactive tools allow the visual analysis of these processes by visualizing the graph structure and the context of the experimental setup, as well as by providing charts for multiple metrics. We apply our approach to analyze and compare ensemble datasets jointly with domain experts, where we vary either fluid properties or the solid structure of the porous medium. We finally report the generated insights from the domain experts and discuss our contribution's advantages, generality, and limitations.",
                "AuthorNamesDeduped": "Alexander Straub;Nikolaos Karadimitriou;Guido Reina;Steffen Frey;Holger Steeb;Thomas Ertl",
                "AuthorNames": "Alexander Straub;Nikolaos Karadimitriou;Guido Reina;Steffen Frey;Holger Steeb;Thomas Ertl",
                "AuthorAffiliation": "University of Stuttgart, Germany;University of Stuttgart, Germany;University of Stuttgart, Germany;University of Groningen, The Netherlands;University of Stuttgart, Germany;University of Stuttgart, Germany",
                "InternalReferences": "10.1109/tvcg.2010.190;10.1109/tvcg.2015.2468093;10.1109/tvcg.2013.141;10.1109/tvcg.2018.2864901;10.1109/tvcg.2018.2864849;10.1109/tvcg.2010.181;10.1109/tvcg.2014.2346321;10.1109/tvcg.2012.200;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2864506",
                "AuthorKeywords": "Comparative visualization,ensemble,graph,porous media",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 178,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 112,
                "i": [
                    112
                ]
            }
        },
        {
            "name": "Guido Reina",
            "value": 16,
            "numPapers": 29,
            "cluster": "11",
            "visible": 1,
            "index": 584,
            "x": 219.93630080295637,
            "y": 100.38935993974403,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visual Analysis of Displacement Processes in Porous Media using Spatio-Temporal Flow Graphs",
                "DOI": "10.1109/tvcg.2023.3326931",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326931",
                "FirstPage": 759,
                "LastPage": 769,
                "PaperType": "J",
                "Abstract": "We developed a new approach comprised of different visualizations for the comparative spatio-temporal analysis of displacement processes in porous media. We aim to analyze and compare ensemble datasets from experiments to gain insight into the influence of different parameters on fluid flow. To capture the displacement of a defending fluid by an invading fluid, we first condense an input image series to a single time map. From this map, we generate a spatio-temporal flow graph covering the whole process. This graph is further simplified to only reflect topological changes in the movement of the invading fluid. Our interactive tools allow the visual analysis of these processes by visualizing the graph structure and the context of the experimental setup, as well as by providing charts for multiple metrics. We apply our approach to analyze and compare ensemble datasets jointly with domain experts, where we vary either fluid properties or the solid structure of the porous medium. We finally report the generated insights from the domain experts and discuss our contribution's advantages, generality, and limitations.",
                "AuthorNamesDeduped": "Alexander Straub;Nikolaos Karadimitriou;Guido Reina;Steffen Frey;Holger Steeb;Thomas Ertl",
                "AuthorNames": "Alexander Straub;Nikolaos Karadimitriou;Guido Reina;Steffen Frey;Holger Steeb;Thomas Ertl",
                "AuthorAffiliation": "University of Stuttgart, Germany;University of Stuttgart, Germany;University of Stuttgart, Germany;University of Groningen, The Netherlands;University of Stuttgart, Germany;University of Stuttgart, Germany",
                "InternalReferences": "10.1109/tvcg.2010.190;10.1109/tvcg.2015.2468093;10.1109/tvcg.2013.141;10.1109/tvcg.2018.2864901;10.1109/tvcg.2018.2864849;10.1109/tvcg.2010.181;10.1109/tvcg.2014.2346321;10.1109/tvcg.2012.200;10.1109/tvcg.2010.223;10.1109/tvcg.2018.2864506",
                "AuthorKeywords": "Comparative visualization,ensemble,graph,porous media",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 178,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 112,
                "i": [
                    112
                ]
            }
        },
        {
            "name": "Yiwen Xing",
            "value": 0,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 585,
            "x": -230.18287515428648,
            "y": 74.6045842137476,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visualizing Historical Book Trade Data: An Iterative Design Study with Close Collaboration with Domain Experts",
                "DOI": "10.1109/tvcg.2023.3326923",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326923",
                "FirstPage": 540,
                "LastPage": 550,
                "PaperType": "J",
                "Abstract": "The circulation of historical books has always been an area of interest for historians. However, the data used to represent the journey of a book across different places and times can be difficult for domain experts to digest due to buried geographical and chronological features within text-based presentations. This situation provides an opportunity for collaboration between visualization researchers and historians. This paper describes a design study where a variant of the Nine-Stage Framework [46] was employed to develop a Visual Analytics (VA) tool called DanteExploreVis. This tool was designed to aid domain experts in exploring, explaining, and presenting book trade data from multiple perspectives. We discuss the design choices made and how each panel in the interface meets the domain requirements. We also present the results of a qualitative evaluation conducted with domain experts. The main contributions of this paper include: 1) the development of a VA tool to support domain experts in exploring, explaining, and presenting book trade data; 2) a comprehensive documentation of the iterative design, development, and evaluation process following the variant Nine-Stage Framework; 3) a summary of the insights gained and lessons learned from this design study in the context of the humanities field; and 4) reflections on how our approach could be applied in a more generalizable way.",
                "AuthorNamesDeduped": "Yiwen Xing;Cristina Dondi;Rita Borgo;Alfie Abdul-Rahman",
                "AuthorNames": "Yiwen Xing;Cristina Dondi;Rita Borgo;Alfie Abdul-Rahman",
                "AuthorAffiliation": "King's College London, United Kingdom;University of Oxford, United Kingdom;King's College London, United Kingdom;King's College London, United Kingdom",
                "InternalReferences": "10.1109/tvcg.2014.2346431;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114797;10.1109/tvcg.2015.2467771;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2022.3209483;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Design study,application motivated visualization,geospatial data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 175,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 113,
                "i": [
                    113
                ]
            }
        },
        {
            "name": "Cristina Dondi",
            "value": 0,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 586,
            "x": 119.4368810238181,
            "y": -210.6770786091888,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Visualizing Historical Book Trade Data: An Iterative Design Study with Close Collaboration with Domain Experts",
                "DOI": "10.1109/tvcg.2023.3326923",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326923",
                "FirstPage": 540,
                "LastPage": 550,
                "PaperType": "J",
                "Abstract": "The circulation of historical books has always been an area of interest for historians. However, the data used to represent the journey of a book across different places and times can be difficult for domain experts to digest due to buried geographical and chronological features within text-based presentations. This situation provides an opportunity for collaboration between visualization researchers and historians. This paper describes a design study where a variant of the Nine-Stage Framework [46] was employed to develop a Visual Analytics (VA) tool called DanteExploreVis. This tool was designed to aid domain experts in exploring, explaining, and presenting book trade data from multiple perspectives. We discuss the design choices made and how each panel in the interface meets the domain requirements. We also present the results of a qualitative evaluation conducted with domain experts. The main contributions of this paper include: 1) the development of a VA tool to support domain experts in exploring, explaining, and presenting book trade data; 2) a comprehensive documentation of the iterative design, development, and evaluation process following the variant Nine-Stage Framework; 3) a summary of the insights gained and lessons learned from this design study in the context of the humanities field; and 4) reflections on how our approach could be applied in a more generalizable way.",
                "AuthorNamesDeduped": "Yiwen Xing;Cristina Dondi;Rita Borgo;Alfie Abdul-Rahman",
                "AuthorNames": "Yiwen Xing;Cristina Dondi;Rita Borgo;Alfie Abdul-Rahman",
                "AuthorAffiliation": "King's College London, United Kingdom;University of Oxford, United Kingdom;King's College London, United Kingdom;King's College London, United Kingdom",
                "InternalReferences": "10.1109/tvcg.2014.2346431;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114797;10.1109/tvcg.2015.2467771;10.1109/tvcg.2014.2346331;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2022.3209483;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Design study,application motivated visualization,geospatial data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 175,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 113,
                "i": [
                    113
                ]
            }
        },
        {
            "name": "Alfie Abdul-Rahman",
            "value": 114,
            "numPapers": 41,
            "cluster": "5",
            "visible": 1,
            "index": 587,
            "x": 54.287504671038725,
            "y": 236.2263042859367,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Dashboard Design Patterns",
                "DOI": "10.1109/tvcg.2022.3209448",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209448",
                "FirstPage": 342,
                "LastPage": 352,
                "PaperType": "J",
                "Abstract": "This paper introduces design patterns for dashboards to inform dashboard design processes. Despite a growing number of public examples, case studies, and general guidelines there is surprisingly little design guidance for dashboards. Such guidance is necessary to inspire designs and discuss tradeoffs in, e.g., screenspace, interaction, or information shown. Based on a systematic review of 144 dashboards, we report on eight groups of design patterns that provide common solutions in dashboard design. We discuss combinations of these patterns in “dashboard genres” such as narrative, analytical, or embedded dashboard. We ran a 2-week dashboard design workshop with 23 participants of varying expertise working on their own data and dashboards. We discuss the application of patterns for the dashboard design processes, as well as general design tradeoffs and common challenges. Our work complements previous surveys and aims to support dashboard designers and researchers in co-creation, structured design decisions, as well as future user evaluations about dashboard design guidelines. Detailed pattern descriptions and workshop material can be found online: https://dashboarddesignpatterns.github.io",
                "AuthorNamesDeduped": "Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen 0001",
                "AuthorNames": "Benjamin Bach;Euan Freeman;Alfie Abdul-Rahman;Cagatay Turkay;Saiful Khan;Yulei Fan;Min Chen",
                "AuthorAffiliation": "University of Edinburgh, Scotland;University of Glasgow, Scotland;King's College London, England;University of Warwick, England;University of Oxford, England;University of Oxford, England;University of Oxford, England",
                "InternalReferences": "0.1109/visual.1991.175794;10.1109/infvis.1997.636792;10.1109/tvcg.2020.3030424;10.1109/tvcg.2016.2599338;10.1109/tvcg.2021.3114828;10.1109/tvcg.2018.2864903;10.1109/tvcg.2013.120;10.1109/tvcg.2010.179;10.1109/tvcg.2019.2934398",
                "AuthorKeywords": "Dashboards,Design Patterns,Data Visualization,Storytelling,Visual Analytics,Qualitative Evaluation,Education",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 4205,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 134,
                "i": [
                    134
                ]
            }
        },
        {
            "name": "Andreas Stoffel",
            "value": 191,
            "numPapers": 15,
            "cluster": "4",
            "visible": 1,
            "index": 588,
            "x": -199.76829104047047,
            "y": -137.63222694837822,
            "vy": 0,
            "vx": 0,
            "r": 1.2199194012665515,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Visual analytics for the big data era---A comparative review of state-of-the-art commercial systems",
                "DOI": "10.1109/vast.2012.6400554",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400554",
                "FirstPage": 173,
                "LastPage": 182,
                "PaperType": "C",
                "Abstract": "Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.",
                "AuthorNamesDeduped": "Leishi Zhang;Andreas Stoffel;Michael Behrisch 0001;Sebastian Mittelstädt;Tobias Schreck;René Pompl;Stefan Weber 0004;Holger Last;Daniel A. Keim",
                "AuthorNames": "Leishi Zhang;Andreas Stoffel;Michael Behrisch;Sebastian Mittelstadt;Tobias Schreck;René Pompl;Stefan Weber;Holger Last;Daniel Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;Siemens AG;Siemens AG;Siemens AG;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/infvis.2004.64;10.1109/infvis.2000.885098",
                "AuthorKeywords": null,
                "AminerCitationCount": 229,
                "CitationCountCrossRef": 97,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 3861,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1490,
                "i": [
                    1490
                ]
            }
        },
        {
            "name": "Florian Stoffel",
            "value": 124,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 589,
            "x": 240.4762069429573,
            "y": -33.48423351859742,
            "vy": 0,
            "vx": 0,
            "r": 1.1427748992515832,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Knowledge Generation Model for Visual Analytics",
                "DOI": "10.1109/tvcg.2014.2346481",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346481",
                "FirstPage": 1604,
                "LastPage": 1613,
                "PaperType": "J",
                "Abstract": "Visual analytics enables us to analyze huge information spaces in order to support complex decision making and data exploration. Humans play a central role in generating knowledge from the snippets of evidence emerging from visual data analysis. Although prior research provides frameworks that generalize this process, their scope is often narrowly focused so they do not encompass different perspectives at different levels. This paper proposes a knowledge generation model for visual analytics that ties together these diverse frameworks, yet retains previously developed models (e.g., KDD process) to describe individual segments of the overall visual analytic processes. To test its utility, a real world visual analytics system is compared against the model, demonstrating that the knowledge generation process model provides a useful guideline when developing and evaluating such systems. The model is used to effectively compare different data analysis systems. Furthermore, the model provides a common language and description of visual analytic processes, which can be used for communication between researchers. At the end, our model reflects areas of research that future researchers can embark on.",
                "AuthorNamesDeduped": "Dominik Sacha;Andreas Stoffel;Florian Stoffel;Bum Chul Kwon;Geoffrey P. Ellis;Daniel A. Keim",
                "AuthorNames": "Dominik Sacha;Andreas Stoffel;Florian Stoffel;Bum Chul Kwon;Geoffrey Ellis;Daniel A. Keim",
                "AuthorAffiliation": "Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz;Data Analysis and Visualization Group, University of Konstanz",
                "InternalReferences": "0.1109/visual.2005.1532781;10.1109/tvcg.2013.124;10.1109/vast.2009.5333023;10.1109/tvcg.2011.229;10.1109/tvcg.2008.109;10.1109/vast.2008.4677361;10.1109/vast.2008.4677365;10.1109/vast.2010.5652879;10.1109/tvcg.2012.273;10.1109/vast.2008.4677358;10.1109/tvcg.2008.121;10.1109/vast.2007.4389006;10.1109/vast.2011.6102435;10.1109/tvcg.2013.120",
                "AuthorKeywords": "Visual Analytics, Knowledge Generation, Reasoning, Visualization Taxonomies and Models, Interaction",
                "AminerCitationCount": 381,
                "CitationCountCrossRef": 242,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 5363,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1242,
                "i": [
                    1242
                ]
            }
        },
        {
            "name": "Geoffrey P. Ellis",
            "value": 436,
            "numPapers": 42,
            "cluster": "4",
            "visible": 1,
            "index": 590,
            "x": -154.83255464274106,
            "y": 187.28822713348166,
            "vy": 0,
            "vx": 0,
            "r": 1.5020149683362118,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "The Role of Uncertainty, Awareness, and Trust in Visual Analytics",
                "DOI": "10.1109/tvcg.2015.2467591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467591",
                "FirstPage": 240,
                "LastPage": 249,
                "PaperType": "J",
                "Abstract": "Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how human's perceptual and cognitive biases influence the user's awareness of such uncertainties, and how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework to discuss the consequences of these aspects in knowledge construction and though examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making.",
                "AuthorNamesDeduped": "Dominik Sacha;Hansi Senaratne;Bum Chul Kwon;Geoffrey P. Ellis;Daniel A. Keim",
                "AuthorNames": "Dominik Sacha;Hansi Senaratne;Bum Chul Kwon;Geoffrey Ellis;Daniel A. Keim",
                "AuthorAffiliation": "Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz;Data Analysis and Visualisation Group, University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/visual.2000.885679;10.1109/vast.2008.4677385;10.1109/vast.2009.5332611;10.1109/tvcg.2012.260;10.1109/vast.2011.6102473;10.1109/vast.2009.5333020;10.1109/vast.2011.6102435;10.1109/tvcg.2012.279;10.1109/tvcg.2014.2346481;10.1109/vast.2006.261416",
                "AuthorKeywords": "Visual Analytics, Knowledge Generation, Uncertainty Measures and Propagation, Trust Building, Human Factors",
                "AminerCitationCount": 261,
                "CitationCountCrossRef": 167,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 3805,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1097,
                "i": [
                    1097
                ]
            }
        },
        {
            "name": "Julien Tierny",
            "value": 262,
            "numPapers": 140,
            "cluster": "11",
            "visible": 1,
            "index": 591,
            "x": -12.353119165466609,
            "y": -242.89380487547183,
            "vy": 0,
            "vx": 0,
            "r": 1.3016695451928613,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Merge Tree Geodesics and Barycenters with Path Mappings",
                "DOI": "10.1109/tvcg.2023.3326601",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326601",
                "FirstPage": 1095,
                "LastPage": 1105,
                "PaperType": "J",
                "Abstract": "Comparative visualization of scalar fields is often facilitated using similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters as well as path mappings, a branch decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.",
                "AuthorNamesDeduped": "Florian Wetzels;Mathieu Pont;Julien Tierny;Christoph Garth",
                "AuthorNames": "Florian Wetzels;Mathieu Pont;Julien Tierny;Christoph Garth",
                "AuthorAffiliation": "University of Kaiserslautern-Landau, Germany;CNRS / Sorbonne University, France;CNRS / Sorbonne University, France;University of Kaiserslautern-Landau, Germany",
                "InternalReferences": "10.1109/tvcg.2022.3209395;10.1109/tvcg.2018.2864432;10.1109/tvcg.2022.3209387;10.1109/tvcg.2019.2934368;10.1109/tvcg.2021.3114839;10.1109/tvcg.2011.236;10.1109/tvcg.2013.148;10.1109/tvcg.2017.2743938;10.1109/tvcg.2019.2934256;10.1109/tvcg.2007.70601;10.1109/tvcg.2010.198;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Topological data analysis,merge trees,scalar data,ensemble data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 173,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 115,
                "i": [
                    115
                ]
            }
        },
        {
            "name": "Joshua A. Levine",
            "value": 110,
            "numPapers": 44,
            "cluster": "11",
            "visible": 1,
            "index": 592,
            "x": 173.32754308686975,
            "y": 170.90220246523833,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Computing a Stable Distance on Merge Trees",
                "DOI": "10.1109/tvcg.2022.3209395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209395",
                "FirstPage": 1168,
                "LastPage": 1177,
                "PaperType": "J",
                "Abstract": "Distances on merge trees facilitate visual comparison of collections of scalar fields. Two desirable properties for these distances to exhibit are 1) the ability to discern between scalar fields which other, less complex topological summaries cannot and 2) to still be robust to perturbations in the dataset. The combination of these two properties, known respectively as stability and discriminativity, has led to theoretical distances which are either thought to be or shown to be computationally complex and thus their implementations have been scarce. In order to design similarity measures on merge trees which are computationally feasible for more complex merge trees, many researchers have elected to loosen the restrictions on at least one of these two properties. The question still remains, however, if there are practical situations where trading these desirable properties is necessary. Here we construct a distance between merge trees which is designed to retain both discriminativity and stability. While our approach can be expensive for large merge trees, we illustrate its use in a setting where the number of nodes is small. This setting can be made more practical since we also provide a proof that persistence simplification increases the outputted distance by at most half of the simplified value. We demonstrate our distance measure on applications in shape comparison and on detection of periodicity in the von Kármán vortex street.",
                "AuthorNamesDeduped": "Brian Bollen;Pasindu Tennakoon;Joshua A. Levine",
                "AuthorNames": "Brian Bollen;Pasindu Tennakoon;Joshua A. Levine",
                "AuthorAffiliation": "Department of Mathematics, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA",
                "InternalReferences": "0.1109/tvcg.2012.287;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2007.70603;10.1109/tvcg.2006.186;10.1109/tvcg.2011.236;10.1109/tvcg.2017.2743938;10.1109/tvcg.2009.163;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Merge trees,scalar fields,distance measure,stability,edit distance,persistence",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 284,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 204,
                "i": [
                    204
                ]
            }
        },
        {
            "name": "Florian Wetzels",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 593,
            "x": -243.45430199372424,
            "y": -8.944430711258589,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Merge Tree Geodesics and Barycenters with Path Mappings",
                "DOI": "10.1109/tvcg.2023.3326601",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326601",
                "FirstPage": 1095,
                "LastPage": 1105,
                "PaperType": "J",
                "Abstract": "Comparative visualization of scalar fields is often facilitated using similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters as well as path mappings, a branch decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.",
                "AuthorNamesDeduped": "Florian Wetzels;Mathieu Pont;Julien Tierny;Christoph Garth",
                "AuthorNames": "Florian Wetzels;Mathieu Pont;Julien Tierny;Christoph Garth",
                "AuthorAffiliation": "University of Kaiserslautern-Landau, Germany;CNRS / Sorbonne University, France;CNRS / Sorbonne University, France;University of Kaiserslautern-Landau, Germany",
                "InternalReferences": "10.1109/tvcg.2022.3209395;10.1109/tvcg.2018.2864432;10.1109/tvcg.2022.3209387;10.1109/tvcg.2019.2934368;10.1109/tvcg.2021.3114839;10.1109/tvcg.2011.236;10.1109/tvcg.2013.148;10.1109/tvcg.2017.2743938;10.1109/tvcg.2019.2934256;10.1109/tvcg.2007.70601;10.1109/tvcg.2010.198;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Topological data analysis,merge trees,scalar data,ensemble data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 173,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 115,
                "i": [
                    115
                ]
            }
        },
        {
            "name": "Mathieu Pont",
            "value": 13,
            "numPapers": 32,
            "cluster": "11",
            "visible": 1,
            "index": 594,
            "x": 185.7137606340582,
            "y": -157.98860437118793,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Merge Tree Geodesics and Barycenters with Path Mappings",
                "DOI": "10.1109/tvcg.2023.3326601",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326601",
                "FirstPage": 1095,
                "LastPage": 1105,
                "PaperType": "J",
                "Abstract": "Comparative visualization of scalar fields is often facilitated using similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters as well as path mappings, a branch decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.",
                "AuthorNamesDeduped": "Florian Wetzels;Mathieu Pont;Julien Tierny;Christoph Garth",
                "AuthorNames": "Florian Wetzels;Mathieu Pont;Julien Tierny;Christoph Garth",
                "AuthorAffiliation": "University of Kaiserslautern-Landau, Germany;CNRS / Sorbonne University, France;CNRS / Sorbonne University, France;University of Kaiserslautern-Landau, Germany",
                "InternalReferences": "10.1109/tvcg.2022.3209395;10.1109/tvcg.2018.2864432;10.1109/tvcg.2022.3209387;10.1109/tvcg.2019.2934368;10.1109/tvcg.2021.3114839;10.1109/tvcg.2011.236;10.1109/tvcg.2013.148;10.1109/tvcg.2017.2743938;10.1109/tvcg.2019.2934256;10.1109/tvcg.2007.70601;10.1109/tvcg.2010.198;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Topological data analysis,merge trees,scalar data,ensemble data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 173,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 115,
                "i": [
                    115
                ]
            }
        },
        {
            "name": "Guillaume Favelier",
            "value": 106,
            "numPapers": 34,
            "cluster": "11",
            "visible": 1,
            "index": 595,
            "x": -30.245183916644724,
            "y": 242.14712232411176,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Persistence Atlas for Critical Point Variability in Ensembles",
                "DOI": "10.1109/tvcg.2018.2864432",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864432",
                "FirstPage": 1152,
                "LastPage": 1162,
                "PaperType": "J",
                "Abstract": "This paper presents a new approach for the visualization and analysis of the spatial variability of features of interest represented by critical points in ensemble data. Our framework, called Persistence Atlas, enables the visualization of the dominant spatial patterns of critical points, along with statistics regarding their occurrence in the ensemble. The persistence atlas represents in the geometrical domain each dominant pattern in the form of a confidence map for the appearance of critical points. As a by-product, our method also provides 2-dimensional layouts of the entire ensemble, highlighting the main trends at a global level. Our approach is based on the new notion of Persistence Map, a measure of the geometrical density in critical points which leverages the robustness to noise of topological persistence to better emphasize salient features. We show how to leverage spectral embedding to represent the ensemble members as points in a low-dimensional Euclidean space, where distances between points measure the dissimilarities between critical point layouts and where statistical tasks, such as clustering, can be easily carried out. Further, we show how the notion of mandatory critical point can be leveraged to evaluate for each cluster confidence regions for the appearance of critical points. Most of the steps of this framework can be trivially parallelized and we show how to efficiently implement them. Extensive experiments demonstrate the relevance of our approach. The accuracy of the confidence regions provided by the persistence atlas is quantitatively evaluated and compared to a baseline strategy using an off-the-shelf clustering approach. We illustrate the importance of the persistence atlas in a variety of real-life datasets, where clear trends in feature layouts are identified and analyzed. We provide a lightweight VTK-based C++ implementation of our approach that can be used for reproduction purposes.",
                "AuthorNamesDeduped": "Guillaume Favelier;Noura Faraj;Brian Summa;Julien Tierny",
                "AuthorNames": "Guillaume Favelier;Noura Faraj;Brian Summa;Julien Tierny",
                "AuthorAffiliation": "Sorbonne Université, CNRS (LIP6);Tulane University;Tulane University;Sorbonne Université, CNRS (LIP6)",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2015.2467958;10.1109/tvcg.2015.2467204;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2015.2467432;10.1109/tvcg.2013.141;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346455;10.1109/tvcg.2015.2467754;10.1109/tvcg.2010.181;10.1109/visual.1999.809897;10.1109/tvcg.2012.249;10.1109/tvcg.2014.2346332;10.1109/tvcg.2013.143;10.1109/tvcg.2007.70603",
                "AuthorKeywords": "Topological data analysis,scalar data,ensemble data",
                "AminerCitationCount": 27,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 87,
                "DownloadsXplore": 417,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 696,
                "i": [
                    696
                ]
            }
        },
        {
            "name": "Christoph Garth",
            "value": 348,
            "numPapers": 114,
            "cluster": "11",
            "visible": 1,
            "index": 596,
            "x": -141.3847352250886,
            "y": -199.14908145741367,
            "vy": 0,
            "vx": 0,
            "r": 1.4006908462867012,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Comparative Visual Analysis of Lagrangian Transport in CFD Ensembles",
                "DOI": "10.1109/tvcg.2013.141",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.141",
                "FirstPage": 2743,
                "LastPage": 2752,
                "PaperType": "J",
                "Abstract": "Sets of simulation runs based on parameter and model variation, so-called ensembles, are increasingly used to model physical behaviors whose parameter space is too large or complex to be explored automatically. Visualization plays a key role in conveying important properties in ensembles, such as the degree to which members of the ensemble agree or disagree in their behavior. For ensembles of time-varying vector fields, there are numerous challenges for providing an expressive comparative visualization, among which is the requirement to relate the effect of individual flow divergence to joint transport characteristics of the ensemble. Yet, techniques developed for scalar ensembles are of little use in this context, as the notion of transport induced by a vector field cannot be modeled using such tools. We develop a Lagrangian framework for the comparison of flow fields in an ensemble. Our techniques evaluate individual and joint transport variance and introduce a classification space that facilitates incorporation of these properties into a common ensemble visualization. Variances of Lagrangian neighborhoods are computed using pathline integration and Principal Components Analysis. This allows for an inclusion of uncertainty measurements into the visualization and analysis approach. Our results demonstrate the usefulness and expressiveness of the presented method on several practical examples.",
                "AuthorNamesDeduped": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorNames": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorAffiliation": "University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA;University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA",
                "InternalReferences": "0.1109/tvcg.2011.203;10.1109/visual.1996.568116;10.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2007.70551",
                "AuthorKeywords": "Ensemble, flow field, time-varying, comparison, visualization, Lagrangian, variance, principal components analysis",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 793,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1338,
                "i": [
                    1338
                ]
            }
        },
        {
            "name": "Dilip Mathew Thomas",
            "value": 94,
            "numPapers": 15,
            "cluster": "11",
            "visible": 1,
            "index": 597,
            "x": 238.97603891850196,
            "y": 51.38533665183714,
            "vy": 0,
            "vx": 0,
            "r": 1.1082325849165227,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Symmetry in Scalar field Topology",
                "DOI": "10.1109/tvcg.2011.236",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.236",
                "FirstPage": 2035,
                "LastPage": 2044,
                "PaperType": "J",
                "Abstract": "Study of symmetric or repeating patterns in scalar fields is important in scientific data analysis because it gives deep insights into the properties of the underlying phenomenon. Though geometric symmetry has been well studied within areas like shape processing, identifying symmetry in scalar fields has remained largely unexplored due to the high computational cost of the associated algorithms. We propose a computationally efficient algorithm for detecting symmetric patterns in a scalar field distribution by analysing the topology of level sets of the scalar field. Our algorithm computes the contour tree of a given scalar field and identifies subtrees that are similar. We define a robust similarity measure for comparing subtrees of the contour tree and use it to group similar subtrees together. Regions of the domain corresponding to subtrees that belong to a common group are extracted and reported to be symmetric. Identifying symmetry in scalar fields finds applications in visualization, data exploration, and feature detection. We describe two applications in detail: symmetry-aware transfer function design and symmetry-aware isosurface extraction.",
                "AuthorNamesDeduped": "Dilip Mathew Thomas;Vijay Natarajan",
                "AuthorNames": "Dilip Mathew Thomas;Vijay Natarajan",
                "AuthorAffiliation": "Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India;Department of Computer Science and Automation, and Supercomputer Education Research Centre, Indian Institute of Science, Bangalore, India",
                "InternalReferences": "0.1109/tvcg.2008.143;10.1109/tvcg.2009.120;10.1109/tvcg.2007.70601",
                "AuthorKeywords": "Scalar field symmetry, contour tree, similarity measure, persistence, isosurface extraction, transfer function design",
                "AminerCitationCount": 41,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 681,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1659,
                "i": [
                    1659
                ]
            }
        },
        {
            "name": "Vijay Natarajan",
            "value": 307,
            "numPapers": 30,
            "cluster": "11",
            "visible": 1,
            "index": 598,
            "x": -211.10022135427351,
            "y": 123.63938104089942,
            "vy": 0,
            "vx": 0,
            "r": 1.353483016695452,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Symmetry in Scalar field Topology",
                "DOI": "10.1109/tvcg.2011.236",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.236",
                "FirstPage": 2035,
                "LastPage": 2044,
                "PaperType": "J",
                "Abstract": "Study of symmetric or repeating patterns in scalar fields is important in scientific data analysis because it gives deep insights into the properties of the underlying phenomenon. Though geometric symmetry has been well studied within areas like shape processing, identifying symmetry in scalar fields has remained largely unexplored due to the high computational cost of the associated algorithms. We propose a computationally efficient algorithm for detecting symmetric patterns in a scalar field distribution by analysing the topology of level sets of the scalar field. Our algorithm computes the contour tree of a given scalar field and identifies subtrees that are similar. We define a robust similarity measure for comparing subtrees of the contour tree and use it to group similar subtrees together. Regions of the domain corresponding to subtrees that belong to a common group are extracted and reported to be symmetric. Identifying symmetry in scalar fields finds applications in visualization, data exploration, and feature detection. We describe two applications in detail: symmetry-aware transfer function design and symmetry-aware isosurface extraction.",
                "AuthorNamesDeduped": "Dilip Mathew Thomas;Vijay Natarajan",
                "AuthorNames": "Dilip Mathew Thomas;Vijay Natarajan",
                "AuthorAffiliation": "Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India;Department of Computer Science and Automation, and Supercomputer Education Research Centre, Indian Institute of Science, Bangalore, India",
                "InternalReferences": "0.1109/tvcg.2008.143;10.1109/tvcg.2009.120;10.1109/tvcg.2007.70601",
                "AuthorKeywords": "Scalar field symmetry, contour tree, similarity measure, persistence, isosurface extraction, transfer function design",
                "AminerCitationCount": 41,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 681,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1659,
                "i": [
                    1659
                ]
            }
        },
        {
            "name": "Charles Gueunet",
            "value": 79,
            "numPapers": 18,
            "cluster": "11",
            "visible": 1,
            "index": 599,
            "x": 72.20177496105934,
            "y": -233.95919236583237,
            "vy": 0,
            "vx": 0,
            "r": 1.0909614277489925,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "The Topology ToolKit",
                "DOI": "10.1109/tvcg.2017.2743938",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743938",
                "FirstPage": 832,
                "LastPage": 842,
                "PaperType": "J",
                "Abstract": "This system paper presents the Topology ToolKit (TTK), a software platform designed for the topological analysis of scalar data in scientific visualization. While topological data analysis has gained in popularity over the last two decades, it has not yet been widely adopted as a standard data analysis tool for end users or developers. TTK aims at addressing this problem by providing a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping or through direct, dependency-free, C++, to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies to the critical points extracted in the piecewise-linear setting. This algorithm guarantees a combinatorial consistency across the topological abstractions supported by TTK, and importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure, that supports time efficient and generic traversals, which self-adjusts its memory usage on demand for input simplicial meshes and which implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture, which guarantees memory efficient and direct accesses to TTK features, while still allowing for researchers powerful and easy bindings and extensions. TTK is open source (BSD license) and its code. online documentation and video tutorials are available on TTK's website [108].",
                "AuthorNamesDeduped": "Julien Tierny;Guillaume Favelier;Joshua A. Levine;Charles Gueunet;Michael Michaux",
                "AuthorNames": "Julien Tierny;Guillaume Favelier;Joshua A. Levine;Charles Gueunet;Michael Michaux",
                "AuthorAffiliation": "Sorbonne Universites, LIP6 UMR 7606, France;Sorbonne Universites, LIP6 UMR 7606, France;University of Arizona, USA;Kitware SAS, France;Sorbonne Universites, LIP6 UMR 7606, France",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/visual.2005.1532788;10.1109/visual.2004.96;10.1109/tvcg.2014.2346322;10.1109/tvcg.2010.213;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2012.209;10.1109/tvcg.2014.2346434;10.1109/tvcg.2015.2467432;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467449;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346318;10.1109/tvcg.2014.2346332;10.1109/tvcg.2016.2599017;10.1109/tvcg.2009.163;10.1109/tvcg.2012.228;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Topological data analysis,scalar data,data segmentation,feature extraction,bivariate data,uncertain data",
                "AminerCitationCount": 186,
                "CitationCountCrossRef": 116,
                "PubsCitedCrossRef": 118,
                "DownloadsXplore": 1492,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 813,
                "i": [
                    813
                ]
            }
        },
        {
            "name": "Michael Michaux",
            "value": 79,
            "numPapers": 18,
            "cluster": "11",
            "visible": 1,
            "index": 600,
            "x": 104.88518983480783,
            "y": 221.47030715948426,
            "vy": 0,
            "vx": 0,
            "r": 1.0909614277489925,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "The Topology ToolKit",
                "DOI": "10.1109/tvcg.2017.2743938",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743938",
                "FirstPage": 832,
                "LastPage": 842,
                "PaperType": "J",
                "Abstract": "This system paper presents the Topology ToolKit (TTK), a software platform designed for the topological analysis of scalar data in scientific visualization. While topological data analysis has gained in popularity over the last two decades, it has not yet been widely adopted as a standard data analysis tool for end users or developers. TTK aims at addressing this problem by providing a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping or through direct, dependency-free, C++, to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies to the critical points extracted in the piecewise-linear setting. This algorithm guarantees a combinatorial consistency across the topological abstractions supported by TTK, and importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure, that supports time efficient and generic traversals, which self-adjusts its memory usage on demand for input simplicial meshes and which implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture, which guarantees memory efficient and direct accesses to TTK features, while still allowing for researchers powerful and easy bindings and extensions. TTK is open source (BSD license) and its code. online documentation and video tutorials are available on TTK's website [108].",
                "AuthorNamesDeduped": "Julien Tierny;Guillaume Favelier;Joshua A. Levine;Charles Gueunet;Michael Michaux",
                "AuthorNames": "Julien Tierny;Guillaume Favelier;Joshua A. Levine;Charles Gueunet;Michael Michaux",
                "AuthorAffiliation": "Sorbonne Universites, LIP6 UMR 7606, France;Sorbonne Universites, LIP6 UMR 7606, France;University of Arizona, USA;Kitware SAS, France;Sorbonne Universites, LIP6 UMR 7606, France",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/visual.2005.1532788;10.1109/visual.2004.96;10.1109/tvcg.2014.2346322;10.1109/tvcg.2010.213;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2012.209;10.1109/tvcg.2014.2346434;10.1109/tvcg.2015.2467432;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467449;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346318;10.1109/tvcg.2014.2346332;10.1109/tvcg.2016.2599017;10.1109/tvcg.2009.163;10.1109/tvcg.2012.228;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Topological data analysis,scalar data,data segmentation,feature extraction,bivariate data,uncertain data",
                "AminerCitationCount": 186,
                "CitationCountCrossRef": 116,
                "PubsCitedCrossRef": 118,
                "DownloadsXplore": 1492,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 813,
                "i": [
                    813
                ]
            }
        },
        {
            "name": "Bei Wang 0001",
            "value": 117,
            "numPapers": 41,
            "cluster": "11",
            "visible": 1,
            "index": 601,
            "x": -227.1289984054976,
            "y": -92.53333498429346,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Persistent Homology Guided Force-Directed Graph Layouts",
                "DOI": "10.1109/tvcg.2019.2934802",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934802",
                "FirstPage": 697,
                "LastPage": 707,
                "PaperType": "J",
                "Abstract": "Graphs are commonly used to encode relationships among entities, yet their abstractness makes them difficult to analyze. Node-link diagrams are popular for drawing graphs, and force-directed layouts provide a flexible method for node arrangements that use local relationships in an attempt to reveal the global shape of the graph. However, clutter and overlap of unrelated structures can lead to confusing graph visualizations. This paper leverages the persistent homology features of an undirected graph as derived information for interactive manipulation of force-directed layouts. We first discuss how to efficiently extract 0-dimensional persistent homology features from both weighted and unweighted undirected graphs. We then introduce the interactive persistence barcode used to manipulate the force-directed graph layout. In particular, the user adds and removes contracting and repulsing forces generated by the persistent homology features, eventually selecting the set of persistent homology features that most improve the layout. Finally, we demonstrate the utility of our approach across a variety of synthetic and real datasets.",
                "AuthorNamesDeduped": "Ashley Suh 0001;Mustafa Hajij;Bei Wang 0001;Carlos Scheidegger;Paul Rosen 0001",
                "AuthorNames": "Ashley Suh;Mustafa Hajij;Bei Wang;Carlos Scheidegger;Paul Rosen",
                "AuthorAffiliation": "University of South Florida, Tufts University;Ohio State University;University of Utah;University of Arizona;University of South Florida",
                "InternalReferences": "0.1109/tvcg.2016.2598958;10.1109/tvcg.2009.122;10.1109/tvcg.2012.208;10.1109/tvcg.2013.151;10.1109/tvcg.2011.223;10.1109/infvis.2002.1173159;10.1109/tvcg.2008.158;10.1109/tvcg.2017.2744321;10.1109/tvcg.2011.190;10.1109/tvcg.2014.2346441;10.1109/tvcg.2017.2745919;10.1109/tvcg.2018.2864911;10.1109/infvis.2003.1249008",
                "AuthorKeywords": "Graph drawing,force-directed layout,Topological Data Analysis,persistent homology",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 882,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 553,
                "i": [
                    553
                ]
            }
        },
        {
            "name": "Yifan Wu",
            "value": 24,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 602,
            "x": 230.17431976946744,
            "y": -85.26301964311926,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Causal Support: Modeling Causal Inferences with Visualizations",
                "DOI": "10.1109/tvcg.2021.3114824",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114824",
                "FirstPage": 1150,
                "LastPage": 1160,
                "PaperType": "J",
                "Abstract": "Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual “insights”. We formally evaluate the quality of causal inferences from visualizations by adopting <i>causal support</i>—a Bayesian cognition model that learns the probability of alternative causal explanations given some data—as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users' causal inferences tend to be insensitive to sample size such that they deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts' mental models more explicit in VA software.",
                "AuthorNamesDeduped": "Alex Kale;Yifan Wu;Jessica Hullman",
                "AuthorNames": "Alex Kale;Yifan Wu;Jessica Hullman",
                "AuthorAffiliation": "University of Washington, USA;University of California at Berkeley, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2003.1249025;10.1109/tvcg.2019.2934287;10.1109/tvcg.2020.3030465;10.1109/tvcg.2007.70528;10.1109/tvcg.2020.3030335;10.1109/tvcg.2015.2467758;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2020.3028957",
                "AuthorKeywords": "Causal inference,visualization,contingency tables,data cognition",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 1042,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 321,
                "i": [
                    321
                ]
            }
        },
        {
            "name": "Michalis Mamakos",
            "value": 0,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 603,
            "x": -112.22205226656453,
            "y": 218.53194499907895,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Rational Agent Benchmark for Data Visualization",
                "DOI": "10.1109/tvcg.2023.3326513",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326513",
                "FirstPage": 338,
                "LastPage": 347,
                "PaperType": "J",
                "Abstract": "Understanding how helpful a visualization is from experimental results is difficult because the observed performance is confounded with aspects of the study design, such as how useful the information that is visualized is for the task. We develop a rational agent framework for designing and interpreting visualization experiments. Our framework conceives two experiments with the same setup: one with behavioral agents (human subjects), and the other one with a hypothetical rational agent. A visualization is evaluated by comparing the expected performance of behavioral agents to that of a rational agent under different assumptions. Using recent visualization decision studies from the literature, we demonstrate how the framework can be used to pre-experimentally evaluate the experiment design by bounding the expected improvement in performance from having access to visualizations, and post-experimentally to deconfound errors of information extraction from errors of optimization, among other analyses.",
                "AuthorNamesDeduped": "Yifan Wu;Ziyang Guo;Michalis Mamakos;Jason D. Hartline;Jessica Hullman",
                "AuthorNames": "Yifan Wu;Ziyang Guo;Michalis Mamakos;Jason Hartline;Jessica Hullman",
                "AuthorAffiliation": "Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030395;10.1109/tvcg.2019.2934287;10.1109/tvcg.2018.2864889;10.1109/tvcg.2013.126;10.1109/tvcg.2023.3326516;10.1109/tvcg.2020.3030335;10.1109/tvcg.2021.3114824;10.1109/tvcg.2020.3028984;10.1109/tvcg.2009.111;10.1109/visual.2005.1532781",
                "AuthorKeywords": "Evaluation,decision-making,rational agent,scoring rule",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 172,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 116,
                "i": [
                    116
                ]
            }
        },
        {
            "name": "Jason D. Hartline",
            "value": 0,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 604,
            "x": -64.92087928127472,
            "y": -237.13978880260936,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "The Rational Agent Benchmark for Data Visualization",
                "DOI": "10.1109/tvcg.2023.3326513",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326513",
                "FirstPage": 338,
                "LastPage": 347,
                "PaperType": "J",
                "Abstract": "Understanding how helpful a visualization is from experimental results is difficult because the observed performance is confounded with aspects of the study design, such as how useful the information that is visualized is for the task. We develop a rational agent framework for designing and interpreting visualization experiments. Our framework conceives two experiments with the same setup: one with behavioral agents (human subjects), and the other one with a hypothetical rational agent. A visualization is evaluated by comparing the expected performance of behavioral agents to that of a rational agent under different assumptions. Using recent visualization decision studies from the literature, we demonstrate how the framework can be used to pre-experimentally evaluate the experiment design by bounding the expected improvement in performance from having access to visualizations, and post-experimentally to deconfound errors of information extraction from errors of optimization, among other analyses.",
                "AuthorNamesDeduped": "Yifan Wu;Ziyang Guo;Michalis Mamakos;Jason D. Hartline;Jessica Hullman",
                "AuthorNames": "Yifan Wu;Ziyang Guo;Michalis Mamakos;Jason Hartline;Jessica Hullman",
                "AuthorAffiliation": "Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3030395;10.1109/tvcg.2019.2934287;10.1109/tvcg.2018.2864889;10.1109/tvcg.2013.126;10.1109/tvcg.2023.3326516;10.1109/tvcg.2020.3030335;10.1109/tvcg.2021.3114824;10.1109/tvcg.2020.3028984;10.1109/tvcg.2009.111;10.1109/visual.2005.1532781",
                "AuthorKeywords": "Evaluation,decision-making,rational agent,scoring rule",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 172,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 116,
                "i": [
                    116
                ]
            }
        },
        {
            "name": "Guoning Chen",
            "value": 37,
            "numPapers": 47,
            "cluster": "11",
            "visible": 1,
            "index": 605,
            "x": 208.22828007392064,
            "y": 131.11439042857523,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Extract and Characterize Hairpin Vortices in Turbulent Flows",
                "DOI": "10.1109/tvcg.2023.3326603",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326603",
                "FirstPage": 716,
                "LastPage": 726,
                "PaperType": "J",
                "Abstract": "Hairpin vortices are one of the most important vortical structures in turbulent flows. Extracting and characterizing hairpin vortices provides useful insight into many behaviors in turbulent flows. However, hairpin vortices have complex configurations and might be entangled with other vortices, making their extraction difficult. In this work, we introduce a framework to extract and separate hairpin vortices in shear driven turbulent flows for their study. Our method first extracts general vortical regions with a region-growing strategy based on certain vortex criteria (e.g., $\\lambda_{2}$) and then separates those vortices with the help of progressive extraction of ($\\lambda_{2}$) iso-surfaces in a top-down fashion. This leads to a hierarchical tree representing the spatial proximity and merging relation of vortices. After separating individual vortices, their shape and orientation information is extracted. Candidate hairpin vortices are identified based on their shape and orientation information as well as their physical characteristics. An interactive visualization system is developed to aid the exploration, classification, and analysis of hairpin vortices based on their geometric and physical attributes. We also present additional use cases of the proposed system for the analysis and study of general vortices in other types of flows.",
                "AuthorNamesDeduped": "Adeel Zafar;Di Yang;Guoning Chen",
                "AuthorNames": "Adeel Zafar;Di Yang;Guoning Chen",
                "AuthorAffiliation": "University of Houston, USA;University of Houston, USA;University of Houston, USA",
                "InternalReferences": "10.1109/visual.1994.346327;10.1109/tvcg.2018.2864817;10.1109/tvcg.2018.2864839;10.1109/tvcg.2020.3028892;10.1109/tvcg.2019.2934367;10.1109/visual.1999.809896;10.1109/tvcg.2019.2934375;10.1109/tvcg.2008.143;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.198",
                "AuthorKeywords": "Turbulent flow,vortices,hairpin vortex extraction",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 165,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 120,
                "i": [
                    120
                ]
            }
        },
        {
            "name": "Lin Yan 0003",
            "value": 25,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 606,
            "x": -242.30739261735113,
            "y": 44.012810441743646,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "A Structural Average of Labeled Merge Trees for Uncertainty Visualization",
                "DOI": "10.1109/tvcg.2019.2934242",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934242",
                "FirstPage": 832,
                "LastPage": 842,
                "PaperType": "J",
                "Abstract": "Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.",
                "AuthorNamesDeduped": "Lin Yan 0003;Yusu Wang 0001;Elizabeth Munch;Ellen Gasparovic;Bei Wang 0001",
                "AuthorNames": "Lin Yan;Yusu Wang;Elizabeth Munch;Ellen Gasparovic;Bei Wang",
                "AuthorAffiliation": "University of Utah;Ohio State University;Michigan State University;Union College;University of Utah",
                "InternalReferences": "0.1109/visual.1997.663875;10.1109/visual.2002.1183774;10.1109/tvcg.2009.114;10.1109/tvcg.2010.181;10.1109/tvcg.2017.2743938;10.1109/tvcg.2007.70601;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Topological data analysis,uncertainty visualization,merge trees",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 698,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 574,
                "i": [
                    574
                ]
            }
        },
        {
            "name": "Hyeok Kim",
            "value": 9,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 607,
            "x": 149.0624396325761,
            "y": -196.2915920022674,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "An Automated Approach to Reasoning About Task-Oriented Insights in Responsive Visualization",
                "DOI": "10.1109/tvcg.2021.3114782",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114782",
                "FirstPage": 129,
                "LastPage": 139,
                "PaperType": "J",
                "Abstract": "Authors often transform a large screen visualization for smaller displays through rescaling, aggregation and other techniques when creating visualizations for both desktop and mobile devices (i.e., responsive visualization). However, transformations can alter relationships or patterns implied by the large screen view, requiring authors to reason carefully about what information to preserve while adjusting their design for the smaller display. We propose an automated approach to approximating the loss of support for task-oriented visualization insights (identification, comparison, and trend) in responsive transformation of a source visualization. We operationalize identification, comparison, and trend loss as objective functions calculated by comparing properties of the rendered source visualization to each realized target (small screen) visualization. To evaluate the utility of our approach, we train machine learning models on human ranked small screen alternative visualizations across a set of source visualizations. We find that our approach achieves an accuracy of 84% (random forest model) in ranking visualizations. We demonstrate this approach in a prototype responsive visualization recommender that enumerates responsive transformations using Answer Set Programming and evaluates the preservation of task-oriented insights using our loss measures. We discuss implications of our approach for the development of automated and semi-automated responsive visualization recommendation.",
                "AuthorNamesDeduped": "Hyeok Kim;Ryan A. Rossi;Abhraneel Sarma;Dominik Moritz;Jessica Hullman",
                "AuthorNames": "Hyeok Kim;Ryan Rossi;Abhraneel Sarma;Dominik Moritz;Jessica Hullman",
                "AuthorAffiliation": "Northwestern University, USA;Adobe Research, USA;Northwestern University, USA;Carnegie Mellon University, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2018.2865142;10.1109/tvcg.2019.2934397;10.1109/tvcg.2013.124;10.1109/tvcg.2006.161;10.1109/tvcg.2014.2346978;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2013.163;10.1109/tvcg.2014.2346325;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744359;10.1109/tvcg.2019.2934432;10.1109/infvis.2003.1249005;10.1109/tvcg.2020.3030423;10.1109/tvcg.2009.153",
                "AuthorKeywords": "Task-oriented insight preservation,responsive visualization",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 664,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 329,
                "i": [
                    329
                ]
            }
        },
        {
            "name": "G. Elisabeta Marai",
            "value": 165,
            "numPapers": 96,
            "cluster": "0",
            "visible": 1,
            "index": 608,
            "x": 22.697719586191276,
            "y": 245.63145874579385,
            "vy": 0,
            "vx": 0,
            "r": 1.1899827288428324,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Carla Floricel",
            "value": 22,
            "numPapers": 54,
            "cluster": "0",
            "visible": 1,
            "index": 609,
            "x": -182.80828571652197,
            "y": -165.92507548103293,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Neil Spring",
            "value": 116,
            "numPapers": 5,
            "cluster": "0",
            "visible": 1,
            "index": 610,
            "x": 247.0803605559531,
            "y": -1.1381685025568122,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "EventAction: Visual analytics for temporal event sequence recommendation",
                "DOI": "10.1109/vast.2016.7883512",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883512",
                "FirstPage": 61,
                "LastPage": 70,
                "PaperType": "C",
                "Abstract": "Recommender systems are being widely used to assist people in making decisions, for example, recommending films to watch or books to buy. Despite its ubiquity, the problem of presenting the recommendations of temporal event sequences has not been studied. We propose EventAction, which to our knowledge, is the first attempt at a prescriptive analytics interface designed to present and explain recommendations of temporal event sequences. EventAction provides a visual analytics approach to (1) identify similar records, (2) explore potential outcomes, (3) review recommended temporal event sequences that might help achieve the users' goals, and (4) interactively assist users as they define a personalized action plan associated with a probability of success. Following the design study framework, we designed and deployed EventAction in the context of student advising and reported on the evaluation with a student review manager and three graduate students.",
                "AuthorNamesDeduped": "Fan Du;Catherine Plaisant;Neil Spring;Ben Shneiderman",
                "AuthorNames": "Fan Du;Catherine Plaisant;Neil Spring;Ben Shneiderman",
                "AuthorAffiliation": "University of Maryland;University of Maryland;University of Maryland;University of Maryland",
                "InternalReferences": "0.1109/tvcg.2009.187;10.1109/tvcg.2012.225;10.1109/tvcg.2012.213;10.1109/tvcg.2015.2467622;10.1109/tvcg.2014.2346682",
                "AuthorKeywords": null,
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 970,
                "i": [
                    970
                ]
            }
        },
        {
            "name": "Andrew Wentzel",
            "value": 37,
            "numPapers": 45,
            "cluster": "0",
            "visible": 1,
            "index": 611,
            "x": -181.56906926251685,
            "y": 167.87695817813523,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Abdallah Sherif Radwan Mohamed",
            "value": 22,
            "numPapers": 34,
            "cluster": "0",
            "visible": 1,
            "index": 612,
            "x": 20.5008672545478,
            "y": -246.6368067458939,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Clifton David Fuller",
            "value": 22,
            "numPapers": 34,
            "cluster": "0",
            "visible": 1,
            "index": 613,
            "x": 151.60767770830012,
            "y": 195.870140807363,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Guadalupe Canahuate",
            "value": 37,
            "numPapers": 45,
            "cluster": "0",
            "visible": 1,
            "index": 614,
            "x": -244.29802109182216,
            "y": -42.053262544297546,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Timothy Luciani",
            "value": 42,
            "numPapers": 24,
            "cluster": "0",
            "visible": 1,
            "index": 615,
            "x": 208.71394563253824,
            "y": -134.1211724467762,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Details-First, Show Context, Overview Last: Supporting Exploration of Viscous Fingers in Large-Scale Ensemble Simulations",
                "DOI": "10.1109/tvcg.2018.2864849",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864849",
                "FirstPage": 1225,
                "LastPage": 1235,
                "PaperType": "J",
                "Abstract": "Visualization research often seeks designs that first establish an overview of the data, in accordance to the information seeking mantra: “Overview first, zoom and filter, then details on demand”. However, in computational fluid dynamics (CFD), as well as in other domains, there are many situations where such a spatial overview is not relevant or practical for users, for example when the experts already have a good mental overview of the data, or when an analysis of a large overall structure may not be related to the specific, information-driven tasks of users. Using scientific workflow theory and, as a vehicle, the problem of viscous finger evolution, we advocate an alternative model that allows domain experts to explore features of interest first, then explore the context around those features, and finally move to a potentially unfamiliar summarization overview. In a model instantiation, we show how a computational back-end can identify and track over time low-level, small features, then be used to filter the context of those features while controlling the complexity of the visualization, and finally to summarize and compare simulations. We demonstrate the effectiveness of this approach with an online web-based exploration of a total volume of data approaching half a billion seven-dimensional data points, and report supportive feedback provided by domain experts with respect to both the instantiation and the theoretical model.",
                "AuthorNamesDeduped": "Timothy Luciani;Andrew Burks;Cassiano Sugiyama;Jonathan Komperda;G. Elisabeta Marai",
                "AuthorNames": "Timothy Luciani;Andrew Burks;Cassiano Sugiyama;Jonathan Komperda;G. Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois System, Urbana, IL, US;University of Illinois System, Urbana, IL, US;Favo Urban Agriculture, Brazil;University of Illinois System, Urbana, IL, US;University of Illinois System, Urbana, IL, US",
                "InternalReferences": "0.1109/tvcg.2007.70599;10.1109/tvcg.2014.2346448;10.1109/tvcg.2015.2466838;10.1109/tvcg.2015.2468093;10.1109/tvcg.2009.141;10.1109/tvcg.2011.209;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.161;10.1109/tvcg.2014.2346744;10.1109/vast.2006.261451;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2009.108;10.1109/tvcg.2008.140",
                "AuthorKeywords": "theory,visualization design,details-first model,discourse paper,computational fluid dynamics",
                "AminerCitationCount": 25,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 1225,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 697,
                "i": [
                    697
                ]
            }
        },
        {
            "name": "Andrew Burks",
            "value": 36,
            "numPapers": 18,
            "cluster": "0",
            "visible": 1,
            "index": 616,
            "x": -63.353019740468376,
            "y": 240.0758107135407,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Details-First, Show Context, Overview Last: Supporting Exploration of Viscous Fingers in Large-Scale Ensemble Simulations",
                "DOI": "10.1109/tvcg.2018.2864849",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864849",
                "FirstPage": 1225,
                "LastPage": 1235,
                "PaperType": "J",
                "Abstract": "Visualization research often seeks designs that first establish an overview of the data, in accordance to the information seeking mantra: “Overview first, zoom and filter, then details on demand”. However, in computational fluid dynamics (CFD), as well as in other domains, there are many situations where such a spatial overview is not relevant or practical for users, for example when the experts already have a good mental overview of the data, or when an analysis of a large overall structure may not be related to the specific, information-driven tasks of users. Using scientific workflow theory and, as a vehicle, the problem of viscous finger evolution, we advocate an alternative model that allows domain experts to explore features of interest first, then explore the context around those features, and finally move to a potentially unfamiliar summarization overview. In a model instantiation, we show how a computational back-end can identify and track over time low-level, small features, then be used to filter the context of those features while controlling the complexity of the visualization, and finally to summarize and compare simulations. We demonstrate the effectiveness of this approach with an online web-based exploration of a total volume of data approaching half a billion seven-dimensional data points, and report supportive feedback provided by domain experts with respect to both the instantiation and the theoretical model.",
                "AuthorNamesDeduped": "Timothy Luciani;Andrew Burks;Cassiano Sugiyama;Jonathan Komperda;G. Elisabeta Marai",
                "AuthorNames": "Timothy Luciani;Andrew Burks;Cassiano Sugiyama;Jonathan Komperda;G. Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois System, Urbana, IL, US;University of Illinois System, Urbana, IL, US;Favo Urban Agriculture, Brazil;University of Illinois System, Urbana, IL, US;University of Illinois System, Urbana, IL, US",
                "InternalReferences": "0.1109/tvcg.2007.70599;10.1109/tvcg.2014.2346448;10.1109/tvcg.2015.2466838;10.1109/tvcg.2015.2468093;10.1109/tvcg.2009.141;10.1109/tvcg.2011.209;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.161;10.1109/tvcg.2014.2346744;10.1109/vast.2006.261451;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2009.108;10.1109/tvcg.2008.140",
                "AuthorKeywords": "theory,visualization design,details-first model,discourse paper,computational fluid dynamics",
                "AminerCitationCount": 25,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 1225,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 697,
                "i": [
                    697
                ]
            }
        },
        {
            "name": "Adrian Maries",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 617,
            "x": -115.54793392484342,
            "y": -219.996988537798,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "Nathan Mays",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 618,
            "x": 233.99652197803647,
            "y": 84.23554892195031,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "MeganOlson Hunt",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 619,
            "x": -229.6274563600499,
            "y": 96.02724241387631,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "Kim F. Wong",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 620,
            "x": 104.53894126791899,
            "y": -226.1008840287507,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "William J. Layton",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 621,
            "x": 75.70612133697036,
            "y": 237.5259631958408,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "Robert Boudreau",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 622,
            "x": -216.4437407975997,
            "y": -124.10522579465169,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "Caterina Rosano",
            "value": 47,
            "numPapers": 7,
            "cluster": "0",
            "visible": 1,
            "index": 623,
            "x": 243.62620205853662,
            "y": -54.738228602440905,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "GRACE: A Visual Comparison Framework for Integrated Spatial and Non-Spatial Geriatric Data",
                "DOI": "10.1109/tvcg.2013.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.161",
                "FirstPage": 2916,
                "LastPage": 2925,
                "PaperType": "J",
                "Abstract": "We present the design of a novel framework for the visual integration, comparison, and exploration of correlations in spatial and non-spatial geriatric research data. These data are in general high-dimensional and span both the spatial, volumetric domain - through magnetic resonance imaging volumes - and the non-spatial domain, through variables such as age, gender, or walking speed. The visual analysis framework blends medical imaging, mathematical analysis and interactive visualization techniques, and includes the adaptation of Sparse Partial Least Squares and iterated Tikhonov Regularization algorithms to quantify potential neurologymobility connections. A linked-view design geared specifically at interactive visual comparison integrates spatial and abstract visual representations to enable the users to effectively generate and refine hypotheses in a large, multidimensional, and fragmented space. In addition to the domain analysis and design description, we demonstrate the usefulness of this approach on two case studies. Last, we report the lessons learned through the iterative design and evaluation of our approach, in particular those relevant to the design of comparative visualization of spatial and non-spatial data.",
                "AuthorNamesDeduped": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William J. Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorNames": "Adrian Maries;Nathan Mays;MeganOlson Hunt;Kim F. Wong;William Layton;Robert Boudreau;Caterina Rosano;G. Elisabeta Marai",
                "AuthorAffiliation": "Department of Computer Science, University of Pittsburgh, USA;Department of Mathematics, Wheeling Jesuit University, USA;The Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, USA;Center for Simulation and Modeling, University of Pittsburgh, USA;Department of Mathematics, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;The Department of Epidemiology, Graduate School of Public Health, University of Pittsburgh, USA;Department of Computer Science, University of Pittsburgh, USA",
                "InternalReferences": "0.1109/tvcg.2009.141;10.1109/visual.2000.885739;10.1109/vast.2006.261438;10.1109/tvcg.2009.111;10.1109/tvcg.2010.137;10.1109/tvcg.2009.114;10.1109/visual.1991.175815;10.1109/tvcg.2010.162",
                "AuthorKeywords": "Design studies, methodology design, task and requirements analysis, integrating spatial and non-spatial data visualization, visual comparison, high-dimensional data, applications of visualization",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 639,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1354,
                "i": [
                    1354
                ]
            }
        },
        {
            "name": "Guoxi Liu",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 624,
            "x": -142.78159966947092,
            "y": 205.09367322232774,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Task-Parallel Approach for Localized Topological Data Structures",
                "DOI": "10.1109/tvcg.2023.3327182",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327182",
                "FirstPage": 1271,
                "LastPage": 1281,
                "PaperType": "J",
                "Abstract": "Unstructured meshes are characterized by data points irregularly distributed in the Euclidian space. Due to the irregular nature of these data, computing connectivity information between the mesh elements requires much more time and memory than on uniformly distributed data. To lower storage costs, dynamic data structures have been proposed. These data structures compute connectivity information on the fly and discard them when no longer needed. However, on-the-fly computation slows down algorithms and results in a negative impact on the time performance. To address this issue, we propose a new task-parallel approach to proactively compute mesh connectivity. Unlike previous approaches implementing data-parallel models, where all threads run the same type of instructions, our task-parallel approach allows threads to run different functions. Specifically, some threads run the algorithm of choice while other threads compute connectivity information before they are actually needed. The approach was implemented in the new Accelerated Clustered TOPOlogical (ACTOPO) data structure, which can support any processing algorithm requiring mesh connectivity information. Our experiments show that ACTOPO combines the benefits of state-of-the-art memory-efficient (TTK CompactTriangulation) and time-efficient (TTK ExplicitTriangulation) topological data structures. It occupies a similar amount of memory as TTK CompactTriangulation while providing up to 5x speedup. Moreover, it achieves comparable time performance as TTK ExplicitTriangulation while using only half of the memory space.",
                "AuthorNamesDeduped": "Guoxi Liu;Federico Iuricich",
                "AuthorNames": "Guoxi Liu;Federico Iuricich",
                "AuthorAffiliation": "School of Computing, Clemson University, USA;School of Computing, Clemson University, USA",
                "InternalReferences": "10.1109/tvcg.2008.110;10.1109/tvcg.2012.209;10.1109/tvcg.2018.2864848;10.1109/tvcg.2014.2346434;10.1109/tvcg.2019.2934257;10.1109/tvcg.2021.3114839;10.1109/tvcg.2017.2743938;10.1109/tvcg.2021.3114869",
                "AuthorKeywords": "Data structures,parallel computation,topological data analysis,simplicial complex",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 157,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 124,
                "i": [
                    124
                ]
            }
        },
        {
            "name": "Attila Gyulassy",
            "value": 411,
            "numPapers": 71,
            "cluster": "11",
            "visible": 1,
            "index": 625,
            "x": -33.2826932302585,
            "y": -247.87549764214393,
            "vy": 0,
            "vx": 0,
            "r": 1.4732297063903281,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Federico Iuricich",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 626,
            "x": 192.1325142465608,
            "y": 160.42162251172726,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Task-Parallel Approach for Localized Topological Data Structures",
                "DOI": "10.1109/tvcg.2023.3327182",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327182",
                "FirstPage": 1271,
                "LastPage": 1281,
                "PaperType": "J",
                "Abstract": "Unstructured meshes are characterized by data points irregularly distributed in the Euclidian space. Due to the irregular nature of these data, computing connectivity information between the mesh elements requires much more time and memory than on uniformly distributed data. To lower storage costs, dynamic data structures have been proposed. These data structures compute connectivity information on the fly and discard them when no longer needed. However, on-the-fly computation slows down algorithms and results in a negative impact on the time performance. To address this issue, we propose a new task-parallel approach to proactively compute mesh connectivity. Unlike previous approaches implementing data-parallel models, where all threads run the same type of instructions, our task-parallel approach allows threads to run different functions. Specifically, some threads run the algorithm of choice while other threads compute connectivity information before they are actually needed. The approach was implemented in the new Accelerated Clustered TOPOlogical (ACTOPO) data structure, which can support any processing algorithm requiring mesh connectivity information. Our experiments show that ACTOPO combines the benefits of state-of-the-art memory-efficient (TTK CompactTriangulation) and time-efficient (TTK ExplicitTriangulation) topological data structures. It occupies a similar amount of memory as TTK CompactTriangulation while providing up to 5x speedup. Moreover, it achieves comparable time performance as TTK ExplicitTriangulation while using only half of the memory space.",
                "AuthorNamesDeduped": "Guoxi Liu;Federico Iuricich",
                "AuthorNames": "Guoxi Liu;Federico Iuricich",
                "AuthorAffiliation": "School of Computing, Clemson University, USA;School of Computing, Clemson University, USA",
                "InternalReferences": "10.1109/tvcg.2008.110;10.1109/tvcg.2012.209;10.1109/tvcg.2018.2864848;10.1109/tvcg.2014.2346434;10.1109/tvcg.2019.2934257;10.1109/tvcg.2021.3114839;10.1109/tvcg.2017.2743938;10.1109/tvcg.2021.3114869",
                "AuthorKeywords": "Data structures,parallel computation,topological data analysis,simplicial complex",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 157,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 124,
                "i": [
                    124
                ]
            }
        },
        {
            "name": "Xavier Tricoche",
            "value": 358,
            "numPapers": 73,
            "cluster": "11",
            "visible": 1,
            "index": 627,
            "x": -250.2352556124969,
            "y": 11.502906091434252,
            "vy": 0,
            "vx": 0,
            "r": 1.4122049510650547,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Tracking of vector field singularities in unstructured 3D time-dependent datasets",
                "DOI": "10.1109/visual.2004.107",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.107",
                "FirstPage": 329,
                "LastPage": 336,
                "PaperType": "C",
                "Abstract": "We present an approach for monitoring the positions of vector field singularities and related structural changes in time-dependent datasets. The concept of singularity index is discussed and extended from the well-understood planar case to the more intricate three-dimensional setting. Assuming a tetrahedral grid with linear interpolation in space and time, vector field singularities obey rules imposed by fundamental invariants (Poincare index), which we use as a basis for an efficient tracking algorithm. We apply the presented algorithm to CFD datasets to illustrate its purpose. We examine structures that exhibit topological variations with time and describe some of the insight gained with our method. Examples are given that show a correlation in the evolution of physical quantities that play a role in vortex breakdown.",
                "AuthorNamesDeduped": "Christoph Garth;Xavier Tricoche;Gerik Scheuermann",
                "AuthorNames": "C. Garth;X. Tricoche;G. Scheuermann",
                "AuthorAffiliation": "Department of Computer Science, University of Kaiserslautern, Germany;Scientific Computing and Imaging Institute, University of Utah, USA;Institute of Computer Science, University of Leipzig, Germany",
                "InternalReferences": "0.1109/visual.2003.1250376;10.1109/visual.2002.1183786;10.1109/visual.1991.175773;10.1109/visual.1997.663910",
                "AuthorKeywords": "flow visualization, topology tracking, time-dependent datasets, vortex breakdown",
                "AminerCitationCount": 133,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 203,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2518,
                "i": [
                    2518
                ]
            }
        },
        {
            "name": "Jonas Lukasczyk",
            "value": 33,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 628,
            "x": 176.8863653399676,
            "y": -177.65476001733114,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Dynamic Nested Tracking Graphs",
                "DOI": "10.1109/tvcg.2019.2934368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934368",
                "FirstPage": 249,
                "LastPage": 258,
                "PaperType": "J",
                "Abstract": "This work describes an approach for the interactive visual analysis of large-scale simulations, where numerous superlevel set components and their evolution are of primary interest. The approach first derives, at simulation runtime, a specialized Cinema database that consists of images of component groups, and topological abstractions. This database is processed by a novel graph operation-based nested tracking graph algorithm (GO-NTG) that dynamically computes NTGs for component groups based on size, overlap, persistence, and level thresholds. The resulting NTGs are in turn used in a feature-centered visual analytics framework to query specific database elements and update feature parameters, facilitating flexible post hoc analysis.",
                "AuthorNamesDeduped": "Jonas Lukasczyk;Christoph Garth;Gunther H. Weber;Tim Biedert;Ross Maciejewski;Heike Leitte",
                "AuthorNames": "Jonas Lukasczyk;Christoph Garth;Gunther H. Weber;Tim Biedert;Ross Maciejewski;Heike Leitte",
                "AuthorAffiliation": "Technische Universität Kaiserslautern;Technische Universität Kaiserslautern;Lawrence Berkeley National Laboratory, University of California, Davis;NVIDIA Corporation;Arizona State University;Technische Universität Kaiserslautern",
                "InternalReferences": "0.1109/tvcg.2018.2865265;10.1109/visual.1998.745288;10.1109/tvcg.2012.228",
                "AuthorKeywords": "Topological Data Analysis,Nested Tracking Graphs,Image Databases,Feature Tracking,Post Hoc Visual Analytics",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 643,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 577,
                "i": [
                    577
                ]
            }
        },
        {
            "name": "Shih-Hsuan Hung",
            "value": 14,
            "numPapers": 30,
            "cluster": "11",
            "visible": 1,
            "index": 629,
            "x": -10.434726037640878,
            "y": 250.68130463303274,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Global Topology of 3D Symmetric Tensor Fields",
                "DOI": "10.1109/tvcg.2023.3326933",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326933",
                "FirstPage": 1282,
                "LastPage": 1291,
                "PaperType": "J",
                "Abstract": "There have been recent advances in the analysis and visualization of 3D symmetric tensor fields, with a focus on the robust extraction of tensor field topology. However, topological features such as degenerate curves and neutral surfaces do not live in isolation. Instead, they intriguingly interact with each other. In this paper, we introduce the notion of topological graph for 3D symmetric tensor fields to facilitate global topological analysis of such fields. The nodes of the graph include degenerate curves and regions bounded by neutral surfaces in the domain. The edges in the graph denote the adjacency information between the regions and degenerate curves. In addition, we observe that a degenerate curve can be a loop and even a knot and that two degenerate curves (whether in the same region or not) can form a link. We provide a definition and theoretical analysis of individual degenerate curves in order to help understand why knots and links may occur. Moreover, we differentiate between wedges and trisectors, thus making the analysis more detailed about degenerate curves. We incorporate this information into the topological graph. Such a graph can not only reveal the global structure in a 3D symmetric tensor field but also allow two symmetric tensor fields to be compared. We demonstrate our approach by applying it to solid mechanics and material science data sets.",
                "AuthorNamesDeduped": "Shih-Hsuan Hung;Yue Zhang 0009;Eugene Zhang",
                "AuthorNames": "Shih-Hsuan Hung;Yue Zhang;Eugene Zhang",
                "AuthorAffiliation": "School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA",
                "InternalReferences": "10.1109/visual.1999.809907;10.1109/tvcg.2021.3114808;10.1109/tvcg.2019.2934314;10.1109/tvcg.2020.3030431;10.1109/tvcg.2018.2864768;10.1109/tvcg.2008.148;10.1109/visual.2004.105;10.1109/visual.2005.1532841",
                "AuthorKeywords": "Tensor field visualization,3D symmetric tensor fields,global tensor field topology,topological graphs,degenerate curves,neutral surfaces,wedges and trisectors",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 135,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 127,
                "i": [
                    127
                ]
            }
        },
        {
            "name": "Harry Yeh",
            "value": 36,
            "numPapers": 39,
            "cluster": "11",
            "visible": 1,
            "index": 630,
            "x": -161.7668815083156,
            "y": -192.04550514676092,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Feature Curves and Surfaces of 3D Asymmetric Tensor Fields",
                "DOI": "10.1109/tvcg.2021.3114808",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114808",
                "FirstPage": 33,
                "LastPage": 42,
                "PaperType": "J",
                "Abstract": "3D asymmetric tensor fields have found many applications in science and engineering domains, such as fluid dynamics and solid mechanics. 3D asymmetric tensors can have complex eigenvalues, which makes their analysis and visualization more challenging than 3D symmetric tensors. Existing research in tensor field visualization focuses on 2D asymmetric tensor fields and 3D symmetric tensor fields. In this paper, we address the analysis and visualization of 3D asymmetric tensor fields. We introduce six topological surfaces and one topological curve, which lead to an eigenvalue space based on the tensor mode that we define. In addition, we identify several non-topological feature surfaces that are nonetheless physically important. Included in our analysis are the realizations that triple degenerate tensors are structurally stable and form curves, unlike the case for 3D symmetric tensors fields. Furthermore, there are two different ways of measuring the relative strengths of rotation and angular deformation in the tensor fields, unlike the case for 2D asymmetric tensor fields. We extract these feature surfaces using the A-patches algorithm. However, since three of our feature surfaces are quadratic, we develop a method to extract quadratic surfaces at any given accuracy. To facilitate the analysis of eigenvector fields, we visualize a hyperstreamline as a tree stem with the other two eigenvectors represented as thorns in the real domain or the dual-eigenvectors as leaves in the complex domain. To demonstrate the effectiveness of our analysis and visualization, we apply our approach to datasets from solid mechanics and fluid dynamics.",
                "AuthorNamesDeduped": "Shih-Hsuan Hung;Yue Zhang 0009;Harry Yeh;Eugene Zhang",
                "AuthorNames": "Shih-Hsuan Hung;Yue Zhang;Harry Yeh;Eugene Zhang",
                "AuthorAffiliation": "School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA;School of Civil and Construction Engineering, Oregon State University, USA;School of Electrical Engineering and Computer Science, Oregon State University, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598998;10.1109/visual.1994.346326;10.1109/tvcg.2019.2934314;10.1109/tvcg.2011.170;10.1109/tvcg.2020.3030431;10.1109/tvcg.2018.2864846;10.1109/visual.1998.745296;10.1109/tvcg.2018.2864768;10.1109/tvcg.2008.148;10.1109/visual.2004.105;10.1109/visual.2005.1532770",
                "AuthorKeywords": "Tensor field visualization,3D asymmetric tensor fields,tensor field topology,traceless tensors,feature surface extraction,degenerate surfaces,neutral surfaces,balanced surfaces,triple degenerate curves",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 617,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 360,
                "i": [
                    360
                ]
            }
        },
        {
            "name": "Lawrence Roy",
            "value": 41,
            "numPapers": 25,
            "cluster": "11",
            "visible": 1,
            "index": 631,
            "x": 249.20412815061204,
            "y": 32.36205359202863,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Multi-Scale Topological Analysis of Asymmetric Tensor Fields on Surfaces",
                "DOI": "10.1109/tvcg.2019.2934314",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934314",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Asymmetric tensor fields have found applications in many science and engineering domains, such as fluid dynamics. Recent advances in the visualization and analysis of 2D asymmetric tensor fields focus on pointwise analysis of the tensor field and effective visualization metaphors such as colors, glyphs, and hyperstreamlines. In this paper, we provide a novel multi-scale topological analysis framework for asymmetric tensor fields on surfaces. Our multi-scale framework is based on the notions of eigenvalue and eigenvector graphs. At the core of our framework are the identification of atomic operations that modify the graphs and the scale definition that guides the order in which the graphs are simplified to enable clarity and focus for the visualization of topological analysis on data of different sizes. We also provide efficient algorithms to realize these operations. Furthermore, we provide physical interpretation of these graphs. To demonstrate the utility of our system, we apply our multi-scale analysis to data in computational fluid dynamics.",
                "AuthorNamesDeduped": "Fariba Khan;Lawrence Roy;Eugene Zhang;Botong Qu;Shih-Hsuan Hung;Harry Yeh;Robert S. Laramee;Yue Zhang 0009",
                "AuthorNames": "Fariba Khan;Lawrence Roy;Eugene Zhang;Botong Qu;Shih-Hsuan Hung;Harry Yeh;Robert S. Laramee;Yue Zhang",
                "AuthorAffiliation": "Oregon State University;Oregon State University;Oregon State University;Oregon State University;Oregon State University;Oregon State University;Swansea University;Oregon State University",
                "InternalReferences": "0.1109/visual.1994.346326;10.1109/visual.1998.745312;10.1109/tvcg.2016.2598998;10.1109/tvcg.2009.126;10.1109/visual.2005.1532850;10.1109/visual.2004.59;10.1109/visual.2004.59;10.1109/tvcg.2011.170;10.1109/tvcg.2010.199;10.1109/visual.2001.964507;10.1109/visual.2000.885716;10.1109/visual.2002.1183784;10.1109/visual.2005.1532770",
                "AuthorKeywords": "Tensor field visualization,tensor field topology,2D asymmetric tensor fields,2D asymmetric tensor field topology,eigenvalue graphs,eigenvector graphs",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 482,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 589,
                "i": [
                    589
                ]
            }
        },
        {
            "name": "Robert S. Laramee",
            "value": 185,
            "numPapers": 97,
            "cluster": "11",
            "visible": 1,
            "index": 632,
            "x": -205.7783564757922,
            "y": 144.58654158019613,
            "vy": 0,
            "vx": 0,
            "r": 1.2130109383995396,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Challenges and Opportunities in Data Visualization Education: A Call to Action",
                "DOI": "10.1109/tvcg.2023.3327378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327378",
                "FirstPage": 649,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.",
                "AuthorNamesDeduped": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorNames": "Benjamin Bach;Mandy Keck;Fateme Rajabiyazdi;Tatiana Losev;Isabel Meirelles;Jason Dykes;Robert S. Laramee;Mashael AlKadi;Christina Stoiber;Samuel Huron;Charles Perin;Luiz Morais;Wolfgang Aigner;Doris Kosminsky;Magdalena Boucher;Søren Knudsen;Areti Manataki;Jan Aerts;Uta Hinrichs;Jonathan C. Roberts;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Edinburgh, United Kingdom;University of Applied Sciences Upper Austria, Austria;Carleton University, Canada;Simon Fraser University, Canada;OCAD University, Canada;City University London, United Kingdom;University of Nottingham, United Kingdom;University of Edinburgh, United Kingdom;University of Applied Sciences St. Pölten, Austria;Télécom Paris, France;University of Victoria, Canada;Universidade Federal de Pernambuco, Brazil;University of Applied Sciences St. Pölten, Austria;Universidade Federal de Rio de Janeiro, Brazil;University of Applied Sciences St. Pölten, Austria;University of Copenhagen, Denmark;University of Edinburgh, United Kingdom;Hasselt University, Belgium;University of Edinburgh, United Kingdom;Bangor University, United Kingdom;Simon Fraser University, Canada",
                "InternalReferences": "10.1109/tvcg.2022.3209402;10.1109/tvcg.2022.3209487;10.1109/tvcg.2022.3209448;10.1109/tvcg.2019.2934804;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346984;10.1109/tvcg.2022.3209365;10.1109/tvcg.2019.2934790;10.1109/tvcg.2016.2599338;10.1109/visual.2004.78;10.1109/tvcg.2018.2865241;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209500;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114959;10.1109/tvcg.2015.2467271;10.1109/tvcg.2019.2934534;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.213;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367",
                "AuthorKeywords": "Data Visualization,Education,Challenges",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 138,
                "DownloadsXplore": 563,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3,
                "i": [
                    3
                ]
            }
        },
        {
            "name": "Prashant Kumar",
            "value": 20,
            "numPapers": 5,
            "cluster": "11",
            "visible": 1,
            "index": 633,
            "x": 54.110474753246535,
            "y": -245.80898381055619,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Robust and Fast Extraction of 3D Symmetric Tensor Field Topology",
                "DOI": "10.1109/tvcg.2018.2864768",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864768",
                "FirstPage": 1102,
                "LastPage": 1111,
                "PaperType": "J",
                "Abstract": "3D symmetric tensor fields appear in many science and engineering fields, and topology-driven analysis is important in many of these application domains, such as solid mechanics and fluid dynamics. Degenerate curves and neutral surfaces are important topological features in 3D symmetric tensor fields. Existing methods to extract degenerate curves and neutral surfaces often miss parts of the curves and surfaces, respectively. Moreover, these methods are computationally expensive due to the lack of knowledge of structures of degenerate curves and neutral surfaces.&lt;;/p&gt; &lt;;p&gt;In this paper, we provide theoretical analysis on the geometric and topological structures of degenerate curves and neutral surfaces of 3D linear tensor fields. These structures lead to parameterizations for degenerate curves and neutral surfaces that can not only provide more robust extraction of these features but also incur less computational cost.&lt;;/p&gt; &lt;;p&gt;We demonstrate the benefits of our approach by applying our degenerate curve and neutral surface detection techniques to solid mechanics simulation data sets.",
                "AuthorNamesDeduped": "Lawrence Roy;Prashant Kumar;Yue Zhang 0009;Eugene Zhang",
                "AuthorNames": "Lawrence Roy;Prashant Kumar;Yue Zhang;Eugene Zhang",
                "AuthorAffiliation": "Oregon State University, Corvallis, OR, US;Oregon State University, Corvallis, OR, US;Oregon State University, Corvallis, OR, US;Oregon State University, Corvallis, OR, US",
                "InternalReferences": "0.1109/tvcg.2009.184;10.1109/visual.1994.346326;10.1109/tvcg.2008.148;10.1109/visual.2000.885716;10.1109/visual.2004.105;10.1109/visual.2005.1532841",
                "AuthorKeywords": "Tensor field visualization,3D symmetric tensor fields,tensor field topology,traceless tensors,degenerate curve extraction,neutral surface extraction",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 380,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 719,
                "i": [
                    719
                ]
            }
        },
        {
            "name": "Carl-Fredrik Westin",
            "value": 113,
            "numPapers": 21,
            "cluster": "11",
            "visible": 1,
            "index": 634,
            "x": 126.24172323625076,
            "y": 217.97483183693905,
            "vy": 0,
            "vx": 0,
            "r": 1.1301093839953944,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Invariant Crease Lines for Topological and Structural Analysis of Tensor fields",
                "DOI": "10.1109/tvcg.2008.148",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.148",
                "FirstPage": 1627,
                "LastPage": 1634,
                "PaperType": "J",
                "Abstract": "We introduce a versatile framework for characterizing and extracting salient structures in three-dimensional symmetric second-order tensor fields. The key insight is that degenerate lines in tensor fields, as defined by the standard topological approach, are exactly crease (ridge and valley) lines of a particular tensor invariant called mode. This reformulation allows us to apply well-studied approaches from scientific visualization or computer vision to the extraction of topological lines in tensor fields. More generally, this main result suggests that other tensor invariants, such as anisotropy measures like fractional anisotropy (FA), can be used in the same framework in lieu of mode to identify important structural properties in tensor fields. Our implementation addresses the specific challenge posed by the non-linearity of the considered scalar measures and by the smoothness requirement of the crease manifold computation. We use a combination of smooth reconstruction kernels and adaptive refinement strategy that automatically adjust the resolution of the analysis to the spatial variation of the considered quantities. Together, these improvements allow for the robust application of existing ridge line extraction algorithms in the tensor context of our problem. Results are proposed for a diffusion tensor MRI dataset, and for a benchmark stress tensor field used in engineering research.",
                "AuthorNamesDeduped": "Xavier Tricoche;Gordon L. Kindlmann;Carl-Fredrik Westin",
                "AuthorNames": "Xavier Tricoche;Gordon Kindlmann;Carl-Fredrik Westin",
                "AuthorAffiliation": "Computer Science Department, Purdue University, USA;Brigham and Women's Hospital, Harvard Medical School;Brigham and Women's Hospital, Harvard Medical School",
                "InternalReferences": "0.1109/visual.2004.105;10.1109/tvcg.2007.70602;10.1109/visual.1999.809896;10.1109/visual.1991.175773;10.1109/visual.1994.346326;10.1109/visual.1994.346326;10.1109/visual.1990.146359;10.1109/tvcg.2007.70554",
                "AuthorKeywords": "Tensor fields, tensor invariants, ridge lines, crease extraction, structural analysis, topology",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 386,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2053,
                "i": [
                    2053
                ]
            }
        },
        {
            "name": "Xiaoqiang Zheng",
            "value": 132,
            "numPapers": 27,
            "cluster": "11",
            "visible": 1,
            "index": 635,
            "x": -240.51590923281617,
            "y": -75.51223348512298,
            "vy": 0,
            "vx": 0,
            "r": 1.151986183074266,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Topological lines in 3D tensor fields",
                "DOI": "10.1109/visual.2004.105",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.105",
                "FirstPage": 313,
                "LastPage": 320,
                "PaperType": "C",
                "Abstract": "Visualization of 3D tensor fields continues to be a major challenge in terms of providing intuitive and uncluttered images that allow the users to better understand their data. The primary focus of this paper is on finding a formulation that lends itself to a stable numerical algorithm for extracting stable and persistent topological features from 2nd order real symmetric 3D tensors. While features in 2D tensors can be identified as either wedge or trisector points, in 3D, the corresponding stable features are lines, not just points. These topological feature lines provide a compact representation of the 3D tensor field and are essential in helping scientists and engineers understand their complex nature. Existing techniques work by finding degenerate points and are not numerically stable, and worse, produce both false positive and false negative feature points. This work seeks to address this problem with a robust algorithm that can extract these features in a numerically stable, accurate, and complete manner.",
                "AuthorNamesDeduped": "Xiaoqiang Zheng;Alex Pang",
                "AuthorNames": "X. Zheng;A. Pang",
                "AuthorAffiliation": "Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA",
                "InternalReferences": "0.1109/visual.1998.745316;10.1109/visual.1999.809894;10.1109/visual.1993.398849;10.1109/visual.2003.1250379;10.1109/visual.2002.1183798;10.1109/visual.1994.346326;10.1109/visual.1999.809905;10.1109/visual.1998.745294;10.1109/visual.1999.809886",
                "AuthorKeywords": "hyperstreamlines, real symmetric tensors, degenerate tensors, tensor topology, topological lines",
                "AminerCitationCount": 92,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 273,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2525,
                "i": [
                    2525
                ]
            }
        },
        {
            "name": "Alex Pang",
            "value": 323,
            "numPapers": 80,
            "cluster": "11",
            "visible": 1,
            "index": 636,
            "x": 228.5363233207486,
            "y": -106.86977553562204,
            "vy": 0,
            "vx": 0,
            "r": 1.3719055843408174,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Topological lines in 3D tensor fields",
                "DOI": "10.1109/visual.2004.105",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.105",
                "FirstPage": 313,
                "LastPage": 320,
                "PaperType": "C",
                "Abstract": "Visualization of 3D tensor fields continues to be a major challenge in terms of providing intuitive and uncluttered images that allow the users to better understand their data. The primary focus of this paper is on finding a formulation that lends itself to a stable numerical algorithm for extracting stable and persistent topological features from 2nd order real symmetric 3D tensors. While features in 2D tensors can be identified as either wedge or trisector points, in 3D, the corresponding stable features are lines, not just points. These topological feature lines provide a compact representation of the 3D tensor field and are essential in helping scientists and engineers understand their complex nature. Existing techniques work by finding degenerate points and are not numerically stable, and worse, produce both false positive and false negative feature points. This work seeks to address this problem with a robust algorithm that can extract these features in a numerically stable, accurate, and complete manner.",
                "AuthorNamesDeduped": "Xiaoqiang Zheng;Alex Pang",
                "AuthorNames": "X. Zheng;A. Pang",
                "AuthorAffiliation": "Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA",
                "InternalReferences": "0.1109/visual.1998.745316;10.1109/visual.1999.809894;10.1109/visual.1993.398849;10.1109/visual.2003.1250379;10.1109/visual.2002.1183798;10.1109/visual.1994.346326;10.1109/visual.1999.809905;10.1109/visual.1998.745294;10.1109/visual.1999.809886",
                "AuthorKeywords": "hyperstreamlines, real symmetric tensors, degenerate tensors, tensor topology, topological lines",
                "AminerCitationCount": 92,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 273,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2525,
                "i": [
                    2525
                ]
            }
        },
        {
            "name": "Bryan Triana",
            "value": 0,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 637,
            "x": -96.40171506517146,
            "y": 233.3596137563085,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Study of the Perceptual Sensitivity of Topological Visualizations to Feature Variations",
                "DOI": "10.1109/tvcg.2023.3326592",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326592",
                "FirstPage": 1074,
                "LastPage": 1084,
                "PaperType": "J",
                "Abstract": "Color maps are a commonly used visualization technique in which data are mapped to optical properties, e.g., color or opacity. Color maps, however, do not explicitly convey structures (e.g., positions and scale of features) within data. Topology-based visualizations reveal and explicitly communicate structures underlying data. Although our understanding of what types of features are captured by topological visualizations is good, our understanding of people's perception of those features is not. This paper evaluates the sensitivity of topology-based isocontour, Reeb graph, and persistence diagram visualizations compared to a reference color map visualization for synthetically generated scalar fields on 2-manifold triangular meshes embedded in 3D. In particular, we built and ran a human-subject study that evaluated the perception of data features characterized by Gaussian signals and measured how effectively each visualization technique portrays variations of data features arising from the position and amplitude variation of a mixture of Gaussians. For positional feature variations, the results showed that only the Reeb graph visualization had high sensitivity. For amplitude feature variations, persistence diagrams and color maps demonstrated the highest sensitivity, whereas isocontours showed only weak sensitivity. These results take an important step toward understanding which topology-based tools are best for various data and task scenarios and their effectiveness in conveying topological variations as compared to conventional color mapping.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Bryan Triana;Tanmay Kotha;Dave Pugmire;Paul Rosen 0001",
                "AuthorNames": "Tushar M. Athawale;Bryan Triana;Tanmay Kotha;Dave Pugmire;Paul Rosen",
                "AuthorAffiliation": "Oak Ridge National Laboratory, USA;University of South Florida, USA;University of South Florida, USA;Oak Ridge National Laboratory, USA;University of Utah, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209395;10.1109/tvcg.2011.185;10.1109/tvcg.2009.170;10.1109/vast.2010.5652460;10.1109/tvcg.2018.2864432;10.1109/tvcg.2009.126;10.1109/tvcg.2006.186;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114839;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744359;10.1109/tvcg.2016.2599017;10.1109/tvcg.2017.2743938;10.1109/visual.2005.1532781;10.1109/tvcg.2019.2934256;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Perception & cognition,computational topology-based techniques,comparison and similarity",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 127,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 128,
                "i": [
                    128
                ]
            }
        },
        {
            "name": "Tanmay Kotha",
            "value": 0,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 638,
            "x": -86.61638428310327,
            "y": -237.3764983601405,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Study of the Perceptual Sensitivity of Topological Visualizations to Feature Variations",
                "DOI": "10.1109/tvcg.2023.3326592",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326592",
                "FirstPage": 1074,
                "LastPage": 1084,
                "PaperType": "J",
                "Abstract": "Color maps are a commonly used visualization technique in which data are mapped to optical properties, e.g., color or opacity. Color maps, however, do not explicitly convey structures (e.g., positions and scale of features) within data. Topology-based visualizations reveal and explicitly communicate structures underlying data. Although our understanding of what types of features are captured by topological visualizations is good, our understanding of people's perception of those features is not. This paper evaluates the sensitivity of topology-based isocontour, Reeb graph, and persistence diagram visualizations compared to a reference color map visualization for synthetically generated scalar fields on 2-manifold triangular meshes embedded in 3D. In particular, we built and ran a human-subject study that evaluated the perception of data features characterized by Gaussian signals and measured how effectively each visualization technique portrays variations of data features arising from the position and amplitude variation of a mixture of Gaussians. For positional feature variations, the results showed that only the Reeb graph visualization had high sensitivity. For amplitude feature variations, persistence diagrams and color maps demonstrated the highest sensitivity, whereas isocontours showed only weak sensitivity. These results take an important step toward understanding which topology-based tools are best for various data and task scenarios and their effectiveness in conveying topological variations as compared to conventional color mapping.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Bryan Triana;Tanmay Kotha;Dave Pugmire;Paul Rosen 0001",
                "AuthorNames": "Tushar M. Athawale;Bryan Triana;Tanmay Kotha;Dave Pugmire;Paul Rosen",
                "AuthorAffiliation": "Oak Ridge National Laboratory, USA;University of South Florida, USA;University of South Florida, USA;Oak Ridge National Laboratory, USA;University of Utah, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209395;10.1109/tvcg.2011.185;10.1109/tvcg.2009.170;10.1109/vast.2010.5652460;10.1109/tvcg.2018.2864432;10.1109/tvcg.2009.126;10.1109/tvcg.2006.186;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114839;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744359;10.1109/tvcg.2016.2599017;10.1109/tvcg.2017.2743938;10.1109/visual.2005.1532781;10.1109/tvcg.2019.2934256;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Perception & cognition,computational topology-based techniques,comparison and similarity",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 127,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 128,
                "i": [
                    128
                ]
            }
        },
        {
            "name": "Dave Pugmire",
            "value": 0,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 639,
            "x": 224.3892566392232,
            "y": 116.61672909534391,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "A Comparative Study of the Perceptual Sensitivity of Topological Visualizations to Feature Variations",
                "DOI": "10.1109/tvcg.2023.3326592",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326592",
                "FirstPage": 1074,
                "LastPage": 1084,
                "PaperType": "J",
                "Abstract": "Color maps are a commonly used visualization technique in which data are mapped to optical properties, e.g., color or opacity. Color maps, however, do not explicitly convey structures (e.g., positions and scale of features) within data. Topology-based visualizations reveal and explicitly communicate structures underlying data. Although our understanding of what types of features are captured by topological visualizations is good, our understanding of people's perception of those features is not. This paper evaluates the sensitivity of topology-based isocontour, Reeb graph, and persistence diagram visualizations compared to a reference color map visualization for synthetically generated scalar fields on 2-manifold triangular meshes embedded in 3D. In particular, we built and ran a human-subject study that evaluated the perception of data features characterized by Gaussian signals and measured how effectively each visualization technique portrays variations of data features arising from the position and amplitude variation of a mixture of Gaussians. For positional feature variations, the results showed that only the Reeb graph visualization had high sensitivity. For amplitude feature variations, persistence diagrams and color maps demonstrated the highest sensitivity, whereas isocontours showed only weak sensitivity. These results take an important step toward understanding which topology-based tools are best for various data and task scenarios and their effectiveness in conveying topological variations as compared to conventional color mapping.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Bryan Triana;Tanmay Kotha;Dave Pugmire;Paul Rosen 0001",
                "AuthorNames": "Tushar M. Athawale;Bryan Triana;Tanmay Kotha;Dave Pugmire;Paul Rosen",
                "AuthorAffiliation": "Oak Ridge National Laboratory, USA;University of South Florida, USA;University of South Florida, USA;Oak Ridge National Laboratory, USA;University of Utah, USA",
                "InternalReferences": "10.1109/tvcg.2022.3209395;10.1109/tvcg.2011.185;10.1109/tvcg.2009.170;10.1109/vast.2010.5652460;10.1109/tvcg.2018.2864432;10.1109/tvcg.2009.126;10.1109/tvcg.2006.186;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114839;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744359;10.1109/tvcg.2016.2599017;10.1109/tvcg.2017.2743938;10.1109/visual.2005.1532781;10.1109/tvcg.2019.2934256;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Perception & cognition,computational topology-based techniques,comparison and similarity",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 127,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 128,
                "i": [
                    128
                ]
            }
        },
        {
            "name": "Oliver Beuing",
            "value": 69,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 640,
            "x": -244.42200322991079,
            "y": 65.63447521750658,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Combined Visualization of Vessel Deformation and Hemodynamics in Cerebral Aneurysms",
                "DOI": "10.1109/tvcg.2016.2598795",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598795",
                "FirstPage": 761,
                "LastPage": 770,
                "PaperType": "J",
                "Abstract": "We present the first visualization tool that combines patient-specific hemodynamics with information about the vessel wall deformation and wall thickness in cerebral aneurysms. Such aneurysms bear the risk of rupture, whereas their treatment also carries considerable risks for the patient. For the patient-specific rupture risk evaluation and treatment analysis, both morphological and hemodynamic data have to be investigated. Medical researchers emphasize the importance of analyzing correlations between wall properties such as the wall deformation and thickness, and hemodynamic attributes like the Wall Shear Stress and near-wall flow. Our method uses a linked 2.5D and 3D depiction of the aneurysm together with blood flow information that enables the simultaneous exploration of wall characteristics and hemodynamic attributes during the cardiac cycle. We thus offer medical researchers an effective visual exploration tool for aneurysm treatment risk assessment. The 2.5D view serves as an overview that comprises a projection of the vessel surface to a 2D map, providing an occlusion-free surface visualization combined with a glyph-based depiction of the local wall thickness. The 3D view represents the focus upon which the data exploration takes place. To support the time-dependent parameter exploration and expert collaboration, a camera path is calculated automatically, where the user can place landmarks for further exploration of the properties. We developed a GPU-based implementation of our visualizations with a flexible interactive data exploration mechanism. We designed our techniques in collaboration with domain experts, and provide details about the evaluation.",
                "AuthorNamesDeduped": "Monique Meuschke;Samuel Voß;Oliver Beuing;Bernhard Preim;Kai Lawonn",
                "AuthorNames": "Monique Meuschke;Samuel Voss;Oliver Beuing;Bernhard Preim;Kai Lawonn",
                "AuthorAffiliation": "University of Magdeburg, Germany and Research Campus STIMULATE;Research Campus STIMULATE and University of Magdeburg, Germany;Research Campus STIMULATE and University of Magdeburg, Germany;Research Campus STIMULATE and University of Magdeburg, Germany;University of Koblenz-Landau, Germany",
                "InternalReferences": "0.1109/tvcg.2011.215;10.1109/tvcg.2011.243;10.1109/tvcg.2014.2346406;10.1109/tvcg.2010.153;10.1109/tvcg.2015.2467961;10.1109/tvcg.2013.189;10.1109/tvcg.2012.202",
                "AuthorKeywords": "Medical visualizations;aneurysms;blood flow;wall thickness;wall deformation;projections",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 633,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 934,
                "i": [
                    934
                ]
            }
        },
        {
            "name": "Bernhard Preim",
            "value": 283,
            "numPapers": 77,
            "cluster": "6",
            "visible": 1,
            "index": 641,
            "x": 135.99977004273143,
            "y": -213.66811308270627,
            "vy": 0,
            "vx": 0,
            "r": 1.3258491652274036,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Interactive Visual Analysis of Image-Centric Cohort Study Data",
                "DOI": "10.1109/tvcg.2014.2346591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346591",
                "FirstPage": 1673,
                "LastPage": 1682,
                "PaperType": "J",
                "Abstract": "Epidemiological population studies impose information about a set of subjects (a cohort) to characterize disease-specific risk factors. Cohort studies comprise heterogenous variables describing the medical condition as well as demographic and lifestyle factors and, more recently, medical image data. We propose an Interactive Visual Analysis (IVA) approach that enables epidemiologists to rapidly investigate the entire data pool for hypothesis validation and generation. We incorporate image data, which involves shape-based object detection and the derivation of attributes describing the object shape. The concurrent investigation of image-based and non-image data is realized in a web-based multiple coordinated view system, comprising standard views from information visualization and epidemiological data representations such as pivot tables. The views are equipped with brushing facilities and augmented by 3D shape renderings of the segmented objects, e.g., each bar in a histogram is overlaid with a mean shape of the associated subgroup of the cohort. We integrate an overview visualization, clustering of variables and object shape for data-driven subgroup definition and statistical key figures for measuring the association between variables. We demonstrate the IVA approach by validating and generating hypotheses related to lower back pain as part of a qualitative evaluation.",
                "AuthorNamesDeduped": "Paul Klemm;Steffen Oeltze-Jafra;Kai Lawonn;Katrin Hegenscheid;Henry Völzke;Bernhard Preim",
                "AuthorNames": "Paul Klemm;Steffen Oeltze-Jafra;Kai Lawonn;Katrin Hegenscheid;Henry Völzke;Bernhard Preim",
                "AuthorAffiliation": "Otto-von-Guericke University Magdeburg, Germany;Otto-von-Guericke University Magdeburg, Germany;Otto-von-Guericke University Magdeburg, Germany;Ernst-Moritz-Arndt University Greifswald, Germany;Ernst-Moritz-Arndt University Greifswald, Germany;Otto-von-Guericke University Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2013.160;10.1109/tvcg.2011.185;10.1109/visual.2000.885739;10.1109/tvcg.2011.217;10.1109/tvcg.2007.70569",
                "AuthorKeywords": "Interactive Visual Analysis, Epidemiology, Spine",
                "AminerCitationCount": 58,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 947,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1266,
                "i": [
                    1266
                ]
            }
        },
        {
            "name": "Samuel Voß",
            "value": 19,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 642,
            "x": 44.083057965898654,
            "y": 249.6130685688857,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Combined Visualization of Vessel Deformation and Hemodynamics in Cerebral Aneurysms",
                "DOI": "10.1109/tvcg.2016.2598795",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598795",
                "FirstPage": 761,
                "LastPage": 770,
                "PaperType": "J",
                "Abstract": "We present the first visualization tool that combines patient-specific hemodynamics with information about the vessel wall deformation and wall thickness in cerebral aneurysms. Such aneurysms bear the risk of rupture, whereas their treatment also carries considerable risks for the patient. For the patient-specific rupture risk evaluation and treatment analysis, both morphological and hemodynamic data have to be investigated. Medical researchers emphasize the importance of analyzing correlations between wall properties such as the wall deformation and thickness, and hemodynamic attributes like the Wall Shear Stress and near-wall flow. Our method uses a linked 2.5D and 3D depiction of the aneurysm together with blood flow information that enables the simultaneous exploration of wall characteristics and hemodynamic attributes during the cardiac cycle. We thus offer medical researchers an effective visual exploration tool for aneurysm treatment risk assessment. The 2.5D view serves as an overview that comprises a projection of the vessel surface to a 2D map, providing an occlusion-free surface visualization combined with a glyph-based depiction of the local wall thickness. The 3D view represents the focus upon which the data exploration takes place. To support the time-dependent parameter exploration and expert collaboration, a camera path is calculated automatically, where the user can place landmarks for further exploration of the properties. We developed a GPU-based implementation of our visualizations with a flexible interactive data exploration mechanism. We designed our techniques in collaboration with domain experts, and provide details about the evaluation.",
                "AuthorNamesDeduped": "Monique Meuschke;Samuel Voß;Oliver Beuing;Bernhard Preim;Kai Lawonn",
                "AuthorNames": "Monique Meuschke;Samuel Voss;Oliver Beuing;Bernhard Preim;Kai Lawonn",
                "AuthorAffiliation": "University of Magdeburg, Germany and Research Campus STIMULATE;Research Campus STIMULATE and University of Magdeburg, Germany;Research Campus STIMULATE and University of Magdeburg, Germany;Research Campus STIMULATE and University of Magdeburg, Germany;University of Koblenz-Landau, Germany",
                "InternalReferences": "0.1109/tvcg.2011.215;10.1109/tvcg.2011.243;10.1109/tvcg.2014.2346406;10.1109/tvcg.2010.153;10.1109/tvcg.2015.2467961;10.1109/tvcg.2013.189;10.1109/tvcg.2012.202",
                "AuthorKeywords": "Medical visualizations;aneurysms;blood flow;wall thickness;wall deformation;projections",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 633,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 934,
                "i": [
                    934
                ]
            }
        },
        {
            "name": "Nicholas Diakopoulos",
            "value": 201,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 643,
            "x": -201.27313031692876,
            "y": -154.3992455047129,
            "vy": 0,
            "vx": 0,
            "r": 1.231433506044905,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms",
                "DOI": "10.1109/tvcg.2023.3327356",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327356",
                "FirstPage": 23,
                "LastPage": 33,
                "PaperType": "J",
                "Abstract": "We conducted a longitudinal study during the 2022 U.S. midterm elections, investigating the real-world impacts of uncertainty visualizations. Using our forecast model of the governor elections in 33 states, we created a website and deployed four uncertainty visualizations for the election forecasts: single quantile dotplot (1-Dotplot), dual quantile dotplots (2-Dotplot), dual histogram intervals (2-Interval), and Plinko quantile dotplot (Plinko), an animated design with a physical and probabilistic analogy. Our online experiment ran from Oct. 18, 2022, to Nov. 23, 2022, involving 1,327 participants from 15 states. We use Bayesian multilevel modeling and post-stratification to produce demographically-representative estimates of people's emotions, trust in forecasts, and political participation intention. We find that election forecast visualizations can heighten emotions, increase trust, and slightly affect people's intentions to participate in elections. 2-Interval shows the strongest effects across all measures; 1-Dotplot increases trust the most after elections. Both visualizations create emotional and trust gaps between different partisan identities, especially when a Republican candidate is predicted to win. Our qualitative analysis uncovers the complex political and social contexts of election forecast visualizations, showcasing that visualizations may provoke polarization. This intriguing interplay between visualization types, partisanship, and trust exemplifies the fundamental challenge of disentangling visualization from its context, underscoring a need for deeper investigation into the real-world impacts of visualizations. Our preprint and supplements are available at https://doi.org/osf.io/ajq8f.",
                "AuthorNamesDeduped": "Fumeng Yang;Mandi Cai;Chloe Mortenson;Hoda Fakhari;Ayse D. Lokmanoglu;Jessica Hullman;Steven Franconeri;Nicholas Diakopoulos;Erik C. Nisbet;Matthew Kay 0001",
                "AuthorNames": "Fumeng Yang;Mandi Cai;Chloe Mortenson;Hoda Fakhari;Ayse D. Lokmanoglu;Jessica Hullman;Steven Franconeri;Nicholas Diakopoulos;Erik C. Nisbet;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA;Northwestern University, USA",
                "InternalReferences": "10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2020.3030335;10.1109/tvcg.2022.3209500;10.1109/tvcg.2022.3209457;10.1109/tvcg.2022.3209348;10.1109/tvcg.2022.3209383;10.1109/tvcg.2021.3114679",
                "AuthorKeywords": "Uncertainty visualization,Probabilistic forecasts,Elections,Emotions,Trust,Political participation,Longitudinal study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 481,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 14,
                "i": [
                    14
                ]
            }
        },
        {
            "name": "Huixuan Xie",
            "value": 52,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 644,
            "x": 252.9040119501352,
            "y": -22.126019513818125,
            "vy": 0,
            "vx": 0,
            "r": 1.059873344847438,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ASTF: Visual Abstractions of Time-Varying Patterns in Radio Signals",
                "DOI": "10.1109/tvcg.2022.3209469",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209469",
                "FirstPage": 214,
                "LastPage": 224,
                "PaperType": "J",
                "Abstract": "A time-frequency diagram is a commonly used visualization for observing the time-frequency distribution of radio signals and analyzing their time-varying patterns of communication states in radio monitoring and management. While it excels when performing short-term signal analyses, it becomes inadaptable for long-term signal analyses because it cannot adequately depict signal time-varying patterns in a large time span on a space-limited screen. This research thus presents an abstract signal time-frequency (ASTF) diagram to address this problem. In the diagram design, a visual abstraction method is proposed to visually encode signal communication state changes in time slices. A time segmentation algorithm is proposed to divide a large time span into time slices. Three new quantified metrics and a loss function are defined to ensure the preservation of important time-varying information in the time segmentation. An algorithm performance experiment and a user study are conducted to evaluate the effectiveness of the diagram for long-term signal analyses.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Luhao Ge;Huixuan Xie;Genghuai Bai;Zhao Zhang;Qiang Wei;Yun Lin 0005;Yuchao Liu;Fangfang Zhou",
                "AuthorNames": "Ying Zhao;Luhao Ge;Huixuan Xie;Genghuai Bai;Zhao Zhang;Qiang Wei;Yun Lin;Yuchao Liu;Fangfang Zhou",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;National Key Laboratory of Science and Technology on Blind Signal Processing, Chengdu, China;College of Information and Communication Engineering, Harbin Engineering University, Harbin, China;China Research Institute of Radiowave Propagation, Qingdao, China;School of Computer Science and Engineering, Central South University, Changsha, China",
                "InternalReferences": "0.1109/vast.2014.7042479;10.1109/tvcg.2019.2934433;10.1109/tvcg.2010.193;10.1109/vast.2014.7042484;10.1109/tvcg.2008.109;10.1109/infvis.2005.1532144;10.1109/tvcg.2015.2467751;10.1109/tvcg.2011.195;10.1109/tvcg.2020.3030428;10.1109/tvcg.2019.2934655",
                "AuthorKeywords": "Radio signal,visual abstraction,time-oriented data,binary sequence",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 759,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 136,
                "i": [
                    136
                ]
            }
        },
        {
            "name": "Yi Chen 0007",
            "value": 106,
            "numPapers": 38,
            "cluster": "1",
            "visible": 1,
            "index": 645,
            "x": -171.67066261331476,
            "y": 187.2943768453967,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Evaluating Multi-Dimensional Visualizations for Understanding Fuzzy Clusters",
                "DOI": "10.1109/tvcg.2018.2865020",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865020",
                "FirstPage": 12,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "Fuzzy clustering assigns a probability of membership for a datum to a cluster, which veritably reflects real-world clustering scenarios but significantly increases the complexity of understanding fuzzy clusters. Many studies have demonstrated that visualization techniques for multi-dimensional data are beneficial to understand fuzzy clusters. However, no empirical evidence exists on the effectiveness and efficiency of these visualization techniques in solving analytical tasks featured by fuzzy clusters. In this paper, we conduct a controlled experiment to evaluate the ability of fuzzy clusters analysis to use four multi-dimensional visualization techniques, namely, parallel coordinate plot, scatterplot matrix, principal component analysis, and Radviz. First, we define the analytical tasks and their representative questions specific to fuzzy clusters analysis. Then, we design objective questionnaires to compare the accuracy, time, and satisfaction in using the four techniques to solve the questions. We also design subjective questionnaires to collect the experience of the volunteers with the four techniques in terms of ease of use, informativeness, and helpfulness. With a complete experiment process and a detailed result analysis, we test against four hypotheses that are formulated on the basis of our experience, and provide instructive guidance for analysts in selecting appropriate and efficient visualization techniques to analyze fuzzy clusters.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "Central South University;Central South University;Central South University;Central South University;Central South University;Central South University;Shandong University;Beijing Technology, Business University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2745138;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2009.153;10.1109/tvcg.2016.2598831;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744198;10.1109/tvcg.2015.2467324;10.1109/tvcg.2013.153;10.1109/tvcg.2008.173;10.1109/visual.1990.146375;10.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598479;10.1109/infvis.2003.1249015",
                "AuthorKeywords": "Evaluation,multi-dimensional visualization,fuzzy clustering,parallel coordinate plot,scatterplot matrix,principal component analysis,radviz",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1464,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 739,
                "i": [
                    739
                ]
            }
        },
        {
            "name": "Paulo E. Rauber",
            "value": 160,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 646,
            "x": 0.06912364189847471,
            "y": -254.26363330590974,
            "vy": 0,
            "vx": 0,
            "r": 1.1842256764536556,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visualizing the Hidden Activity of Artificial Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598838",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598838",
                "FirstPage": 101,
                "LastPage": 110,
                "PaperType": "J",
                "Abstract": "In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.",
                "AuthorNamesDeduped": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorNames": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorAffiliation": "University of Groningen, University of Campinas;University of São Paulo;University of Campinas;University of Groningen",
                "InternalReferences": "0.1109/tvcg.2011.178;10.1109/tvcg.2011.220;10.1109/tvcg.2013.150;10.1109/tvcg.2014.2346578;10.1109/tvcg.2008.125;10.1109/tvcg.2015.2467553",
                "AuthorKeywords": "Artificial neural networks;dimensionality reduction;algorithm understanding",
                "AminerCitationCount": 303,
                "CitationCountCrossRef": 197,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 5987,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 956,
                "i": [
                    956
                ]
            }
        },
        {
            "name": "Samuel G. Fadel",
            "value": 196,
            "numPapers": 11,
            "cluster": "1",
            "visible": 1,
            "index": 647,
            "x": 171.83438877875284,
            "y": 187.67776328865503,
            "vy": 0,
            "vx": 0,
            "r": 1.2256764536557283,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visualizing the Hidden Activity of Artificial Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598838",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598838",
                "FirstPage": 101,
                "LastPage": 110,
                "PaperType": "J",
                "Abstract": "In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.",
                "AuthorNamesDeduped": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorNames": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorAffiliation": "University of Groningen, University of Campinas;University of São Paulo;University of Campinas;University of Groningen",
                "InternalReferences": "0.1109/tvcg.2011.178;10.1109/tvcg.2011.220;10.1109/tvcg.2013.150;10.1109/tvcg.2014.2346578;10.1109/tvcg.2008.125;10.1109/tvcg.2015.2467553",
                "AuthorKeywords": "Artificial neural networks;dimensionality reduction;algorithm understanding",
                "AminerCitationCount": 303,
                "CitationCountCrossRef": 197,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 5987,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 956,
                "i": [
                    956
                ]
            }
        },
        {
            "name": "Alexandre X. Falcão",
            "value": 160,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 648,
            "x": -253.67549982996158,
            "y": -22.33250514427728,
            "vy": 0,
            "vx": 0,
            "r": 1.1842256764536556,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visualizing the Hidden Activity of Artificial Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598838",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598838",
                "FirstPage": 101,
                "LastPage": 110,
                "PaperType": "J",
                "Abstract": "In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.",
                "AuthorNamesDeduped": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorNames": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorAffiliation": "University of Groningen, University of Campinas;University of São Paulo;University of Campinas;University of Groningen",
                "InternalReferences": "0.1109/tvcg.2011.178;10.1109/tvcg.2011.220;10.1109/tvcg.2013.150;10.1109/tvcg.2014.2346578;10.1109/tvcg.2008.125;10.1109/tvcg.2015.2467553",
                "AuthorKeywords": "Artificial neural networks;dimensionality reduction;algorithm understanding",
                "AminerCitationCount": 303,
                "CitationCountCrossRef": 197,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 5987,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 956,
                "i": [
                    956
                ]
            }
        },
        {
            "name": "Alexandru C. Telea",
            "value": 425,
            "numPapers": 71,
            "cluster": "2",
            "visible": 1,
            "index": 649,
            "x": 202.29359941922579,
            "y": -155.0074179967327,
            "vy": 0,
            "vx": 0,
            "r": 1.489349453080023,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visualizing the Hidden Activity of Artificial Neural Networks",
                "DOI": "10.1109/tvcg.2016.2598838",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598838",
                "FirstPage": 101,
                "LastPage": 110,
                "PaperType": "J",
                "Abstract": "In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.",
                "AuthorNamesDeduped": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorNames": "Paulo E. Rauber;Samuel G. Fadel;Alexandre X. Falcão;Alexandru C. Telea",
                "AuthorAffiliation": "University of Groningen, University of Campinas;University of São Paulo;University of Campinas;University of Groningen",
                "InternalReferences": "0.1109/tvcg.2011.178;10.1109/tvcg.2011.220;10.1109/tvcg.2013.150;10.1109/tvcg.2014.2346578;10.1109/tvcg.2008.125;10.1109/tvcg.2015.2467553",
                "AuthorKeywords": "Artificial neural networks;dimensionality reduction;algorithm understanding",
                "AminerCitationCount": 303,
                "CitationCountCrossRef": 197,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 5987,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 956,
                "i": [
                    956
                ]
            }
        },
        {
            "name": "Junxiu Tang",
            "value": 59,
            "numPapers": 35,
            "cluster": "3",
            "visible": 1,
            "index": 650,
            "x": -44.49321044379164,
            "y": 251.13811782444435,
            "vy": 0,
            "vx": 0,
            "r": 1.0679332181922856,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "What Makes a Data-GIF Understandable?",
                "DOI": "10.1109/tvcg.2020.3030396",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030396",
                "FirstPage": 1492,
                "LastPage": 1502,
                "PaperType": "J",
                "Abstract": "GIFs are enjoying increasing popularity on social media as a format for data-driven storytelling with visualization; simple visual messages are embedded in short animations that usually last less than 15 seconds and are played in automatic repetition. In this paper, we ask the question, “What makes a data-GIF understandable?” While other storytelling formats such as data videos, infographics, or data comics are relatively well studied, we have little knowledge about the design factors and principles for “data-GIFs”. To close this gap, we provide results from semi-structured interviews and an online study with a total of 118 participants investigating the impact of design decisions on the understandability of data-GIFs. The study and our consequent analysis are informed by a systematic review and structured design space of 108 data-GIFs that we found online. Our results show the impact of design dimensions from our design space such as animation encoding, context preservation, or repetition on viewers understanding of the GIF's core message. The paper concludes with a list of suggestions for creating more effective Data-GIFs.",
                "AuthorNamesDeduped": "Xinhuan Shu;Aoyu Wu;Junxiu Tang;Benjamin Bach;Yingcai Wu;Huamin Qu",
                "AuthorNames": "Xinhuan Shu;Aoyu Wu;Junxiu Tang;Benjamin Bach;Yingcai Wu;Huamin Qu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University and Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;State Key Lab of CAD&CG, Zhejiang University;Edinburgh University;State Key Lab of CAD&CG, Zhejiang University;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2019.2934397;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2018.2864909;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598920;10.1109/tvcg.2019.2934401;10.1109/tvcg.2008.125;10.1109/tvcg.2018.2864903;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Data-GIFs,Data-driven Storytelling,Evaluation",
                "AminerCitationCount": 32,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 1161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 371,
                "i": [
                    371
                ]
            }
        },
        {
            "name": "Tai-Quan Peng",
            "value": 242,
            "numPapers": 62,
            "cluster": "1",
            "visible": 1,
            "index": 651,
            "x": -136.93858765003003,
            "y": -215.4015394847842,
            "vy": 0,
            "vx": 0,
            "r": 1.2786413356361543,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Seek for Success: A Visualization Approach for Understanding the Dynamics of Academic Careers",
                "DOI": "10.1109/tvcg.2021.3114790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114790",
                "FirstPage": 475,
                "LastPage": 485,
                "PaperType": "J",
                "Abstract": "How to achieve academic career success has been a long-standing research question in social science research. With the growing availability of large-scale well-documented academic profiles and career trajectories, scholarly interest in career success has been reinvigorated, which has emerged to be an active research domain called the Science of Science (i.e., SciSci). In this study, we adopt an innovative dynamic perspective to examine how individual and social factors will influence career success over time. We propose <i>ACSeeker</i>, an interactive visual analytics approach to explore the potential factors of success and how the influence of multiple factors changes at different stages of academic careers. We first applied a Multi-factor Impact Analysis framework to estimate the effect of different factors on academic career success over time. We then developed a visual analytics system to understand the dynamic effects interactively. A novel timeline is designed to reveal and compare the factor impacts based on the whole population. A customized career line showing the individual career development is provided to allow a detailed inspection. To validate the effectiveness and usability of <i>ACSeeker</i>, we report two case studies and interviews with a social scientist and general researchers.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Tai-Quan Peng;Huihua Lu;Haoren Wang;Xiao Xie;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Yifang Wang;Tai-Quan Peng;Huihua Lu;Haoren Wang;Xiao Xie;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University and the Hong Kong University of Science and Technology, China;Michigan State University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2020.3030442;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2015.2467620;10.1109/tvcg.2019.2934267;10.1109/tvcg.2009.111;10.1109/vast47406.2019.8986934;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/vast50239.2020.00009;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2015.2468151;10.1109/tvcg.2014.2346913;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934656;10.1109/tvcg.2019.2934630",
                "AuthorKeywords": "Career Analysis,Academic Profiles,Science of Science,Publication Data,Citation Data,Sequence Analysis",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 1149,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 285,
                "i": [
                    285
                ]
            }
        },
        {
            "name": "Kai Yan",
            "value": 97,
            "numPapers": 11,
            "cluster": "1",
            "visible": 1,
            "index": 652,
            "x": 246.6649898251772,
            "y": 66.3805904956052,
            "vy": 0,
            "vx": 0,
            "r": 1.1116868163500289,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "OpinionFlow: Visual Analysis of Opinion Diffusion on Social Media",
                "DOI": "10.1109/tvcg.2014.2346920",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346920",
                "FirstPage": 1763,
                "LastPage": 1772,
                "PaperType": "J",
                "Abstract": "It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.",
                "AuthorNamesDeduped": "Yingcai Wu;Shixia Liu;Kai Yan;Mengchen Liu;Fangzhao Wu",
                "AuthorNames": "Yingcai Wu;Shixia Liu;Kai Yan;Mengchen Liu;Fangzhao Wu",
                "AuthorAffiliation": "Microsoft Research;Microsoft Research;Harbin Institute of Technology;Tsinghua University;Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2011.239;10.1109/tvcg.2013.162;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346433;10.1109/infvis.2005.1532152;10.1109/tvcg.2012.291;10.1109/vast.2006.261431;10.1109/tvcg.2010.129;10.1109/tvcg.2013.196;10.1109/tvcg.2014.2346919;10.1109/tvcg.2010.183;10.1109/vast.2009.5333919",
                "AuthorKeywords": "Opinion visualization, opinion diffusion, opinion flow, influence estimation, kernel density estimation, level-of-detail",
                "AminerCitationCount": 205,
                "CitationCountCrossRef": 137,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 3643,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1245,
                "i": [
                    1245
                ]
            }
        },
        {
            "name": "Fangzhao Wu",
            "value": 97,
            "numPapers": 11,
            "cluster": "1",
            "visible": 1,
            "index": 653,
            "x": -226.89619864257688,
            "y": 117.76296124651545,
            "vy": 0,
            "vx": 0,
            "r": 1.1116868163500289,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "OpinionFlow: Visual Analysis of Opinion Diffusion on Social Media",
                "DOI": "10.1109/tvcg.2014.2346920",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346920",
                "FirstPage": 1763,
                "LastPage": 1772,
                "PaperType": "J",
                "Abstract": "It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.",
                "AuthorNamesDeduped": "Yingcai Wu;Shixia Liu;Kai Yan;Mengchen Liu;Fangzhao Wu",
                "AuthorNames": "Yingcai Wu;Shixia Liu;Kai Yan;Mengchen Liu;Fangzhao Wu",
                "AuthorAffiliation": "Microsoft Research;Microsoft Research;Harbin Institute of Technology;Tsinghua University;Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2011.239;10.1109/tvcg.2013.162;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346433;10.1109/infvis.2005.1532152;10.1109/tvcg.2012.291;10.1109/vast.2006.261431;10.1109/tvcg.2010.129;10.1109/tvcg.2013.196;10.1109/tvcg.2014.2346919;10.1109/tvcg.2010.183;10.1109/vast.2009.5333919",
                "AuthorKeywords": "Opinion visualization, opinion diffusion, opinion flow, influence estimation, kernel density estimation, level-of-detail",
                "AminerCitationCount": 205,
                "CitationCountCrossRef": 137,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 3643,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1245,
                "i": [
                    1245
                ]
            }
        },
        {
            "name": "Christian Partl",
            "value": 175,
            "numPapers": 25,
            "cluster": "4",
            "visible": 1,
            "index": 654,
            "x": 87.82557737721568,
            "y": -240.28455622107447,
            "vy": 0,
            "vx": 0,
            "r": 1.201496833621186,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Entourage: Visualizing Relationships between Biological Pathways using Contextual Subsets",
                "DOI": "10.1109/tvcg.2013.154",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.154",
                "FirstPage": 2536,
                "LastPage": 2545,
                "PaperType": "J",
                "Abstract": "Biological pathway maps are highly relevant tools for many tasks in molecular biology. They reduce the complexity of the overall biological network by partitioning it into smaller manageable parts. While this reduction of complexity is their biggest strength, it is, at the same time, their biggest weakness. By removing what is deemed not important for the primary function of the pathway, biologists lose the ability to follow and understand cross-talks between pathways. Considering these cross-talks is, however, critical in many analysis scenarios, such as judging effects of drugs. In this paper we introduce Entourage, a novel visualization technique that provides contextual information lost due to the artificial partitioning of the biological network, but at the same time limits the presented information to what is relevant to the analyst's task. We use one pathway map as the focus of an analysis and allow a larger set of contextual pathways. For these context pathways we only show the contextual subsets, i.e., the parts of the graph that are relevant to a selection. Entourage suggests related pathways based on similarities and highlights parts of a pathway that are interesting in terms of mapped experimental data. We visualize interdependencies between pathways using stubs of visual links, which we found effective yet not obtrusive. By combining this approach with visualization of experimental data, we can provide domain experts with a highly valuable tool. We demonstrate the utility of Entourage with case studies conducted with a biochemist who researches the effects of drugs on pathways. We show that the technique is well suited to investigate interdependencies between pathways and to analyze, understand, and predict the effect that drugs have on different cell types.",
                "AuthorNamesDeduped": "Alexander Lex;Christian Partl;Denis Kalkofen;Marc Streit;Samuel Gratzl;Anne Mai Wassermann;Dieter Schmalstieg;Hanspeter Pfister",
                "AuthorNames": "Alexander Lex;Christian Partl;Denis Kalkofen;Marc Streit;Samuel Gratzl;Anne Mai Wassermann;Dieter Schmalstieg;Hanspeter Pfister",
                "AuthorAffiliation": "Harvard University, USA;Graz University of Technology, Austria;Graz University of Technology, Austria;Johannes Kepler University of Linz, Austria;Johannes Kepler University of Linz, Austria;Novartis Institutes for BioMedical Research, Switzerland;Graz University of Technology, Austria;Harvard University, USA",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/tvcg.2011.250;10.1109/tvcg.2011.213;10.1109/tvcg.2009.122;10.1109/tvcg.2011.183;10.1109/infvis.2000.885087",
                "AuthorKeywords": "Pathway visualization, biological networks, subsets, graphs, biomolecular data",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 635,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1318,
                "i": [
                    1318
                ]
            }
        },
        {
            "name": "Zhitao Hou",
            "value": 21,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 655,
            "x": 97.6245322527403,
            "y": 236.6842848657968,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Towards Natural Language-Based Visualization Authoring",
                "DOI": "10.1109/tvcg.2022.3209357",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209357",
                "FirstPage": 1222,
                "LastPage": 1232,
                "PaperType": "J",
                "Abstract": "A key challenge to visualization authoring is the process of getting familiar with the complex user interfaces of authoring tools. Natural Language Interface (NLI) presents promising benefits due to its learnability and usability. However, supporting NLIs for authoring tools requires expertise in natural language processing, while existing NLIs are mostly designed for visual analytic workflow. In this paper, we propose an authoring-oriented NLI pipeline by introducing a structured representation of users' visualization editing intents, called editing actions, based on a formative study and an extensive survey on visualization construction tools. The editing actions are executable, and thus decouple natural language interpretation and visualization applications as an intermediate layer. We implement a deep learning-based NL interpreter to translate NL utterances into editing actions. The interpreter is reusable and extensible across authoring tools. The authoring tools only need to map the editing actions into tool-specific operations. To illustrate the usages of the NL interpreter, we implement an Excel chart editor and a proof-of-concept authoring tool, VisTalk. We conduct a user study with VisTalk to understand the usage patterns of NL-based authoring systems. Finally, we discuss observations on how users author charts with natural language, as well as implications for future research.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Tsinghua University, China;Carnegie Mellon University, USA;Oxford University, USA;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2744684;10.1109/tvcg.2016.2598620;10.1109/tvcg.2021.3114848;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030378;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/infvis.2005.1532146;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Visualization authoring,Natural language interface,Natural language understanding",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 75,
                "DownloadsXplore": 1174,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 140,
                "i": [
                    140
                ]
            }
        },
        {
            "name": "Tongshuang Wu",
            "value": 28,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 656,
            "x": -232.04002147417944,
            "y": -108.66199167262828,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Towards Natural Language-Based Visualization Authoring",
                "DOI": "10.1109/tvcg.2022.3209357",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209357",
                "FirstPage": 1222,
                "LastPage": 1232,
                "PaperType": "J",
                "Abstract": "A key challenge to visualization authoring is the process of getting familiar with the complex user interfaces of authoring tools. Natural Language Interface (NLI) presents promising benefits due to its learnability and usability. However, supporting NLIs for authoring tools requires expertise in natural language processing, while existing NLIs are mostly designed for visual analytic workflow. In this paper, we propose an authoring-oriented NLI pipeline by introducing a structured representation of users' visualization editing intents, called editing actions, based on a formative study and an extensive survey on visualization construction tools. The editing actions are executable, and thus decouple natural language interpretation and visualization applications as an intermediate layer. We implement a deep learning-based NL interpreter to translate NL utterances into editing actions. The interpreter is reusable and extensible across authoring tools. The authoring tools only need to map the editing actions into tool-specific operations. To illustrate the usages of the NL interpreter, we implement an Excel chart editor and a proof-of-concept authoring tool, VisTalk. We conduct a user study with VisTalk to understand the usage patterns of NL-based authoring systems. Finally, we discuss observations on how users author charts with natural language, as well as implications for future research.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Tsinghua University, China;Carnegie Mellon University, USA;Oxford University, USA;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2744684;10.1109/tvcg.2016.2598620;10.1109/tvcg.2021.3114848;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030378;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/infvis.2005.1532146;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Visualization authoring,Natural language interface,Natural language understanding",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 75,
                "DownloadsXplore": 1174,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 140,
                "i": [
                    140
                ]
            }
        },
        {
            "name": "Jiaqi Wang",
            "value": 21,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 657,
            "x": 244.6853543503443,
            "y": -76.67514177650037,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Towards Natural Language-Based Visualization Authoring",
                "DOI": "10.1109/tvcg.2022.3209357",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209357",
                "FirstPage": 1222,
                "LastPage": 1232,
                "PaperType": "J",
                "Abstract": "A key challenge to visualization authoring is the process of getting familiar with the complex user interfaces of authoring tools. Natural Language Interface (NLI) presents promising benefits due to its learnability and usability. However, supporting NLIs for authoring tools requires expertise in natural language processing, while existing NLIs are mostly designed for visual analytic workflow. In this paper, we propose an authoring-oriented NLI pipeline by introducing a structured representation of users' visualization editing intents, called editing actions, based on a formative study and an extensive survey on visualization construction tools. The editing actions are executable, and thus decouple natural language interpretation and visualization applications as an intermediate layer. We implement a deep learning-based NL interpreter to translate NL utterances into editing actions. The interpreter is reusable and extensible across authoring tools. The authoring tools only need to map the editing actions into tool-specific operations. To illustrate the usages of the NL interpreter, we implement an Excel chart editor and a proof-of-concept authoring tool, VisTalk. We conduct a user study with VisTalk to understand the usage patterns of NL-based authoring systems. Finally, we discuss observations on how users author charts with natural language, as well as implications for future research.",
                "AuthorNamesDeduped": "Yun Wang 0012;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang 0001",
                "AuthorNames": "Yun Wang;Zhitao Hou;Leixian Shen;Tongshuang Wu;Jiaqi Wang;He Huang;Haidong Zhang;Dongmei Zhang",
                "AuthorAffiliation": "Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Tsinghua University, China;Carnegie Mellon University, USA;Oxford University, USA;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China;Microsoft Research Asia (MSRA), China",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2744684;10.1109/tvcg.2016.2598620;10.1109/tvcg.2021.3114848;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030378;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2019.2934281;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/infvis.2005.1532146;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Visualization authoring,Natural language interface,Natural language understanding",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 75,
                "DownloadsXplore": 1174,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 140,
                "i": [
                    140
                ]
            }
        },
        {
            "name": "Eston Schweickart",
            "value": 137,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 658,
            "x": -128.7278315030203,
            "y": 221.98906593913586,
            "vy": 0,
            "vx": 0,
            "r": 1.1577432354634427,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Data-Driven Guides: Supporting Expressive Design for Information Graphics",
                "DOI": "10.1109/tvcg.2016.2598620",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598620",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "In recent years, there is a growing need for communicating complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding, but lack flexibility for creating custom design; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When underlying data is changed, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.",
                "AuthorNamesDeduped": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu 0001;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorNames": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University;Computer Science department, Cornell University;Adobe Research;Adobe Research;Adobe Research;Adobe Research;John A. Paulson School of Engineering and Applied Sciences, Harvard University",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/infvis.1996.559212;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598609;10.1109/tvcg.2013.234;10.1109/infvis.2004.64;10.1109/tvcg.2012.197;10.1109/infvis.2000.885086;10.1109/infvis.2000.885093;10.1109/tvcg.2014.2346979;10.1109/tvcg.2014.2346320;10.1109/tvcg.2014.2346291;10.1109/tvcg.2015.2467732;10.1109/infvis.2004.12;10.1109/tvcg.2013.191;10.1109/tvcg.2011.251;10.1109/tvcg.2010.144;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/tvcg.2013.134",
                "AuthorKeywords": "Information graphics;visualization;design tools;2D graphics",
                "AminerCitationCount": 114,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2062,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 893,
                "i": [
                    893
                ]
            }
        },
        {
            "name": "Mira Dontcheva",
            "value": 198,
            "numPapers": 30,
            "cluster": "5",
            "visible": 1,
            "index": 659,
            "x": -55.07333247436418,
            "y": -250.8324700874433,
            "vy": 0,
            "vx": 0,
            "r": 1.2279792746113989,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Data-Driven Guides: Supporting Expressive Design for Information Graphics",
                "DOI": "10.1109/tvcg.2016.2598620",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598620",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "In recent years, there is a growing need for communicating complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding, but lack flexibility for creating custom design; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When underlying data is changed, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.",
                "AuthorNamesDeduped": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu 0001;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorNames": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University;Computer Science department, Cornell University;Adobe Research;Adobe Research;Adobe Research;Adobe Research;John A. Paulson School of Engineering and Applied Sciences, Harvard University",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/infvis.1996.559212;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598609;10.1109/tvcg.2013.234;10.1109/infvis.2004.64;10.1109/tvcg.2012.197;10.1109/infvis.2000.885086;10.1109/infvis.2000.885093;10.1109/tvcg.2014.2346979;10.1109/tvcg.2014.2346320;10.1109/tvcg.2014.2346291;10.1109/tvcg.2015.2467732;10.1109/infvis.2004.12;10.1109/tvcg.2013.191;10.1109/tvcg.2011.251;10.1109/tvcg.2010.144;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/tvcg.2013.134",
                "AuthorKeywords": "Information graphics;visualization;design tools;2D graphics",
                "AminerCitationCount": 114,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2062,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 893,
                "i": [
                    893
                ]
            }
        },
        {
            "name": "Wilmot Li",
            "value": 140,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 660,
            "x": 210.20344516816854,
            "y": 147.86653319609806,
            "vy": 0,
            "vx": 0,
            "r": 1.1611974668969487,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Data-Driven Guides: Supporting Expressive Design for Information Graphics",
                "DOI": "10.1109/tvcg.2016.2598620",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598620",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "In recent years, there is a growing need for communicating complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding, but lack flexibility for creating custom design; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When underlying data is changed, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.",
                "AuthorNamesDeduped": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu 0001;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorNames": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University;Computer Science department, Cornell University;Adobe Research;Adobe Research;Adobe Research;Adobe Research;John A. Paulson School of Engineering and Applied Sciences, Harvard University",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/infvis.1996.559212;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598609;10.1109/tvcg.2013.234;10.1109/infvis.2004.64;10.1109/tvcg.2012.197;10.1109/infvis.2000.885086;10.1109/infvis.2000.885093;10.1109/tvcg.2014.2346979;10.1109/tvcg.2014.2346320;10.1109/tvcg.2014.2346291;10.1109/tvcg.2015.2467732;10.1109/infvis.2004.12;10.1109/tvcg.2013.191;10.1109/tvcg.2011.251;10.1109/tvcg.2010.144;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/tvcg.2013.134",
                "AuthorKeywords": "Information graphics;visualization;design tools;2D graphics",
                "AminerCitationCount": 114,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2062,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 893,
                "i": [
                    893
                ]
            }
        },
        {
            "name": "Jovan Popovic",
            "value": 137,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 661,
            "x": -255.0727582303287,
            "y": 32.98314734485279,
            "vy": 0,
            "vx": 0,
            "r": 1.1577432354634427,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Data-Driven Guides: Supporting Expressive Design for Information Graphics",
                "DOI": "10.1109/tvcg.2016.2598620",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598620",
                "FirstPage": 491,
                "LastPage": 500,
                "PaperType": "J",
                "Abstract": "In recent years, there is a growing need for communicating complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding, but lack flexibility for creating custom design; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When underlying data is changed, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.",
                "AuthorNamesDeduped": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu 0001;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorNames": "Nam Wook Kim;Eston Schweickart;Zhicheng Liu;Mira Dontcheva;Wilmot Li;Jovan Popovic;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University;Computer Science department, Cornell University;Adobe Research;Adobe Research;Adobe Research;Adobe Research;John A. Paulson School of Engineering and Applied Sciences, Harvard University",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/infvis.1996.559212;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598609;10.1109/tvcg.2013.234;10.1109/infvis.2004.64;10.1109/tvcg.2012.197;10.1109/infvis.2000.885086;10.1109/infvis.2000.885093;10.1109/tvcg.2014.2346979;10.1109/tvcg.2014.2346320;10.1109/tvcg.2014.2346291;10.1109/tvcg.2015.2467732;10.1109/infvis.2004.12;10.1109/tvcg.2013.191;10.1109/tvcg.2011.251;10.1109/tvcg.2010.144;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/tvcg.2013.134",
                "AuthorKeywords": "Information graphics;visualization;design tools;2D graphics",
                "AminerCitationCount": 114,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 2062,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 893,
                "i": [
                    893
                ]
            }
        },
        {
            "name": "Tobias Höllerer",
            "value": 139,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 662,
            "x": 165.92819377634547,
            "y": -196.76847946284371,
            "vy": 0,
            "vx": 0,
            "r": 1.1600460564191135,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "iVisDesigner: Expressive Interactive Design of Information Visualizations",
                "DOI": "10.1109/tvcg.2014.2346291",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346291",
                "FirstPage": 2092,
                "LastPage": 2101,
                "PaperType": "J",
                "Abstract": "We present the design, implementation and evaluation of iVisDesigner, a web-based system that enables users to design information visualizations for complex datasets interactively, without the need for textual programming. Our system achieves high interactive expressiveness through conceptual modularity, covering a broad information visualization design space. iVisDesigner supports the interactive design of interactive visualizations, such as provisioning for responsive graph layouts and different types of brushing and linking interactions. We present the system design and implementation, exemplify it through a variety of illustrative visualization designs and discuss its limitations. A performance analysis and an informal user study are presented to evaluate the system.",
                "AuthorNamesDeduped": "Donghao Ren;Tobias Höllerer;Xiaoru Yuan",
                "AuthorNames": "Donghao Ren;Tobias Höllerer;Xiaoru Yuan",
                "AuthorAffiliation": "School of EECS, Peking University and Department of Computer Science, University of California, Santa Barbara;Department of Computer Science, University of California, Santa Barbara;Key Laboratory of Machine Perception (Ministry of Education), Peking University",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/tvcg.2010.144;10.1109/tvcg.2009.179;10.1109/tvcg.2009.174;10.1109/visual.2005.1532788;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70577;10.1109/infvis.2004.64;10.1109/tvcg.2010.126;10.1109/tvcg.2013.191;10.1109/infvis.1997.636792;10.1109/tvcg.2011.201;10.1109/tvcg.2011.261;10.1109/tvcg.2012.275;10.1109/infvis.1997.636761",
                "AuthorKeywords": "Visualization design, Interactive Design, Interaction, Expressiveness, Web-based visualization",
                "AminerCitationCount": 108,
                "CitationCountCrossRef": 80,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1765,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1174,
                "i": [
                    1174
                ]
            }
        },
        {
            "name": "Haijun Xia",
            "value": 59,
            "numPapers": 28,
            "cluster": "3",
            "visible": 1,
            "index": 663,
            "x": 10.572882519212348,
            "y": 257.36785765754615,
            "vy": 0,
            "vx": 0,
            "r": 1.0679332181922856,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Augmenting Sports Videos with VisCommentator",
                "DOI": "10.1109/tvcg.2021.3114806",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114806",
                "FirstPage": 824,
                "LastPage": 834,
                "PaperType": "J",
                "Abstract": "Visualizing data in sports videos is gaining traction in sports analytics, given its ability to communicate insights and explicate player strategies engagingly. However, augmenting sports videos with such data visualizations is challenging, especially for sports analysts, as it requires considerable expertise in video editing. To ease the creation process, we present a design space that characterizes augmented sports videos at an element-level <i>(what the constituents are)</i> and clip-level <i>(how those constituents are organized)</i>. We do so by systematically reviewing 233 examples of augmented sports videos collected from TV channels, teams, and leagues. The design space guides selection of data insights and visualizations for various purposes. Informed by the design space and close collaboration with domain experts, we design VisCommentator, a fast prototyping tool, to eases the creation of augmented table tennis videos by leveraging machine learning-based data extractors and design space-based visualization recommendations. With VisCommentator, sports analysts can create an augmented video by <i>selecting the data</i> to visualize instead of manually <i>drawing the graphical marks</i>. Our system can be generalized to other racket sports <i>(e.g</i>., tennis, badminton) once the underlying datasets and models are available. A user study with seven domain experts shows high satisfaction with our system, confirms that the participants can reproduce augmented sports videos in a short period, and provides insightful implications into future improvements and opportunities.",
                "AuthorNamesDeduped": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang 0051;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Zhutian Chen;Shuainan Ye;Xiangtong Chu;Haijun Xia;Hui Zhang;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "Department of Cognitive Science and Design Lab, State Key Lab of CAD & CG, Zhejiang University and Hong Kong University of Science and Technology, University of California, San Diego, United States;State Key Lab of CAD & CG, Zhejiang University, China;State Key Lab of CAD & CG, Zhejiang University, China;Department of Cognitive Science and Design Lab, University of California, San Diego, United States;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD & CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2019.2934810;10.1109/tvcg.2014.2346250;10.1109/tvcg.2018.2865240;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2017.2744218;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2019.2934656;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Augmented Sports Videos,Video-based Visualization,Sports visualization,Intelligent Design Tool,Storytelling",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 1771,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 259,
                "i": [
                    259
                ]
            }
        },
        {
            "name": "Chase Stokes",
            "value": 32,
            "numPapers": 26,
            "cluster": "5",
            "visible": 1,
            "index": 664,
            "x": -181.7824372902268,
            "y": -182.77074572486913,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Striking a Balance: Reader Takeaways and Preferences when Integrating Text and Charts",
                "DOI": "10.1109/tvcg.2022.3209383",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209383",
                "FirstPage": 1233,
                "LastPage": 1243,
                "PaperType": "J",
                "Abstract": "While visualizations are an effective way to represent insights about information, they rarely stand alone. When designing a visualization, text is often added to provide additional context and guidance for the reader. However, there is little experimental evidence to guide designers as to what is the right amount of text to show within a chart, what its qualitative properties should be, and where it should be placed. Prior work also shows variation in personal preferences for charts versus textual representations. In this paper, we explore several research questions about the relative value of textual components of visualizations. 302 participants ranked univariate line charts containing varying amounts of text, ranging from no text (except for the axes) to a written paragraph with no visuals. Participants also described what information they could take away from line charts containing text with varying semantic content. We find that heavily annotated charts were not penalized. In fact, participants preferred the charts with the largest number of textual annotations over charts with fewer annotations or text alone. We also find effects of semantic content. For instance, the text that describes statistical or relational components of a chart leads to more takeaways referring to statistics or relational comparisons than text describing elemental or encoded components. Finally, we find different effects for the semantic levels based on the placement of the text on the chart; some kinds of information are best placed in the title, while others should be placed closer to the data. We compile these results into four chart design guidelines and discuss future implications for the combination of text and charts.",
                "AuthorNamesDeduped": "Chase Stokes;Vidya Setlur;Bridget Cogley;Arvind Satyanarayan;Marti A. Hearst",
                "AuthorNames": "Chase Stokes;Vidya Setlur;Bridget Cogley;Arvind Satyanarayan;Marti A. Hearst",
                "AuthorAffiliation": "UC Berkeley, USA;Tableau Research, USA;Versalytix, USA;MIT CSAIL, USA;UC Berkeley, USA",
                "InternalReferences": "0.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2017.2744684;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2021.3114846;10.1109/tvcg.2021.3114802;10.1109/tvcg.2021.3114770;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145",
                "AuthorKeywords": "Visualization,text,annotation,user preference,takeaways,design,line charts",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 876,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 143,
                "i": [
                    143
                ]
            }
        },
        {
            "name": "Jonathan Zong",
            "value": 77,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 665,
            "x": 257.6942589475223,
            "y": 11.986196456228718,
            "vy": 0,
            "vx": 0,
            "r": 1.0886586067933217,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Lyra 2: Designing Interactive Visualizations by Demonstration",
                "DOI": "10.1109/tvcg.2020.3030367",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030367",
                "FirstPage": 304,
                "LastPage": 314,
                "PaperType": "J",
                "Abstract": "Recent graphical interfaces offer direct manipulation mechanisms for authoring visualizations, but are largely restricted to static output. To author interactive visualizations, users must instead turn to textual specification, but such approaches impose a higher technical burden. To bridge this gap, we introduce Lyra 2, a system that extends a prior visualization design environment with novel methods for authoring interaction techniques by demonstration. Users perform an interaction (e.g., button clicks, drags, or key presses) directly on the visualization they are editing. The system interprets this performance using a set of heuristics and enumerates suggestions of possible interaction designs. These heuristics account for the properties of the interaction (e.g., target and event type) as well as the visualization (e.g., mark and scale types, and multiple views). Interaction design suggestions are displayed as thumbnails; users can preview and test these suggestions, iteratively refine them through additional demonstrations, and finally apply and customize them via property inspectors. We evaluate our approach through a gallery of diverse examples, and evaluate its usability through a first-use study and via an analysis of its cognitive dimensions. We find that, in Lyra 2, interaction design by demonstration enables users to rapidly express a wide range of interactive visualizations.",
                "AuthorNamesDeduped": "Jonathan Zong;Dhiraj Barnwal;Rupayan Neogy;Arvind Satyanarayan",
                "AuthorNames": "Jonathan Zong;Dhiraj Barnwal;Rupayan Neogy;Arvind Satyanarayan",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Indian Institute of Technology Kharagpur;Massachusetts Institute of Technology;Massachusetts Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346250;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2598839;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/infvis.2004.12;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Direct manipulation,interactive visualization,interaction design by demonstration",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 379,
                "i": [
                    379
                ]
            }
        },
        {
            "name": "Josh Pollock",
            "value": 20,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 666,
            "x": -198.26107479867161,
            "y": 165.35581701190782,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Animated Vega-Lite: Unifying Animation with a Grammar of Interactive Graphics",
                "DOI": "10.1109/tvcg.2022.3209369",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209369",
                "FirstPage": 149,
                "LastPage": 159,
                "PaperType": "J",
                "Abstract": "We present Animated Vega-Lite, a set of extensions to Vega-Lite that model animated visualizations as time-varying data queries. In contrast to alternate approaches for specifying animated visualizations, which prize a highly expressive design space, Animated Vega-Lite prioritizes unifying animation with the language's existing abstractions for static and interactive visualizations to enable authors to smoothly move between or combine these modalities. Thus, to compose animation with static visualizations, we represent time as an encoding channel. Time encodings map a data field to animation keyframes, providing a lightweight specification for animations without interaction. To compose animation and interaction, we also represent time as an event stream; Vega-Lite selections, which provide dynamic data queries, are now driven not only by input events but by timer ticks as well. We evaluate the expressiveness of our approach through a gallery of diverse examples that demonstrate coverage over taxonomies of both interaction and animation. We also critically reflect on the conceptual affordances and limitations of our contribution by interviewing five expert developers of existing animation grammars. These reflections highlight the key motivating role of in-the-wild examples, and identify three central tradeoffs: the language design process, the types of animated transitions supported, and how the systems model keyframes.",
                "AuthorNamesDeduped": "Jonathan Zong;Josh Pollock;Dylan Wootton;Arvind Satyanarayan",
                "AuthorNames": "Jonathan Zong;Josh Pollock;Dylan Wootton;Arvind Satyanarayan",
                "AuthorAffiliation": "MIT CSAIL, USA;MIT CSAIL, USA;MIT CSAIL, USA;MIT CSAIL, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2018.2864909;10.1109/tvcg.2020.3030360;10.1109/tvcg.2014.2346250;10.1109/tvcg.2018.2865240;10.1109/tvcg.2008.125;10.1109/tvcg.2019.2934281;10.1109/tvcg.2022.3209369;10.1109/tvcg.2015.2467091;10.1109/tvcg.2020.3030396;10.1109/tvcg.2015.2467191;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367;10.1109/tvcg.2016.2599030",
                "AuthorKeywords": "Information visualization,Animation,Interaction,Toolkits,Systems,Declarative Specification",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 533,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 145,
                "i": [
                    145
                ]
            }
        },
        {
            "name": "Dylan Wootton",
            "value": 20,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 667,
            "x": 34.52116504237592,
            "y": -256.043529822796,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Animated Vega-Lite: Unifying Animation with a Grammar of Interactive Graphics",
                "DOI": "10.1109/tvcg.2022.3209369",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209369",
                "FirstPage": 149,
                "LastPage": 159,
                "PaperType": "J",
                "Abstract": "We present Animated Vega-Lite, a set of extensions to Vega-Lite that model animated visualizations as time-varying data queries. In contrast to alternate approaches for specifying animated visualizations, which prize a highly expressive design space, Animated Vega-Lite prioritizes unifying animation with the language's existing abstractions for static and interactive visualizations to enable authors to smoothly move between or combine these modalities. Thus, to compose animation with static visualizations, we represent time as an encoding channel. Time encodings map a data field to animation keyframes, providing a lightweight specification for animations without interaction. To compose animation and interaction, we also represent time as an event stream; Vega-Lite selections, which provide dynamic data queries, are now driven not only by input events but by timer ticks as well. We evaluate the expressiveness of our approach through a gallery of diverse examples that demonstrate coverage over taxonomies of both interaction and animation. We also critically reflect on the conceptual affordances and limitations of our contribution by interviewing five expert developers of existing animation grammars. These reflections highlight the key motivating role of in-the-wild examples, and identify three central tradeoffs: the language design process, the types of animated transitions supported, and how the systems model keyframes.",
                "AuthorNamesDeduped": "Jonathan Zong;Josh Pollock;Dylan Wootton;Arvind Satyanarayan",
                "AuthorNames": "Jonathan Zong;Josh Pollock;Dylan Wootton;Arvind Satyanarayan",
                "AuthorAffiliation": "MIT CSAIL, USA;MIT CSAIL, USA;MIT CSAIL, USA;MIT CSAIL, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70539;10.1109/tvcg.2018.2864909;10.1109/tvcg.2020.3030360;10.1109/tvcg.2014.2346250;10.1109/tvcg.2018.2865240;10.1109/tvcg.2008.125;10.1109/tvcg.2019.2934281;10.1109/tvcg.2022.3209369;10.1109/tvcg.2015.2467091;10.1109/tvcg.2020.3030396;10.1109/tvcg.2015.2467191;10.1109/tvcg.2007.70515;10.1109/tvcg.2020.3030367;10.1109/tvcg.2016.2599030",
                "AuthorKeywords": "Information visualization,Animation,Interaction,Toolkits,Systems,Declarative Specification",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 533,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 145,
                "i": [
                    145
                ]
            }
        },
        {
            "name": "Wai Tong",
            "value": 62,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 668,
            "x": 147.61053217282105,
            "y": 212.27607211284214,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Exploring Interactions with Printed Data Visualizations in Augmented Reality",
                "DOI": "10.1109/tvcg.2022.3209386",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209386",
                "FirstPage": 418,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.",
                "AuthorNamesDeduped": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorNames": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, USA;Harvard University, USA;Carnegie Mellon University, USA;Hong Kong University of Science and Technology, Hong Kong, USA;Hong Kong University of Science and Technology, Hong Kong, USA;University of Edinburgh, United Kingdom;Hong Kong University of Science and Technology, Hong Kong, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467201;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934283;10.1109/tvcg.2020.3030334;10.1109/tvcg.2013.121;10.1109/tvcg.2013.134;10.1109/tvcg.2017.2744319;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2020.3028948;10.1109/tvcg.2010.177;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.237;10.1109/tvcg.2020.3030392;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2599211",
                "AuthorKeywords": "Interaction design,augmented reality,paper interaction,tangible user interface,printed data visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 1055,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 147,
                "i": [
                    147
                ]
            }
        },
        {
            "name": "Fritz Lekschas",
            "value": 99,
            "numPapers": 34,
            "cluster": "5",
            "visible": 1,
            "index": 669,
            "x": -252.42242499113436,
            "y": -56.858766796292265,
            "vy": 0,
            "vx": 0,
            "r": 1.1139896373056994,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Gosling: A Grammar-based Toolkit for Scalable and Interactive Genomics Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114876",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114876",
                "FirstPage": 140,
                "LastPage": 150,
                "PaperType": "J",
                "Abstract": "The combination of diverse data types and analysis tasks in genomics has resulted in the development of a wide range of visualization techniques and tools. However, most existing tools are tailored to a specific problem or data type and offer limited customization, making it challenging to optimize visualizations for new analysis tasks or datasets. To address this challenge, we designed Gosling-a grammar for interactive and scalable genomics data visualization. Gosling balances expressiveness for comprehensive multi-scale genomics data visualizations with accessibility for domain scientists. Our accompanying JavaScript toolkit called Gosling.js provides scalable and interactive rendering. Gosling.js is built on top of an existing platform for web-based genomics data visualization to further simplify the visualization of common genomics data formats. We demonstrate the expressiveness of the grammar through a variety of real-world examples. Furthermore, we show how Gosling supports the design of novel genomics visualizations. An online editor and examples of Gosling.js, its source code, and documentation are available at <uri>https://gosling.js.org</uri>.",
                "AuthorNamesDeduped": "Sehi L'Yi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorNames": "Sehi LYi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard Medical School, Boston, MA, USA;Harvard Medical School, Boston, MA, USA;Harvard School of Engineering and Applied Sciences, Boston, MA, USA;Harvard Medical School, Boston, MA, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2013.214;10.1109/tvcg.2018.2865141;10.1109/tvcg.2017.2745978;10.1109/tvcg.2013.179;10.1109/tvcg.2009.167;10.1109/tvcg.2010.163;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2016.2598796;10.1109/tvcg.2020.3030372;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934555",
                "AuthorKeywords": "Genomics,declarative specification,visualization grammar",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 90,
                "DownloadsXplore": 1426,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 266,
                "i": [
                    266
                ]
            }
        },
        {
            "name": "Aritra Dasgupta",
            "value": 224,
            "numPapers": 42,
            "cluster": "3",
            "visible": 1,
            "index": 670,
            "x": 224.70361217095876,
            "y": -128.67900635815997,
            "vy": 0,
            "vx": 0,
            "r": 1.257915947035118,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods",
                "DOI": "10.1109/tvcg.2016.2598544",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598544",
                "FirstPage": 271,
                "LastPage": 280,
                "PaperType": "J",
                "Abstract": "Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is that, when domain experts analyze their data, can they completely trust the outputs and operations on the machine-side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on analyzing the relationships between familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system, that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists where we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with the same in conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on trustworthiness of visual analytic systems.",
                "AuthorNamesDeduped": "Aritra Dasgupta;Joon-Yong Lee;Ryan Wilson;Robert A. Lafrance;Nick Cramer;Kristin A. Cook;Samuel H. Payne",
                "AuthorNames": "Aritra Dasgupta;Joon-Yong Lee;Ryan Wilson;Robert A. Lafrance;Nick Cramer;Kristin Cook;Samuel Payne",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory",
                "InternalReferences": "0.1109/tvcg.2015.2467591;10.1109/vast.2015.7347625;10.1109/tvcg.2012.224;10.1109/infvis.2005.1532136;10.1109/vast.2006.261416;10.1109/tvcg.2013.124;10.1109/tvcg.2013.120",
                "AuthorKeywords": "trust;transparency;familiarity;uncertainty;biological data analysis",
                "AminerCitationCount": 41,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1667,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 975,
                "i": [
                    975
                ]
            }
        },
        {
            "name": "Racquel Fygenson",
            "value": 30,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 671,
            "x": -78.82674692969,
            "y": 246.85287919828403,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Multiple Forecast Visualizations (MFVs): Trade-offs in Trust and Performance in Multiple COVID-19 Forecast Visualizations",
                "DOI": "10.1109/tvcg.2022.3209457",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209457",
                "FirstPage": 12,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "The prevalence of inadequate SARS-COV-2 (COVID-19) responses may indicate a lack of trust in forecasts and risk communication. However, no work has empirically tested how multiple forecast visualization choices impact trust and task-based performance. The three studies presented in this paper ($N=1299$) examine how visualization choices impact trust in COVID-19 mortality forecasts and how they influence performance in a trend prediction task. These studies focus on line charts populated with real-time COVID-19 data that varied the number and color encoding of the forecasts and the presence of best/worst-case forecasts. The studies reveal that trust in COVID-19 forecast visualizations initially increases with the number of forecasts and then plateaus after 6–9 forecasts. However, participants were most trusting of visualizations that showed less visual information, including a 95% confidence interval, single forecast, and grayscale encoded forecasts. Participants maintained high trust in intervals labeled with 50% and 25% and did not proportionally scale their trust to the indicated interval size. Despite the high trust, the 95% CI condition was the most likely to evoke predictions that did not correspond with the actual COVID-19 trend. Qualitative analysis of participants' strategies confirmed that many participants trusted both the simplistic visualizations and those with numerous forecasts. This work provides practical guides for how COVID-19 forecast visualizations influence trust, including recommendations for identifying the range where forecasts balance trade-offs between trust and task-based performance.",
                "AuthorNamesDeduped": "Lace M. K. Padilla;Racquel Fygenson;Spencer C. Castro;Enrico Bertini",
                "AuthorNames": "Lace Padilla;Racquel Fygenson;Spencer C. Castro;Enrico Bertini",
                "AuthorAffiliation": "University of California Merced, USA;New York University, USA;University of California Merced, USA;Northeastern University, USA",
                "InternalReferences": "0.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2743898;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865193;10.1109/infvis.2004.15",
                "AuthorKeywords": "COVID-19,multiple forecast visualizations,uncertainty visualization,line charts,time-series data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 2098,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 150,
                "i": [
                    150
                ]
            }
        },
        {
            "name": "Spencer C. Castro",
            "value": 55,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 672,
            "x": -108.70318434129686,
            "y": -235.44344907867378,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Multiple Forecast Visualizations (MFVs): Trade-offs in Trust and Performance in Multiple COVID-19 Forecast Visualizations",
                "DOI": "10.1109/tvcg.2022.3209457",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209457",
                "FirstPage": 12,
                "LastPage": 22,
                "PaperType": "J",
                "Abstract": "The prevalence of inadequate SARS-COV-2 (COVID-19) responses may indicate a lack of trust in forecasts and risk communication. However, no work has empirically tested how multiple forecast visualization choices impact trust and task-based performance. The three studies presented in this paper ($N=1299$) examine how visualization choices impact trust in COVID-19 mortality forecasts and how they influence performance in a trend prediction task. These studies focus on line charts populated with real-time COVID-19 data that varied the number and color encoding of the forecasts and the presence of best/worst-case forecasts. The studies reveal that trust in COVID-19 forecast visualizations initially increases with the number of forecasts and then plateaus after 6–9 forecasts. However, participants were most trusting of visualizations that showed less visual information, including a 95% confidence interval, single forecast, and grayscale encoded forecasts. Participants maintained high trust in intervals labeled with 50% and 25% and did not proportionally scale their trust to the indicated interval size. Despite the high trust, the 95% CI condition was the most likely to evoke predictions that did not correspond with the actual COVID-19 trend. Qualitative analysis of participants' strategies confirmed that many participants trusted both the simplistic visualizations and those with numerous forecasts. This work provides practical guides for how COVID-19 forecast visualizations influence trust, including recommendations for identifying the range where forecasts balance trade-offs between trust and task-based performance.",
                "AuthorNamesDeduped": "Lace M. K. Padilla;Racquel Fygenson;Spencer C. Castro;Enrico Bertini",
                "AuthorNames": "Lace Padilla;Racquel Fygenson;Spencer C. Castro;Enrico Bertini",
                "AuthorAffiliation": "University of California Merced, USA;New York University, USA;University of California Merced, USA;Northeastern University, USA",
                "InternalReferences": "0.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2743898;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865193;10.1109/infvis.2004.15",
                "AuthorKeywords": "COVID-19,multiple forecast visualizations,uncertainty visualization,line charts,time-series data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 2098,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 150,
                "i": [
                    150
                ]
            }
        },
        {
            "name": "Sarah H. Creem-Regehr",
            "value": 56,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 673,
            "x": 239.37188320598474,
            "y": 100.25518206267648,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Visualizing Uncertain Tropical Cyclone Predictions using Representative Samples from Ensembles of Forecast Tracks",
                "DOI": "10.1109/tvcg.2018.2865193",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865193",
                "FirstPage": 882,
                "LastPage": 891,
                "PaperType": "J",
                "Abstract": "A common approach to sampling the space of a prediction is the generation of an ensemble of potential outcomes, where the ensemble's distribution reveals the statistical structure of the prediction space. For example, the US National Hurricane Center generates multiple day predictions for a storm's path, size, and wind speed, and then uses a Monte Carlo approach to sample this prediction into a large ensemble of potential storm outcomes. Various forms of summary visualizations are generated from such an ensemble, often using spatial spread to indicate its statistical characteristics. However, studies have shown that changes in the size of such summary glyphs, representing changes in the uncertainty of the prediction, are frequently confounded with other attributes of the phenomenon, such as its size or strength. In addition, simulation ensembles typically encode multivariate information, which can be difficult or confusing to include in a summary display. This problem can be overcome by directly displaying the ensemble as a set of annotated trajectories, however this solution will not be effective if ensembles are densely overdrawn or structurally disorganized. We propose to overcome these difficulties by selectively sampling the original ensemble, constructing a smaller representative and spatially well organized ensemble. This can be drawn directly as a set of paths that implicitly reveals the underlying spatial uncertainty distribution of the prediction. Since this approach does not use a visual channel to encode uncertainty, additional information can more easily be encoded in the display without leading to visual confusion. To demonstrate our argument, we describe the development of a visualization for ensembles of tropical cyclone forecast tracks, explaining how their spatial and temporal predictions, as well as other crucial storm characteristics such as size and intensity, can be clearly revealed. We verify the effectiveness of this visualization approach through a cognitive study exploring how storm damage estimates are affected by the density of tracks drawn, and by the presence or absence of annotating information on storm size and intensity.",
                "AuthorNamesDeduped": "Le Liu 0007;Lace M. K. Padilla;Sarah H. Creem-Regehr;Donald H. House",
                "AuthorNames": "Le Liu;Lace Padilla;Sarah H. Creem-Regehr;Donald H. House",
                "AuthorAffiliation": "Magic Weaver Inc., Santa Clara, CA;Northwestern University, Evanston, IL, US;University of Utah, Salt Lake City, UT, US;Clemson University, Clemson, SC, US",
                "InternalReferences": "0.1109/tvcg.2017.2743898;10.1109/tvcg.2010.181;10.1109/tvcg.2014.2346455",
                "AuthorKeywords": "uncertainty visualization,hurricane forecasts,ensemble visualization,ensemble sampling,implicit uncertainty",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 1056,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 663,
                "i": [
                    663
                ]
            }
        },
        {
            "name": "Wei Peng",
            "value": 219,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 674,
            "x": -244.40802378398487,
            "y": 87.83346691328474,
            "vy": 0,
            "vx": 0,
            "r": 1.2521588946459412,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Clutter Reduction in Multi-Dimensional Data Visualization Using Dimension Reordering",
                "DOI": "10.1109/infvis.2004.15",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.15",
                "FirstPage": 89,
                "LastPage": 96,
                "PaperType": "C",
                "Abstract": "Visual clutter denotes a disordered collection of graphical entities in information visualization. Clutter can obscure the structure present in the data. Even in a small dataset, clutter can make it hard for the viewer to find patterns, relationships and structure. In this paper, we define visual clutter as any aspect of the visualization that interferes with the viewer's understanding of the data, and present the concept of clutter-based dimension reordering. Dimension order is an attribute that can significantly affect a visualization's expressiveness. By varying the dimension order in a display, it is possible to reduce clutter without reducing information content or modifying the data in any way. Clutter reduction is a display-dependent task. In this paper, we follow a three-step procedure for four different visualization techniques. For each display technique, first, we determine what constitutes clutter in terms of display properties; then we design a metric to measure visual clutter in this display; finally we search for an order that minimizes the clutter in a display",
                "AuthorNamesDeduped": "Wei Peng;Matthew O. Ward;Elke A. Rundensteiner",
                "AuthorNames": "Wei Peng;M.O. Ward;E.A. Rundensteiner",
                "AuthorAffiliation": "Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA",
                "InternalReferences": "0.1109/infvis.2003.1249015;10.1109/visual.1996.567800;10.1109/visual.1990.146386;10.1109/infvis.1998.729559;10.1109/visual.1999.809866;10.1109/infvis.1996.559215;10.1109/infvis.2000.885086",
                "AuthorKeywords": "Multidimensional visualization, dimension order, visual clutter, visual structure",
                "AminerCitationCount": 412,
                "CitationCountCrossRef": 101,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 2002,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2452,
                "i": [
                    2452
                ]
            }
        },
        {
            "name": "Michelle Borkin",
            "value": 213,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 675,
            "x": 120.97779589632174,
            "y": -230.0312433128768,
            "vy": 0,
            "vx": 0,
            "r": 1.2452504317789292,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "What Makes a Visualization Memorable?",
                "DOI": "10.1109/tvcg.2013.234",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.234",
                "FirstPage": 2306,
                "LastPage": 2315,
                "PaperType": "J",
                "Abstract": "An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.",
                "AuthorNamesDeduped": "Michelle Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorNames": "Michelle A. Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2012.221;10.1109/infvis.2004.59;10.1109/tvcg.2012.197;10.1109/tvcg.2012.245;10.1109/tvcg.2011.175",
                "AuthorKeywords": "Visualization taxonomy, information visualization, memorability",
                "AminerCitationCount": 635,
                "CitationCountCrossRef": 371,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 12816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1297,
                "i": [
                    1297
                ]
            }
        },
        {
            "name": "Azalea A. Vo",
            "value": 188,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 676,
            "x": 66.22757739449897,
            "y": 251.52317585553754,
            "vy": 0,
            "vx": 0,
            "r": 1.2164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "What Makes a Visualization Memorable?",
                "DOI": "10.1109/tvcg.2013.234",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.234",
                "FirstPage": 2306,
                "LastPage": 2315,
                "PaperType": "J",
                "Abstract": "An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.",
                "AuthorNamesDeduped": "Michelle Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorNames": "Michelle A. Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2012.221;10.1109/infvis.2004.59;10.1109/tvcg.2012.197;10.1109/tvcg.2012.245;10.1109/tvcg.2011.175",
                "AuthorKeywords": "Visualization taxonomy, information visualization, memorability",
                "AminerCitationCount": 635,
                "CitationCountCrossRef": 371,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 12816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1297,
                "i": [
                    1297
                ]
            }
        },
        {
            "name": "Phillip Isola",
            "value": 188,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 677,
            "x": -218.89722593745316,
            "y": -140.83325060825513,
            "vy": 0,
            "vx": 0,
            "r": 1.2164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "What Makes a Visualization Memorable?",
                "DOI": "10.1109/tvcg.2013.234",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.234",
                "FirstPage": 2306,
                "LastPage": 2315,
                "PaperType": "J",
                "Abstract": "An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.",
                "AuthorNamesDeduped": "Michelle Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorNames": "Michelle A. Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2012.221;10.1109/infvis.2004.59;10.1109/tvcg.2012.197;10.1109/tvcg.2012.245;10.1109/tvcg.2011.175",
                "AuthorKeywords": "Visualization taxonomy, information visualization, memorability",
                "AminerCitationCount": 635,
                "CitationCountCrossRef": 371,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 12816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1297,
                "i": [
                    1297
                ]
            }
        },
        {
            "name": "Shashank Sunkavalli",
            "value": 188,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 678,
            "x": 256.7287540290796,
            "y": -44.04936837999304,
            "vy": 0,
            "vx": 0,
            "r": 1.2164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "What Makes a Visualization Memorable?",
                "DOI": "10.1109/tvcg.2013.234",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.234",
                "FirstPage": 2306,
                "LastPage": 2315,
                "PaperType": "J",
                "Abstract": "An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.",
                "AuthorNamesDeduped": "Michelle Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorNames": "Michelle A. Borkin;Azalea A. Vo;Zoya Bylinskii;Phillip Isola;Shashank Sunkavalli;Aude Oliva;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering & Applied Sciences, Harvard University, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA;Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA;School of Engineering & Applied Sciences, Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2012.221;10.1109/infvis.2004.59;10.1109/tvcg.2012.197;10.1109/tvcg.2012.245;10.1109/tvcg.2011.175",
                "AuthorKeywords": "Visualization taxonomy, information visualization, memorability",
                "AminerCitationCount": 635,
                "CitationCountCrossRef": 371,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 12816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1297,
                "i": [
                    1297
                ]
            }
        },
        {
            "name": "Mengdi Sun",
            "value": 23,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 679,
            "x": -159.66640393357545,
            "y": 206.05008967462334,
            "vy": 0,
            "vx": 0,
            "r": 1.0264824409902131,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Supporting Expressive and Faithful Pictorial Visualization Design with Visual Style Transfer",
                "DOI": "10.1109/tvcg.2022.3209486",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209486",
                "FirstPage": 236,
                "LastPage": 246,
                "PaperType": "J",
                "Abstract": "Pictorial visualizations portray data with figurative messages and approximate the audience to the visualization. Previous research on pictorial visualizations has developed authoring tools or generation systems, but their methods are restricted to specific visualization types and templates. Instead, we propose to augment pictorial visualization authoring with visual style transfer, enabling a more extensible approach to visualization design. To explore this, our work presents Vistylist, a design support tool that disentangles the visual style of a source pictorial visualization from its content and transfers the visual style to one or more intended pictorial visualizations. We evaluated Vistylist through a survey of example pictorial visualizations, a controlled user study, and a series of expert interviews. The results of our evaluation indicated that Vistylist is useful for creating expressive and faithful pictorial visualizations.",
                "AuthorNamesDeduped": "Yang Shi 0007;Pei Liu;Siji Chen;Mengdi Sun;Nan Cao 0001",
                "AuthorNames": "Yang Shi;Pei Liu;Siji Chen;Mengdi Sun;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934785;10.1109/tvcg.2016.2598620;10.1109/tvcg.2021.3114775;10.1109/tvcg.2020.3030406;10.1109/tvcg.2020.3030448;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403",
                "AuthorKeywords": "Pictorial visualization,data-driven design",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 1159,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 151,
                "i": [
                    151
                ]
            }
        },
        {
            "name": "Yuchen Yang",
            "value": 25,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 680,
            "x": -21.467577756230018,
            "y": -259.97912051793736,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization",
                "DOI": "10.1109/tvcg.2022.3209447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209447",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.",
                "AuthorNamesDeduped": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu 0001;Yingcai Wu",
                "AuthorNames": "Lu Ying;Xinhuan Shu;Dazhen Deng;Yuchen Yang;Tan Tang;Lingyun Yu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;School of Art and Archaeology, Zhejiang University, Hangzhou, China;Department of Computing, Xi'an Jiaotong-Liverpool University, Suzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2021.3114792;10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209468;10.1109/tvcg.2018.2864769;10.1109/tvcg.2015.2468292;10.1109/tvcg.2016.2598620;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2013.206;10.1109/tvcg.2017.2745258;10.1109/tvcg.2020.3030359;10.1109/tvcg.2021.3114877;10.1109/vast50239.2020.00014;10.1109/tvcg.2022.3209360;10.1109/tvcg.2019.2934613;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Glyph-based visualization,metaphor,machine learning,automatic visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 152,
                "i": [
                    152
                ]
            }
        },
        {
            "name": "Furu Wei",
            "value": 161,
            "numPapers": 32,
            "cluster": "1",
            "visible": 1,
            "index": 681,
            "x": 191.583508113701,
            "y": 177.32952212998117,
            "vy": 0,
            "vx": 0,
            "r": 1.185377086931491,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "An Uncertainty-Aware Approach for Exploratory Microblog Retrieval",
                "DOI": "10.1109/tvcg.2015.2467554",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467554",
                "FirstPage": 250,
                "LastPage": 259,
                "PaperType": "J",
                "Abstract": "Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.",
                "AuthorNamesDeduped": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorNames": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorAffiliation": "Tsinghua University;Tsinghua University;USTC;Microsoft;Microsoft;University of Maryland, Baltimore County",
                "InternalReferences": "0.1109/tvcg.2013.186;10.1109/tvcg.2012.291;10.1109/vast.2009.5332611;10.1109/tvcg.2013.223;10.1109/tvcg.2011.233;10.1109/vast.2014.7042494;10.1109/visual.1996.568116;10.1109/infvis.2005.1532150;10.1109/vast.2010.5652931;10.1109/tvcg.2011.197;10.1109/tvcg.2014.2346919;10.1109/tvcg.2013.232;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346920;10.1109/tvcg.2010.183;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "microblog data, mutual reinforcement model, uncertainty modeling, uncertainty visualization, uncertainty propagation",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 1373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1112,
                "i": [
                    1112
                ]
            }
        },
        {
            "name": "Nina McCurdy",
            "value": 57,
            "numPapers": 23,
            "cluster": "1",
            "visible": 1,
            "index": 682,
            "x": -261.2435448899305,
            "y": -1.3454565555493048,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "A Framework for Externalizing Implicit Error Using Visualization",
                "DOI": "10.1109/tvcg.2018.2864913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864913",
                "FirstPage": 925,
                "LastPage": 935,
                "PaperType": "J",
                "Abstract": "This paper presents a framework for externalizing and analyzing expert knowledge about discrepancies in data through the use of visualization. Grounded in an 18-month design study with global health experts, the framework formalizes the notion of data discrepancies as implicit error, both in global health data and more broadly. We use the term implicit error to describe measurement error that is inherent to and pervasive throughout a dataset, but that isn't explicitly accounted for or defined. Instead, implicit error exists in the minds of experts, is mainly qualitative, and is accounted for subjectively during expert interpretation of the data. Externalizing knowledge surrounding implicit error can assist in synchronizing, validating, and enhancing interpretation, and can inform error analysis and mitigation. The framework consists of a description of implicit error components that are important for downstream analysis, along with a process model for externalizing and analyzing implicit error using visualization. As a second contribution, we provide a rich, reflective, and verifiable description of our research process as an exemplar summary toward the ongoing inquiry into ways of increasing the validity and transferability of design study research.",
                "AuthorNamesDeduped": "Nina McCurdy;Julie Gerdes;Miriah D. Meyer",
                "AuthorNames": "Nina Mccurdy;Julie Gerdes;Miriah Meyer",
                "AuthorAffiliation": "University of Utah, Salt Lake City, UT, US;Texas Tech University, Lubbock, TX, US;University of Utah, Salt Lake City, UT, US",
                "InternalReferences": "0.1109/vast.2011.6102457;10.1109/vast.2010.5652885;10.1109/tvcg.2017.2743898;10.1109/tvcg.2017.2745240;10.1109/infvis.2005.1532134;10.1109/tvcg.2015.2467551;10.1109/visual.2005.1532781;10.1109/tvcg.2007.70577;10.1109/tvcg.2013.132;10.1109/tvcg.2007.70589;10.1109/tvcg.2016.2598543;10.1109/tvcg.2012.213",
                "AuthorKeywords": "implicit error,knowledge externalization,design study",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 75,
                "DownloadsXplore": 864,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 675,
                "i": [
                    675
                ]
            }
        },
        {
            "name": "Aditeya Pandey",
            "value": 19,
            "numPapers": 43,
            "cluster": "5",
            "visible": 1,
            "index": 683,
            "x": 193.68343933904362,
            "y": -175.60388755890065,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MEDLEY: Intent-based Recommendations to Support Dashboard Composition",
                "DOI": "10.1109/tvcg.2022.3209421",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209421",
                "FirstPage": 1135,
                "LastPage": 1145,
                "PaperType": "J",
                "Abstract": "Despite the ever-growing popularity of dashboards across a wide range of domains, their authoring still remains a tedious and complex process. Current tools offer considerable support for creating individual visualizations but provide limited support for discovering groups of visualizations that can be collectively useful for composing analytic dashboards. To address this problem, we present Medley, a mixed-initiative interface that assists in dashboard composition by recommending dashboard collections (i.e., a logically grouped set of views and filtering widgets) that map to specific analytical intents. Users can specify dashboard intents (namely, measure analysis, change analysis, category analysis, or distribution analysis) explicitly through an input panel in the interface or implicitly by selecting data attributes and views of interest. The system recommends collections based on these analytic intents, and views and widgets can be selected to compose a variety of dashboards. Medley also provides a lightweight direct manipulation interface to configure interactions between views in a dashboard. Based on a study with 13 participants performing both targeted and open-ended tasks, we discuss how Medley's recommendations guide dashboard composition and facilitate different user workflows. Observations from the study identify potential directions for future work, including combining manual view specification with dashboard recommendations and designing natural language interfaces for dashboard authoring.",
                "AuthorNamesDeduped": "Aditeya Pandey;Arjun Srinivasan;Vidya Setlur",
                "AuthorNames": "Aditeya Pandey;Arjun Srinivasan;Vidya Setlur",
                "AuthorAffiliation": "Northeastern University, USA;Tableau Research, Germany;Tableau Research, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2020.3030338;10.1109/tvcg.2020.3030424;10.1109/tvcg.2021.3114860;10.1109/tvcg.2021.3114848;10.1109/tvcg.2007.70594;10.1109/tvcg.2020.3030378;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2017.2744184;10.1109/tvcg.2016.2599030;10.1109/tvcg.2013.120;10.1109/tvcg.2018.2865145;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2021.3114826",
                "AuthorKeywords": "Dashboards,intent,recommendations,direct manipulation,multi-view coordination",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 1038,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 155,
                "i": [
                    155
                ]
            }
        },
        {
            "name": "Michael Brudno",
            "value": 72,
            "numPapers": 48,
            "cluster": "5",
            "visible": 1,
            "index": 684,
            "x": -24.215113328410478,
            "y": 260.5064841544107,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ChartWalk: Navigating large collections of text notes in electronic health records for clinical chart review",
                "DOI": "10.1109/tvcg.2022.3209444",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209444",
                "FirstPage": 1244,
                "LastPage": 1254,
                "PaperType": "J",
                "Abstract": "Before seeing a patient for the first time, healthcare workers will typically conduct a comprehensive clinical chart review of the patient's electronic health record (EHR). Within the diverse documentation pieces included there, text notes are among the most important and thoroughly perused segments for this task; and yet they are among the least supported medium in terms of content navigation and overview. In this work, we delve deeper into the task of clinical chart review from a data visualization perspective and propose a hybrid graphics+text approach via ChartWalk, an interactive tool to support the review of text notes in EHRs. We report on our iterative design process grounded in input provided by a diverse range of healthcare professionals, with steps including: (a) initial requirements distilled from interviews and the literature, (b) an interim evaluation to validate design decisions, and (c) a task-based qualitative evaluation of our final design. We contribute lessons learned to better support the design of tools not only for clinical chart reviews but also other healthcare-related tasks around medical text analysis.",
                "AuthorNamesDeduped": "Nicole Sultanum;Farooq Naeem;Michael Brudno;Fanny Chevalier",
                "AuthorNames": "Nicole Sultanum;Farooq Naeem;Michael Brudno;Fanny Chevalier",
                "AuthorAffiliation": "University of Toronto, Canada;Centre for Addiction and Mental Health (CAMH), Canada;University of Toronto, Canada;University of Toronto, Canada",
                "InternalReferences": "0.1109/vast.2014.7042493;10.1109/tvcg.2015.2467757;10.1109/tvcg.2014.2346431;10.1109/vast.2010.5652922;10.1109/vast.2012.6400485;10.1109/tvcg.2014.2346743;10.1109/vast.2007.4389006;10.1109/tvcg.2015.2467759;10.1109/tvcg.2018.2864905;10.1109/vast.2014.7042496;10.1109/tvcg.2010.129;10.1109/tvcg.2014.2346677",
                "AuthorKeywords": "Electronic Health Record (EHR),Text Visualization,Close+Distant Reading,Clinical Overview,Medicine",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 1002,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 156,
                "i": [
                    156
                ]
            }
        },
        {
            "name": "Yixuan Zhang 0001",
            "value": 126,
            "numPapers": 26,
            "cluster": "5",
            "visible": 1,
            "index": 685,
            "x": -158.22958461706267,
            "y": -208.598654242811,
            "vy": 0,
            "vx": 0,
            "r": 1.145077720207254,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visualization Design Practices in a Crisis: Behind the Scenes with COVID-19 Dashboard Creators",
                "DOI": "10.1109/tvcg.2022.3209493",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209493",
                "FirstPage": 1037,
                "LastPage": 1047,
                "PaperType": "J",
                "Abstract": "During the COVID-19 pandemic, a number of data visualizations were created to inform the public about the rapidly evolving crisis. Data dashboards, a form of information dissemination used during the pandemic, have facilitated this process by visualizing statistics regarding the number of COVID-19 cases over time. Prior work on COVID-19 visualizations has primarily focused on the design and evaluation of specific visualization systems from technology-centered perspectives. However, little is known about what occurs behind the scenes during the visualization creation processes, given the complex sociotechnical contexts in which they are embedded. Yet, such ecological knowledge is necessary to help characterize the nuances and trajectories of visualization design practices in the wild, as well as generate insights into how creators come to understand and approach visualization design on their own terms and for their own situated purposes. In this research, we conducted a qualitative interview study among dashboard creators from federal agencies, state health departments, mainstream news media outlets, and other organizations that created (often widely-used) COVID-19 dashboards to answer the following questions: how did visualization creators engage in COVID-19 dashboard design, and what tensions, conflicts, and challenges arose during this process? Our findings detail the trajectory of design practices—from creation to expansion, maintenance, and termination—that are shaped by the complex interplay between design goals, tools and technologies, labor, emerging crisis contexts, and public engagement. We particularly examined the tensions between designers and the general public involved in these processes. These conflicts, which often materialized due to a divergence between public demands and standing policies, centered around the type and amount of information to be visualized, how public perceptions shape and are shaped by visualization design, and the strategies utilized to deal with (potential) misinterpretations and misuse of visualizations. Our findings and lessons learned shed light on new ways of thinking in visualization design, focusing on the bundled activities that are invariably involved in human and nonhuman participation throughout the entire trajectory of design practice.",
                "AuthorNamesDeduped": "Yixuan Zhang 0001;Yifan Sun 0002;Joseph D. Gaggiano;Neha Kumar 0001;Clio Andris;Andrea G. Parker",
                "AuthorNames": "Yixuan Zhang;Yifan Sun;Joseph D. Gaggiano;Neha Kumar;Clio Andris;Andrea G. Parker",
                "AuthorAffiliation": "The Georgia Institute of Technology, USA;The College of William & Mary, USA;The Georgia Institute of Technology, USA;The Georgia Institute of Technology, USA;The Georgia Institute of Technology, USA;The Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/vast.2011.6102438;10.1109/tvcg.2014.2346930;10.1109/tvcg.2014.2346331;10.1109/tvcg.2021.3114959;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934538;10.1109/tvcg.2011.225",
                "AuthorKeywords": "Design practices,data visualization,COVID-19,qualitative research,general public,public health,crisis,dashboard",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 892,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 157,
                "i": [
                    157
                ]
            }
        },
        {
            "name": "Robert Kosara",
            "value": 357,
            "numPapers": 50,
            "cluster": "5",
            "visible": 1,
            "index": 686,
            "x": 257.76774657267174,
            "y": 46.9658261595268,
            "vy": 0,
            "vx": 0,
            "r": 1.4110535405872193,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "Legible Cities: Focus-Dependent Multi-Resolution Visualization of Urban Relationships",
                "DOI": "10.1109/tvcg.2007.70574",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70574",
                "FirstPage": 1169,
                "LastPage": 1175,
                "PaperType": "J",
                "Abstract": "Numerous systems have been developed to display large collections of data for urban contexts; however, most have focused on layering of single dimensions of data and manual calculations to understand relationships within the urban environment. Furthermore, these systems often limit the user's perspectives on the data, thereby diminishing the user's spatial understanding of the viewing region. In this paper, we introduce a highly interactive urban visualization tool that provides intuitive understanding of the urban data. Our system utilizes an aggregation method that combines buildings and city blocks into legible clusters, thus providing continuous levels of abstraction while preserving the user's mental model of the city. In conjunction with a 3D view of the urban model, a separate but integrated information visualization view displays multiple disparate dimensions of the urban data, allowing the user to understand the urban environment both spatially and cognitively in one glance. For our evaluation, expert users from various backgrounds viewed a real city model with census data and confirmed that our system allowed them to gain more intuitive and deeper understanding of the urban model from different perspectives and levels of abstraction than existing commercial urban visualization systems.",
                "AuthorNamesDeduped": "Remco Chang;Ginette Wessel;Robert Kosara;Eric Sauda;William Ribarsky",
                "AuthorNames": "Remco Chang;Ginette Wessel;Robert Kosara;Eric Sauda;William Ribarsky",
                "AuthorAffiliation": "Department of Computer Science, UNC-Charlotte, USA;UNC Charlotte College of Architecture, USA;Department of Computer Science, UNC-Charlotte, USA;UNC Charlotte College of Architecture, USA;Department of Computer Science, UNC-Charlotte, USA",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/visual.1990.146402;10.1109/infvis.2005.1532149",
                "AuthorKeywords": "Urban models, information visualization, multi-resolution",
                "AminerCitationCount": 91,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 718,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2106,
                "i": [
                    2106
                ]
            }
        },
        {
            "name": "Christine Nothelfer",
            "value": 125,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 687,
            "x": -221.95635537364095,
            "y": 139.59002940486124,
            "vy": 0,
            "vx": 0,
            "r": 1.1439263097294186,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Perception of Average Value in Multiclass Scatterplots",
                "DOI": "10.1109/tvcg.2013.183",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.183",
                "FirstPage": 2316,
                "LastPage": 2325,
                "PaperType": "J",
                "Abstract": "The visual system can make highly efficient aggregate judgements about a set of objects, with speed roughly independent of the number of objects considered. While there is a rich literature on these mechanisms and their ramifications for visual summarization tasks, this prior work rarely considers more complex tasks requiring multiple judgements over long periods of time, and has not considered certain critical aggregation types, such as the localization of the mean value of a set of points. In this paper, we explore these questions using a common visualization task as a case study: relative mean value judgements within multi-class scatterplots. We describe how the perception literature provides a set of expected constraints on the task, and evaluate these predictions with a large-scale perceptual study with crowd-sourced participants. Judgements are no harder when each set contains more points, redundant and conflicting encodings, as well as additional sets, do not strongly affect performance, and judgements are harder when using less salient encodings. These results have concrete ramifications for the design of scatterplots.",
                "AuthorNamesDeduped": "Michael Gleicher;Michael Correll;Christine Nothelfer;Steven Franconeri",
                "AuthorNames": "Michael Gleicher;Michael Correll;Christine Nothelfer;Steven Franconeri",
                "AuthorAffiliation": "Department of Computer Sciences, University of Wisconsin, Madison, USA;Department of Computer Sciences, University of Wisconsin, Madison, USA;Department of Psychology, Northwestern University, USA;Department of Psychology, Northwestern University, USA",
                "InternalReferences": "0.1109/tvcg.2012.233",
                "AuthorKeywords": "Psychophysics, Information Visualization, Perceptual Study",
                "AminerCitationCount": 99,
                "CitationCountCrossRef": 70,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1155,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1309,
                "i": [
                    1309
                ]
            }
        },
        {
            "name": "Lane Harrison",
            "value": 218,
            "numPapers": 39,
            "cluster": "5",
            "visible": 1,
            "index": 688,
            "x": 69.42243273526243,
            "y": -253.04253759618751,
            "vy": 0,
            "vx": 0,
            "r": 1.251007484168106,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "HindSight: Encouraging Exploration through Direct Encoding of Personal Interaction History",
                "DOI": "10.1109/tvcg.2016.2599058",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599058",
                "FirstPage": 351,
                "LastPage": 360,
                "PaperType": "J",
                "Abstract": "Physical and digital objects often leave markers of our use. Website links turn purple after we visit them, for example, showing us information we have yet to explore. These “footprints” of interaction offer substantial benefits in information saturated environments - they enable us to easily revisit old information, systematically explore new information, and quickly resume tasks after interruption. While applying these design principles have been successful in HCI contexts, direct encodings of personal interaction history have received scarce attention in data visualization. One reason is that there is little guidance for integrating history into visualizations where many visual channels are already occupied by data. More importantly, there is not firm evidence that making users aware of their interaction history results in benefits with regards to exploration or insights. Following these observations, we propose HindSight - an umbrella term for the design space of representing interaction history directly in existing data visualizations. In this paper, we examine the value of HindSight principles by augmenting existing visualizations with visual indicators of user interaction history (e.g. How the Recession Shaped the Economy in 255 Charts, NYTimes). In controlled experiments of over 400 participants, we found that HindSight designs generally encouraged people to visit more data and recall different insights after interaction. The results of our experiments suggest that simple additions to visualizations can make users aware of their interaction history, and that these additions significantly impact users' exploration and insights.",
                "AuthorNamesDeduped": "Mi Feng;Cheng Deng;Evan M. Peck;Lane Harrison",
                "AuthorNames": "Mi Feng;Cheng Deng;Evan M. Peck;Lane Harrison",
                "AuthorAffiliation": "Worcester Polytechnic Institute;Worcester Polytechnic Institute;Bucknell University;Worcester Polytechnic Institute",
                "InternalReferences": "0.1109/visual.2002.1183791;10.1109/visual.2005.1532788;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.137;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.109",
                "AuthorKeywords": "History;Visualization;Interaction",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 955,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 914,
                "i": [
                    914
                ]
            }
        },
        {
            "name": "Joel Shapiro",
            "value": 57,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 689,
            "x": 119.82476087299715,
            "y": 233.64936696196946,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Illusion of Causality in Visualized Data",
                "DOI": "10.1109/tvcg.2019.2934399",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934399",
                "FirstPage": 853,
                "LastPage": 862,
                "PaperType": "J",
                "Abstract": "Students who eat breakfast more frequently tend to have a higher grade point average. From this data, many people might confidently state that a before-school breakfast program would lead to higher grades. This is a reasoning error, because correlation does not necessarily indicate causation – X and Y can be correlated without one directly causing the other. While this error is pervasive, its prevalence might be amplified or mitigated by the way that the data is presented to a viewer. Across three crowdsourced experiments, we examined whether how simple data relations are presented would mitigate this reasoning error. The first experiment tested examples similar to the breakfast-GPA relation, varying in the plausibility of the causal link. We asked participants to rate their level of agreement that the relation was correlated, which they rated appropriately as high. However, participants also expressed high agreement with a causal interpretation of the data. Levels of support for the causal interpretation were not equally strong across visualization types: causality ratings were highest for text descriptions and bar graphs, but weaker for scatter plots. But is this effect driven by bar graphs aggregating data into two groups or by the visual encoding type? We isolated data aggregation versus visual encoding type and examined their individual effect on perceived causality. Overall, different visualization designs afford different cognitive reasoning affordances across the same data. High levels of data aggregation by graphs tend to be associated with higher perceived causality in data. Participants perceived line and dot visual encodings as more causal than bar encodings. Our results demonstrate how some visualization designs trigger stronger causal links while choosing others can help mitigate unwarranted perceptions of causality.",
                "AuthorNamesDeduped": "Cindy Xiong;Joel Shapiro;Jessica Hullman;Steven Franconeri",
                "AuthorNames": "Cindy Xiong;Joel Shapiro;Jessica Hullman;Steven Franconeri",
                "AuthorAffiliation": "Northwestern University;Northwestern University, Kellogg School of Management;Northwestern University;Northwestern University",
                "InternalReferences": "0.1109/vast.2017.8585665;10.1109/tvcg.2014.2346298;10.1109/tvcg.2016.2598594;10.1109/tvcg.2013.173;10.1109/tvcg.2014.2346979;10.1109/tvcg.2017.2743898;10.1109/tvcg.2018.2864909;10.1109/tvcg.2017.2745240;10.1109/tvcg.2014.2346419;10.1109/tvcg.2017.2744184",
                "AuthorKeywords": "Information Visualization,Correlation and Causation,Visualization Design,Reasoning Affordance",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 533,
                "i": [
                    533
                ]
            }
        },
        {
            "name": "Raimund Dachselt",
            "value": 95,
            "numPapers": 82,
            "cluster": "5",
            "visible": 1,
            "index": 690,
            "x": -246.36138724315515,
            "y": -91.41152485123608,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Personal Augmented Reality for Information Visualization on Large Interactive Displays",
                "DOI": "10.1109/tvcg.2020.3030460",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030460",
                "FirstPage": 1182,
                "LastPage": 1192,
                "PaperType": "J",
                "Abstract": "In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we have identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. In order to demonstrate their potential, we further present a use case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has a great potential for data analysis and sense-making.",
                "AuthorNamesDeduped": "Patrick Reipschläger;Tamara Flemisch;Raimund Dachselt",
                "AuthorNames": "Patrick Reipschlager;Tamara Flemisch;Raimund Dachselt",
                "AuthorAffiliation": "Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden;Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden;Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2012.251;10.1109/tvcg.2008.153;10.1109/tvcg.2019.2934415;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.197;10.1109/tvcg.2013.163;10.1109/tvcg.2013.166;10.1109/tvcg.2018.2865235;10.1109/tvcg.2012.204;10.1109/tvcg.2017.2744184;10.1109/tvcg.2009.162;10.1109/tvcg.2017.2745958;10.1109/tvcg.2012.275;10.1109/tvcg.2017.2745258;10.1109/tvcg.2016.2598608;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Augmented Reality,Information Visualization,InfoVis,Large Displays,Immersive Analytics,Physical Navigation,Multiple Coordinated Views",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 78,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 5904,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 361,
                "i": [
                    361
                ]
            }
        },
        {
            "name": "Rita Sevastjanova",
            "value": 73,
            "numPapers": 26,
            "cluster": "5",
            "visible": 1,
            "index": 691,
            "x": 243.58300779773592,
            "y": -99.08238144195032,
            "vy": 0,
            "vx": 0,
            "r": 1.0840529648819806,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Comparison of Language Model Adaptation",
                "DOI": "10.1109/tvcg.2022.3209458",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209458",
                "FirstPage": 1178,
                "LastPage": 1188,
                "PaperType": "J",
                "Abstract": "Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Thus, adapters have recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters with a reduced training time and simple parameter composition. The simplicity of adapter training and composition comes along with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and detected, among others, the need for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show that, for instance, an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias where words (even gender-independent words such as countries) become more similar to female- than male pronouns. We demonstrate that these are artifacts of context-0 embeddings, and the adapter effectively eliminates the gender information from the contextualized word representations.",
                "AuthorNamesDeduped": "Rita Sevastjanova;Eren Cakmak;Shauli Ravfogel;Ryan Cotterell;Mennatallah El-Assady",
                "AuthorNames": "Rita Sevastjanova;Eren Cakmak;Shauli Ravfogel;Ryan Cotterell;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Bar-Ilan University, Israel;ETH, Israel;ETH, AI Center, Israel",
                "InternalReferences": "0.1109/tvcg.2020.3028976;10.1109/tvcg.2017.2744199;10.1109/vast.2018.8802454;10.1109/tvcg.2017.2745141;10.1109/tvcg.2018.2865230;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2865044",
                "AuthorKeywords": "Language Model Adaptation,Adapter,Word Embeddings,Sequence Classification,Visual Analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 592,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 178,
                "i": [
                    178
                ]
            }
        },
        {
            "name": "Yihong Wu 0003",
            "value": 8,
            "numPapers": 16,
            "cluster": "3",
            "visible": 1,
            "index": 692,
            "x": -112.76278877213633,
            "y": 237.7699591376727,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "OBTracker: Visual Analytics of Off-ball Movements in Basketball",
                "DOI": "10.1109/tvcg.2022.3209373",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209373",
                "FirstPage": 929,
                "LastPage": 939,
                "PaperType": "J",
                "Abstract": "In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.",
                "AuthorNamesDeduped": "Yihong Wu 0003;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Yihong Wu;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/tvcg.2013.207;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2015.2468111;10.1109/tvcg.2019.2934630;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2021.3114877;10.1109/tvcg.2018.2864503;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports visualization,basketball tracking data,off-ball movement analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 974,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 166,
                "i": [
                    166
                ]
            }
        },
        {
            "name": "Kejian Zhao",
            "value": 152,
            "numPapers": 14,
            "cluster": "3",
            "visible": 1,
            "index": 693,
            "x": -77.5194387872418,
            "y": -251.6758562320015,
            "vy": 0,
            "vx": 0,
            "r": 1.1750143926309728,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Tac-Simur: Tactic-based Simulative Visual Analytics of Table Tennis",
                "DOI": "10.1109/tvcg.2019.2934630",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934630",
                "FirstPage": 407,
                "LastPage": 417,
                "PaperType": "J",
                "Abstract": "Simulative analysis in competitive sports can provide prospective insights, which can help improve the performance of players in future matches. However, adequately simulating the complex competition process and effectively explaining the simulation result to domain experts are typically challenging. This work presents a design study to address these challenges in table tennis. We propose a well-established hybrid second-order Markov chain model to characterize and simulate the competition process in table tennis. Compared with existing methods, our approach is the first to support the effective simulation of tactics, which represent high-level competition strategies in table tennis. Furthermore, we introduce a visual analytics system called Tac-Simur based on the proposed model for simulative visual analytics. Tac-Simur enables users to easily navigate different players and their tactics based on their respective performance in matches to identify the player and the tactics of interest for further analysis. Then, users can utilize the system to interactively explore diverse simulation tasks and visually explain the simulation results. The effectiveness and usefulness of this work are demonstrated by two case studies, in which domain experts utilize Tac-Simur to find interesting and valuable insights. The domain experts also provide positive feedback on the usability of Tac-Simur. Our work can be extended to other similar sports such as tennis and badminton.",
                "AuthorNamesDeduped": "Jiachen Wang;Kejian Zhao;Dazhen Deng;Anqi Cao;Xiao Xie;Zheng Zhou;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Jiachen Wang;Kejian Zhao;Dazhen Deng;Anqi Cao;Xiao Xie;Zheng Zhou;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Department of Sport Science, Zhejiang University;Department of Sport Science, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/tvcg.2016.2598432;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865126;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041",
                "AuthorKeywords": "Simulative Visual Analytics,Table Tennis,Design Study",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1246,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 602,
                "i": [
                    602
                ]
            }
        },
        {
            "name": "Moqi He",
            "value": 8,
            "numPapers": 16,
            "cluster": "3",
            "visible": 1,
            "index": 694,
            "x": 227.3287423505255,
            "y": 133.31032556155736,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "OBTracker: Visual Analytics of Off-ball Movements in Basketball",
                "DOI": "10.1109/tvcg.2022.3209373",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209373",
                "FirstPage": 929,
                "LastPage": 939,
                "PaperType": "J",
                "Abstract": "In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.",
                "AuthorNamesDeduped": "Yihong Wu 0003;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Yihong Wu;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/tvcg.2013.207;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2015.2468111;10.1109/tvcg.2019.2934630;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2021.3114877;10.1109/tvcg.2018.2864503;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports visualization,basketball tracking data,off-ball movement analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 974,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 166,
                "i": [
                    166
                ]
            }
        },
        {
            "name": "Jie Xu",
            "value": 8,
            "numPapers": 16,
            "cluster": "3",
            "visible": 1,
            "index": 695,
            "x": -257.860415151049,
            "y": 55.29924319670826,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "OBTracker: Visual Analytics of Off-ball Movements in Basketball",
                "DOI": "10.1109/tvcg.2022.3209373",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209373",
                "FirstPage": 929,
                "LastPage": 939,
                "PaperType": "J",
                "Abstract": "In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.",
                "AuthorNamesDeduped": "Yihong Wu 0003;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Yihong Wu;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/tvcg.2013.207;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2015.2468111;10.1109/tvcg.2019.2934630;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2021.3114877;10.1109/tvcg.2018.2864503;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports visualization,basketball tracking data,off-ball movement analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 974,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 166,
                "i": [
                    166
                ]
            }
        },
        {
            "name": "Hongzeng Zhang",
            "value": 8,
            "numPapers": 16,
            "cluster": "3",
            "visible": 1,
            "index": 696,
            "x": 152.89394113791346,
            "y": -215.1126280889066,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "OBTracker: Visual Analytics of Off-ball Movements in Basketball",
                "DOI": "10.1109/tvcg.2022.3209373",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209373",
                "FirstPage": 929,
                "LastPage": 939,
                "PaperType": "J",
                "Abstract": "In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.",
                "AuthorNamesDeduped": "Yihong Wu 0003;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang 0051;Yingcai Wu",
                "AuthorNames": "Yihong Wu;Dazhen Deng;Xiao Xie;Moqi He;Jie Xu;Hongzeng Zhang;Hui Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/tvcg.2013.207;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/vast.2014.7042477;10.1109/tvcg.2017.2745181;10.1109/tvcg.2015.2468111;10.1109/tvcg.2019.2934630;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392;10.1109/tvcg.2021.3114877;10.1109/tvcg.2018.2864503;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Sports visualization,basketball tracking data,off-ball movement analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 974,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 166,
                "i": [
                    166
                ]
            }
        },
        {
            "name": "Ji Lan",
            "value": 88,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 697,
            "x": 32.5906294433401,
            "y": 262.0836715106206,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "iTTVis: Interactive Visualization of Table Tennis Data",
                "DOI": "10.1109/tvcg.2017.2744218",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744218",
                "FirstPage": 709,
                "LastPage": 718,
                "PaperType": "J",
                "Abstract": "The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.",
                "AuthorNamesDeduped": "Yingcai Wu;Ji Lan;Xinhuan Shu;Chenyang Ji;Kejian Zhao;Jiachen Wang;Hui Zhang 0051",
                "AuthorNames": "Yingcai Wu;Ji Lan;Xinhuan Shu;Chenyang Ji;Kejian Zhao;Jiachen Wang;Hui Zhang",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Department of Physical Education, Zhejiang University",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/vast.2014.7042477;10.1109/infvis.1996.559229;10.1109/tvcg.2011.208;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/tvcg.2012.213",
                "AuthorKeywords": "Sports visualization,visual knowledge discovery,sports analytics,visual knowledge representation",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 59,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2322,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 782,
                "i": [
                    782
                ]
            }
        },
        {
            "name": "Chenyang Ji",
            "value": 88,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 698,
            "x": -201.21037417742036,
            "y": -171.36039601781525,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "iTTVis: Interactive Visualization of Table Tennis Data",
                "DOI": "10.1109/tvcg.2017.2744218",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744218",
                "FirstPage": 709,
                "LastPage": 718,
                "PaperType": "J",
                "Abstract": "The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.",
                "AuthorNamesDeduped": "Yingcai Wu;Ji Lan;Xinhuan Shu;Chenyang Ji;Kejian Zhao;Jiachen Wang;Hui Zhang 0051",
                "AuthorNames": "Yingcai Wu;Ji Lan;Xinhuan Shu;Chenyang Ji;Kejian Zhao;Jiachen Wang;Hui Zhang",
                "AuthorAffiliation": "State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;State Key Lab of CAD & CG, Zhejiang University;Department of Physical Education, Zhejiang University",
                "InternalReferences": "0.1109/vast.2014.7042478;10.1109/vast.2014.7042477;10.1109/infvis.1996.559229;10.1109/tvcg.2011.208;10.1109/tvcg.2013.192;10.1109/tvcg.2012.263;10.1109/tvcg.2014.2346445;10.1109/tvcg.2012.213",
                "AuthorKeywords": "Sports visualization,visual knowledge discovery,sports analytics,visual knowledge representation",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 59,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2322,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 782,
                "i": [
                    782
                ]
            }
        },
        {
            "name": "Ulrik Brandes",
            "value": 181,
            "numPapers": 32,
            "cluster": "3",
            "visible": 1,
            "index": 699,
            "x": 264.30754581723056,
            "y": -9.56667257057443,
            "vy": 0,
            "vx": 0,
            "r": 1.208405296488198,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Interactive Level-of-Detail Rendering of Large Graphs",
                "DOI": "10.1109/tvcg.2012.238",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.238",
                "FirstPage": 2486,
                "LastPage": 2495,
                "PaperType": "J",
                "Abstract": "We propose a technique that allows straight-line graph drawings to be rendered interactively with adjustable level of detail. The approach consists of a novel combination of edge cumulation with density-based node aggregation and is designed to exploit common graphics hardware for speed. It operates directly on graph data and does not require precomputed hierarchies or meshes. As proof of concept, we present an implementation that scales to graphs with millions of nodes and edges, and discuss several example applications.",
                "AuthorNamesDeduped": "Michael Zinsmaier;Ulrik Brandes;Oliver Deussen;Hendrik Strobelt",
                "AuthorNames": "Michael Zinsmaier;Ulrik Brandes;Oliver Deussen;Hendrik Strobelt",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532150;10.1109/tvcg.2006.120;10.1109/tvcg.2011.233;10.1109/tvcg.2008.135;10.1109/tvcg.2006.187;10.1109/tvcg.2006.147;10.1109/tvcg.2010.154;10.1109/infvis.2004.66",
                "AuthorKeywords": "Graph visualization, OpenGL, edge aggregation",
                "AminerCitationCount": 123,
                "CitationCountCrossRef": 70,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 1698,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1409,
                "i": [
                    1409
                ]
            }
        },
        {
            "name": "Wenwen Dou",
            "value": 503,
            "numPapers": 80,
            "cluster": "5",
            "visible": 1,
            "index": 700,
            "x": -188.56460499177584,
            "y": 185.7239611474393,
            "vy": 0,
            "vx": 0,
            "r": 1.5791594703511802,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "HierarchicalTopics: Visually Exploring Large Text Collections Using Topic Hierarchies",
                "DOI": "10.1109/tvcg.2013.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.162",
                "FirstPage": 2002,
                "LastPage": 2011,
                "PaperType": "J",
                "Abstract": "Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate that more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most of current topic-based visualizations to represent large number of topics without being cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system - HierarchicalTopic (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aid expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of hierarchical topic structure. The study results reveal that the HT leads to faster identification of large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.",
                "AuthorNamesDeduped": "Wenwen Dou;Li Yu;Xiaoyu Wang 0001;Zhiqiang Ma 0004;William Ribarsky",
                "AuthorNames": "Wenwen Dou;Li Yu;Xiaoyu Wang;Zhiqiang Ma;William Ribarsky",
                "AuthorAffiliation": "University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2012.6400557;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485",
                "AuthorKeywords": "Hierarchical topic representation, topic modeling, visual analytics, rose tree",
                "AminerCitationCount": 189,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2934,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1370,
                "i": [
                    1370
                ]
            }
        },
        {
            "name": "Bettina Speckmann",
            "value": 173,
            "numPapers": 20,
            "cluster": "1",
            "visible": 1,
            "index": 701,
            "x": 13.59663263112181,
            "y": -264.5092277806094,
            "vy": 0,
            "vx": 0,
            "r": 1.1991940126655152,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Necklace Maps",
                "DOI": "10.1109/tvcg.2010.180",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.180",
                "FirstPage": 881,
                "LastPage": 889,
                "PaperType": "J",
                "Abstract": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.",
                "AuthorNamesDeduped": "Bettina Speckmann;Kevin Verbeek",
                "AuthorNames": "authro Speckmann;Kevin Verbeek",
                "AuthorAffiliation": "TU, Eindhoven, Netherlands;TU, Eindhoven, Netherlands",
                "InternalReferences": "0.1109/infvis.2004.57;10.1109/tvcg.2008.165",
                "AuthorKeywords": "Geographic Visualization, Automated Cartography, Proportional Symbol Maps, Necklace Maps",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 853,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1707,
                "i": [
                    1707
                ]
            }
        },
        {
            "name": "Jie Li 0006",
            "value": 19,
            "numPapers": 59,
            "cluster": "1",
            "visible": 1,
            "index": 702,
            "x": 168.76784459949857,
            "y": 204.37077733677944,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Vismate: Interactive Visual Analysis of Station-Based Observation Data on Climate Changes",
                "DOI": "10.1109/vast.2014.7042489",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042489",
                "FirstPage": 133,
                "LastPage": 142,
                "PaperType": "C",
                "Abstract": "We present a new approach to visualizing the climate data of multi-dimensional, time-series, and geo-related characteristics. Our approach integrates three new highly interrelated visualization techniques, and uses the same input data types as in the traditional model-based analysis methods. As the main visualization view, Global Radial Map is used to identify the overall state of climate changes and provide users with a compact and intuitive view for analyzing spatial and temporal patterns at the same time. Other two visualization techniques, providing complementary views, are specialized in analysing time trend and detecting abnormal cases, which are two important analysis tasks in any climate change study. Case studies and expert reviews have been conducted, through which the effectiveness and scalability of the proposed approach has been confirmed.",
                "AuthorNamesDeduped": "Jie Li 0006;Kang Zhang 0001;Zhao-Peng Meng",
                "AuthorNames": "Jie Li;Kang Zhang;Zhao-Peng Meng",
                "AuthorAffiliation": "School of Computer Science and Technology, National Ocean Technology Center, Tianjin, China;Department of Computer Science, The University of Texas, Dallas, USA;School of Computer Software, Tianjin University, China",
                "InternalReferences": "0.1109/vast.2012.6400491;10.1109/tvcg.2010.194;10.1109/infvis.2000.885098;10.1109/tvcg.2007.70523;10.1109/tvcg.2009.199;10.1109/tvcg.2010.183;10.1109/vast.2012.6400553;10.1109/tvcg.2010.180;10.1109/tvcg.2009.197;10.1109/tvcg.2008.187;10.1109/tvcg.2012.284",
                "AuthorKeywords": "climate changes, spatiotemporal visualization, station-based observation data, radial layout, visual analytics",
                "AminerCitationCount": 44,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 710,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1284,
                "i": [
                    1284
                ]
            }
        },
        {
            "name": "Ran Chen",
            "value": 58,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 703,
            "x": -262.6813952299321,
            "y": -36.72171837014465,
            "vy": 0,
            "vx": 0,
            "r": 1.0667818077144502,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "SRVis: Towards Better Spatial Integration in Ranking Visualization",
                "DOI": "10.1109/tvcg.2018.2865126",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865126",
                "FirstPage": 459,
                "LastPage": 469,
                "PaperType": "J",
                "Abstract": "Interactive ranking techniques have substantially promoted analysts' ability in making judicious and informed decisions effectively based on multiple criteria. However, the existing techniques cannot satisfactorily support the analysis tasks involved in ranking large-scale spatial alternatives, such as selecting optimal locations for chain stores, where the complex spatial contexts involved are essential to the decision-making process. Limitations observed in the prior attempts of integrating rankings with spatial contexts motivate us to develop a context-integrated visual ranking technique. Based on a set of generic design requirements we summarized by collaborating with domain experts, we propose SRVis, a novel spatial ranking visualization technique that supports efficient spatial multi-criteria decision-making processes by addressing three major challenges in the aforementioned context integration, namely, a) the presentation of spatial rankings and contexts, b) the scalability of rankings' visual representations, and c) the analysis of context-integrated spatial rankings. Specifically, we encode massive rankings and their cause with scalable matrix-based visualizations and stacked bar charts based on a novel two-phase optimization framework that minimizes the information loss, and the flexible spatial filtering and intuitive comparative analysis are adopted to enable the in-depth evaluation of the rankings and assist users in selecting the best spatial alternative. The effectiveness of the proposed technique has been evaluated and demonstrated with an empirical study of optimization methods, two case studies, and expert interviews.",
                "AuthorNamesDeduped": "Di Weng;Ran Chen;Zikun Deng;Feiran Wu;Jingmin Chen;Yingcai Wu",
                "AuthorNames": "Di Weng;Ran Chen;Zikun Deng;Feiran Wu;Jingmin Chen;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Alibaba Group, Hangzhou, China;Alibaba Group, Hangzhou, China;Zhejiang University, Hangzhou, Zhejiang, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2013.193;10.1109/tvcg.2011.185;10.1109/tvcg.2008.166;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467771;10.1109/tvcg.2008.181;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865018;10.1109/vast.2011.6102455;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598585;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467112;10.1109/tvcg.2012.253;10.1109/tvcg.2015.2467717;10.1109/tvcg.2017.2745078;10.1109/tvcg.2014.2346913",
                "AuthorKeywords": "Spatial ranking,visualization",
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1388,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 676,
                "i": [
                    676
                ]
            }
        },
        {
            "name": "Yanwei Huang",
            "value": 6,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 704,
            "x": 218.6534886035098,
            "y": -150.4681093172729,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Rigel: Transforming Tabular Data by Declarative Mapping",
                "DOI": "10.1109/tvcg.2022.3209385",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209385",
                "FirstPage": 128,
                "LastPage": 138,
                "PaperType": "J",
                "Abstract": "We present Rigel, an interactive system for rapid transformation of tabular data. Rigel implements a new declarative mapping approach that formulates the data transformation procedure as direct mappings from data to the row, column, and cell channels of the target table. To construct such mappings, Rigel allows users to directly drag data attributes from input data to these three channels and indirectly drag or type data values in a spreadsheet, and possible mappings that do not contradict these interactions are recommended to achieve efficient and straightforward data transformation. The recommended mappings are generated by enumerating and composing data variables based on the row, column, and cell channels, thereby revealing the possibility of alternative tabular forms and facilitating open-ended exploration in many data transformation scenarios, such as designing tables for presentation. In contrast to existing systems that transform data by composing operations (like transposing and pivoting), Rigel requires less prior knowledge on these operations, and constructing tables from the channels is more efficient and results in less ambiguity than generating operation sequences as done by the traditional by-example approaches. User study results demonstrated that Rigel is significantly less demanding in terms of time and interactions and suits more scenarios compared to the state-of-the-art by-example approach. A gallery of diverse transformation cases is also presented to show the potential of Rigel's expressiveness.",
                "AuthorNamesDeduped": "Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu",
                "AuthorNames": "Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Zhejiang University of Technology, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2021.3114830;10.1109/vast47406.2019.8986909;10.1109/tvcg.2011.185;10.1109/vast.2011.6102441;10.1109/tvcg.2012.219;10.1109/tvcg.2020.3030462;10.1109/tvcg.2022.3209354;10.1109/tvcg.2019.2934593;10.1109/vast.2011.6102440;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467191;10.1109/tvcg.2022.3209470",
                "AuthorKeywords": "Data transformation,self-service data transformation,programming by example,declarative specification",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 610,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 171,
                "i": [
                    171
                ]
            }
        },
        {
            "name": "Jiayi Zhou",
            "value": 6,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 705,
            "x": -59.63080650561019,
            "y": 258.832314279903,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Rigel: Transforming Tabular Data by Declarative Mapping",
                "DOI": "10.1109/tvcg.2022.3209385",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209385",
                "FirstPage": 128,
                "LastPage": 138,
                "PaperType": "J",
                "Abstract": "We present Rigel, an interactive system for rapid transformation of tabular data. Rigel implements a new declarative mapping approach that formulates the data transformation procedure as direct mappings from data to the row, column, and cell channels of the target table. To construct such mappings, Rigel allows users to directly drag data attributes from input data to these three channels and indirectly drag or type data values in a spreadsheet, and possible mappings that do not contradict these interactions are recommended to achieve efficient and straightforward data transformation. The recommended mappings are generated by enumerating and composing data variables based on the row, column, and cell channels, thereby revealing the possibility of alternative tabular forms and facilitating open-ended exploration in many data transformation scenarios, such as designing tables for presentation. In contrast to existing systems that transform data by composing operations (like transposing and pivoting), Rigel requires less prior knowledge on these operations, and constructing tables from the channels is more efficient and results in less ambiguity than generating operation sequences as done by the traditional by-example approaches. User study results demonstrated that Rigel is significantly less demanding in terms of time and interactions and suits more scenarios compared to the state-of-the-art by-example approach. A gallery of diverse transformation cases is also presented to show the potential of Rigel's expressiveness.",
                "AuthorNamesDeduped": "Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu",
                "AuthorNames": "Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Zhejiang University of Technology, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2021.3114830;10.1109/vast47406.2019.8986909;10.1109/tvcg.2011.185;10.1109/vast.2011.6102441;10.1109/tvcg.2012.219;10.1109/tvcg.2020.3030462;10.1109/tvcg.2022.3209354;10.1109/tvcg.2019.2934593;10.1109/vast.2011.6102440;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467191;10.1109/tvcg.2022.3209470",
                "AuthorKeywords": "Data transformation,self-service data transformation,programming by example,declarative specification",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 610,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 171,
                "i": [
                    171
                ]
            }
        },
        {
            "name": "Guodao Sun",
            "value": 77,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 706,
            "x": -130.96153138541143,
            "y": -231.29867551974414,
            "vy": 0,
            "vx": 0,
            "r": 1.0886586067933217,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Rigel: Transforming Tabular Data by Declarative Mapping",
                "DOI": "10.1109/tvcg.2022.3209385",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209385",
                "FirstPage": 128,
                "LastPage": 138,
                "PaperType": "J",
                "Abstract": "We present Rigel, an interactive system for rapid transformation of tabular data. Rigel implements a new declarative mapping approach that formulates the data transformation procedure as direct mappings from data to the row, column, and cell channels of the target table. To construct such mappings, Rigel allows users to directly drag data attributes from input data to these three channels and indirectly drag or type data values in a spreadsheet, and possible mappings that do not contradict these interactions are recommended to achieve efficient and straightforward data transformation. The recommended mappings are generated by enumerating and composing data variables based on the row, column, and cell channels, thereby revealing the possibility of alternative tabular forms and facilitating open-ended exploration in many data transformation scenarios, such as designing tables for presentation. In contrast to existing systems that transform data by composing operations (like transposing and pivoting), Rigel requires less prior knowledge on these operations, and constructing tables from the channels is more efficient and results in less ambiguity than generating operation sequences as done by the traditional by-example approaches. User study results demonstrated that Rigel is significantly less demanding in terms of time and interactions and suits more scenarios compared to the state-of-the-art by-example approach. A gallery of diverse transformation cases is also presented to show the potential of Rigel's expressiveness.",
                "AuthorNamesDeduped": "Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu",
                "AuthorNames": "Ran Chen;Di Weng;Yanwei Huang;Xinhuan Shu;Jiayi Zhou;Guodao Sun;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Hong Kong University of Science and Technology, Hong Kong, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Zhejiang University of Technology, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2021.3114830;10.1109/vast47406.2019.8986909;10.1109/tvcg.2011.185;10.1109/vast.2011.6102441;10.1109/tvcg.2012.219;10.1109/tvcg.2020.3030462;10.1109/tvcg.2022.3209354;10.1109/tvcg.2019.2934593;10.1109/vast.2011.6102440;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467191;10.1109/tvcg.2022.3209470",
                "AuthorKeywords": "Data transformation,self-service data transformation,programming by example,declarative specification",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 610,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 171,
                "i": [
                    171
                ]
            }
        },
        {
            "name": "Abhraneel Sarma",
            "value": 19,
            "numPapers": 31,
            "cluster": "5",
            "visible": 1,
            "index": 707,
            "x": 252.98581964418995,
            "y": 82.1472766374966,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots",
                "DOI": "10.1109/tvcg.2022.3209348",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209348",
                "FirstPage": 602,
                "LastPage": 612,
                "PaperType": "J",
                "Abstract": "Most real-world datasets contain missing values yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduces additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, which is a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/",
                "AuthorNamesDeduped": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan A. Rossi;Fan Du;Eunyee Koh;Matthew Kay 0001",
                "AuthorNames": "Abhraneel Sarma;Shunan Guo;Jane Hoffswell;Ryan Rossi;Fan Du;Eunyee Koh;Matthew Kay",
                "AuthorAffiliation": "Northwestern University, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Adobe Research, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2021.3114813;10.1109/tvcg.2020.3029413;10.1109/tvcg.2011.175;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2012.279;10.1109/tvcg.2021.3114684;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864914",
                "AuthorKeywords": "Uncertainty visualisations,missing values,data imputation,multivariate data",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 533,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 172,
                "i": [
                    172
                ]
            }
        },
        {
            "name": "Sehi L'Yi",
            "value": 46,
            "numPapers": 67,
            "cluster": "5",
            "visible": 1,
            "index": 708,
            "x": -242.20454612186668,
            "y": 110.39455529101308,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Gosling: A Grammar-based Toolkit for Scalable and Interactive Genomics Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114876",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114876",
                "FirstPage": 140,
                "LastPage": 150,
                "PaperType": "J",
                "Abstract": "The combination of diverse data types and analysis tasks in genomics has resulted in the development of a wide range of visualization techniques and tools. However, most existing tools are tailored to a specific problem or data type and offer limited customization, making it challenging to optimize visualizations for new analysis tasks or datasets. To address this challenge, we designed Gosling-a grammar for interactive and scalable genomics data visualization. Gosling balances expressiveness for comprehensive multi-scale genomics data visualizations with accessibility for domain scientists. Our accompanying JavaScript toolkit called Gosling.js provides scalable and interactive rendering. Gosling.js is built on top of an existing platform for web-based genomics data visualization to further simplify the visualization of common genomics data formats. We demonstrate the expressiveness of the grammar through a variety of real-world examples. Furthermore, we show how Gosling supports the design of novel genomics visualizations. An online editor and examples of Gosling.js, its source code, and documentation are available at <uri>https://gosling.js.org</uri>.",
                "AuthorNamesDeduped": "Sehi L'Yi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorNames": "Sehi LYi;Qianwen Wang;Fritz Lekschas;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard Medical School, Boston, MA, USA;Harvard Medical School, Boston, MA, USA;Harvard School of Engineering and Applied Sciences, Boston, MA, USA;Harvard Medical School, Boston, MA, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2013.214;10.1109/tvcg.2018.2865141;10.1109/tvcg.2017.2745978;10.1109/tvcg.2013.179;10.1109/tvcg.2009.167;10.1109/tvcg.2010.163;10.1109/tvcg.2014.2346445;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2016.2598796;10.1109/tvcg.2020.3030372;10.1109/tvcg.2015.2467191;10.1109/tvcg.2019.2934555",
                "AuthorKeywords": "Genomics,declarative specification,visualization grammar",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 90,
                "DownloadsXplore": 1426,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 266,
                "i": [
                    266
                ]
            }
        },
        {
            "name": "Nicole Jardine",
            "value": 102,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 709,
            "x": 104.09702909671309,
            "y": -245.1811749160977,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Perceptual Proxies of Visual Comparison",
                "DOI": "10.1109/tvcg.2019.2934786",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934786",
                "FirstPage": 1012,
                "LastPage": 1021,
                "PaperType": "J",
                "Abstract": "Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., “biggest delta”, “biggest correlation”) varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the “biggest mean” and “biggest range” between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a “Mean length” proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a “Hull Area” proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison.",
                "AuthorNamesDeduped": "Nicole Jardine;Brian D. Ondov;Niklas Elmqvist;Steven Franconeri",
                "AuthorNames": "Nicole Jardine;Brian D. Ondov;Niklas Elmqvist;Steven Franconeri",
                "AuthorAffiliation": "Cook County Assessor's Office, Northwestern University, Chicago;National Institutes of Health, Bethesda, USA and University of Maryland, College Park, USA;University of Maryland, College Park, USA;Cook County Assessor's Office, Northwestern University, Evanston, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2466971;10.1109/tvcg.2017.2744199;10.1109/tvcg.2014.2346979;10.1109/tvcg.2010.162;10.1109/tvcg.2018.2864884;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Graphical perception,visual perception,visual comparison,crowdsourced evaluation",
                "AminerCitationCount": 27,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 1120,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 536,
                "i": [
                    536
                ]
            }
        },
        {
            "name": "Brian D. Ondov",
            "value": 106,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 710,
            "x": 88.92219366405382,
            "y": 251.2823978594054,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Perceptual Proxies of Visual Comparison",
                "DOI": "10.1109/tvcg.2019.2934786",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934786",
                "FirstPage": 1012,
                "LastPage": 1021,
                "PaperType": "J",
                "Abstract": "Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., “biggest delta”, “biggest correlation”) varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the “biggest mean” and “biggest range” between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a “Mean length” proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a “Hull Area” proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison.",
                "AuthorNamesDeduped": "Nicole Jardine;Brian D. Ondov;Niklas Elmqvist;Steven Franconeri",
                "AuthorNames": "Nicole Jardine;Brian D. Ondov;Niklas Elmqvist;Steven Franconeri",
                "AuthorAffiliation": "Cook County Assessor's Office, Northwestern University, Chicago;National Institutes of Health, Bethesda, USA and University of Maryland, College Park, USA;University of Maryland, College Park, USA;Cook County Assessor's Office, Northwestern University, Evanston, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2466971;10.1109/tvcg.2017.2744199;10.1109/tvcg.2014.2346979;10.1109/tvcg.2010.162;10.1109/tvcg.2018.2864884;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Graphical perception,visual perception,visual comparison,crowdsourced evaluation",
                "AminerCitationCount": 27,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 1120,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 536,
                "i": [
                    536
                ]
            }
        },
        {
            "name": "David H. Laidlaw",
            "value": 459,
            "numPapers": 71,
            "cluster": "11",
            "visible": 1,
            "index": 711,
            "x": -235.47281356265677,
            "y": -125.30983230731044,
            "vy": 0,
            "vx": 0,
            "r": 1.528497409326425,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Comparing 3D Vector field Visualization Methods: A User Study",
                "DOI": "10.1109/tvcg.2009.126",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.126",
                "FirstPage": 1219,
                "LastPage": 1226,
                "PaperType": "J",
                "Abstract": "In a user study comparing four visualization methods for three-dimensional vector data, participants used visualizations from each method to perform five simple but representative tasks: 1) determining whether a given point was a critical point, 2) determining the type of a critical point, 3) determining whether an integral curve would advect through two points, 4) determining whether swirling movement is present at a point, and 5) determining whether the vector field is moving faster at one point than another. The visualization methods were line and tube representations of integral curves with both monoscopic and stereoscopic viewing. While participants reported a preference for stereo lines, quantitative results showed performance among the tasks varied by method. Users performed all tasks better with methods that: 1) gave a clear representation with no perceived occlusion, 2) clearly visualized curve speed and direction information, and 3) provided fewer rich 3D cues (e.g., shading, polygonal arrows, overlap cues, and surface textures). These results provide quantitative support for anecdotal evidence on visualization methods. The tasks and testing framework also give a basis for comparing other visualization methods, for creating more effective methods, and for defining additional tasks to explore further the tradeoffs among the methods.",
                "AuthorNamesDeduped": "Andrew S. Forsberg;Jian Chen 0006;David H. Laidlaw",
                "AuthorNames": "Andrew Forsberg;Jian Chen;David Laidlaw",
                "AuthorAffiliation": "Computer Science Department, Brown University, USA;Computer Science Department, Brown University, USA;Computer Science Department, Brown University, USA",
                "InternalReferences": "0.1109/visual.1996.567777;10.1109/visual.2005.1532831;10.1109/visual.2004.59;10.1109/visual.2005.1532772;10.1109/tvcg.2009.141",
                "AuthorKeywords": "3D vector fields, visualization, user study, tubes, lines, stereoscopic and monoscopic viewing",
                "AminerCitationCount": 58,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 1012,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1930,
                "i": [
                    1930
                ]
            }
        },
        {
            "name": "Michelle X. Zhou",
            "value": 373,
            "numPapers": 45,
            "cluster": "1",
            "visible": 1,
            "index": 712,
            "x": 258.4573371082174,
            "y": -66.70685793027015,
            "vy": 0,
            "vx": 0,
            "r": 1.4294761082325849,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "An optimization-based approach to dynamic visual context management",
                "DOI": "10.1109/infvis.2005.1532146",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532146",
                "FirstPage": 187,
                "LastPage": 194,
                "PaperType": "C",
                "Abstract": "We are building an intelligent multimodal conversation system to aid users in exploring large and complex data sets. To tailor to diverse user queries introduced during a conversation, we automate the generation of system responses, including both spoken and visual outputs. In this paper, we focus on the problem of visual context management, a process that dynamically updates an existing visual display to effectively incorporate new information requested by subsequent user queries. Specifically, we develop an optimization based approach to visual context management. Compared to existing approaches, which normally handle predictable visual context updates, our work offers two unique contributions. First, we provide a general computational framework that can effectively manage a visual context for diverse, unanticipated situations encountered in a user system conversation. Moreover, we optimize the satisfaction of both semantic and visual constraints, which otherwise are difficult to balance using simple heuristics. Second, we present an extensible representation model that uses feature based metrics to uniformly define all constraints. We have applied our work to two different applications and our evaluation has shown the promise of this work.",
                "AuthorNamesDeduped": "Zhen Wen;Michelle X. Zhou;Vikram Aggarwal",
                "AuthorNames": "Zhen Wen;M.X. Zhou;V. Aggarwal",
                "AuthorAffiliation": "T.J. Watson Research Center, IBM, Hawthorne, NY, USA;T.J. Watson Research Center, IBM, Hawthorne, NY, USA;T.J. Watson Research Center, IBM, Hawthorne, NY, USA",
                "InternalReferences": "0.1109/infvis.2000.885091;10.1109/infvis.2000.885093;10.1109/infvis.1997.636718",
                "AuthorKeywords": "intelligent multimodal interfaces, visual context management, automated generation of visualization, visual momentum",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 201,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2358,
                "i": [
                    2358
                ]
            }
        },
        {
            "name": "Kuno Kurzhals",
            "value": 65,
            "numPapers": 26,
            "cluster": "6",
            "visible": 1,
            "index": 713,
            "x": -145.6206440850045,
            "y": 223.9299623013152,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visual Analytics for Mobile Eye Tracking",
                "DOI": "10.1109/tvcg.2016.2598695",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598695",
                "FirstPage": 301,
                "LastPage": 310,
                "PaperType": "J",
                "Abstract": "The analysis of eye tracking data often requires the annotation of areas of interest (AOIs) to derive semantic interpretations of human viewing behavior during experiments. This annotation is typically the most time-consuming step of the analysis process. Especially for data from wearable eye tracking glasses, every independently recorded video has to be annotated individually and corresponding AOIs between videos have to be identified. We provide a novel visual analytics approach to ease this annotation process by image-based, automatic clustering of eye tracking data integrated in an interactive labeling and analysis system. The annotation and analysis are tightly coupled by multiple linked views that allow for a direct interpretation of the labeled data in the context of the recorded video stimuli. The components of our analytics environment were developed with a user-centered design approach in close cooperation with an eye tracking expert. We demonstrate our approach with eye tracking data from a real experiment and compare it to an analysis of the data by manual annotation of dynamic AOIs. Furthermore, we conducted an expert user study with 6 external eye tracking researchers to collect feedback and identify analysis strategies they used while working with our application.",
                "AuthorNamesDeduped": "Kuno Kurzhals;Marcel Hlawatsch;Christof Seeger;Daniel Weiskopf",
                "AuthorNames": "Kuno Kurzhals;Marcel Hlawatsch;Christof Seeger;Daniel Weiskopf",
                "AuthorAffiliation": "University of Stuttgart;University of Stuttgart;Stuttgart Media University;University of Stuttgart",
                "InternalReferences": "0.1109/tvcg.2010.149;10.1109/tvcg.2015.2468091;10.1109/vast.2006.261433;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Eye tracking;visual analytics;video visualization",
                "AminerCitationCount": 65,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 3229,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 968,
                "i": [
                    968
                ]
            }
        },
        {
            "name": "Michael Burch",
            "value": 186,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 714,
            "x": -43.91712870592296,
            "y": -263.66889427125716,
            "vy": 0,
            "vx": 0,
            "r": 1.2141623488773747,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Parallel Edge Splatting for Scalable Dynamic Graph Visualization",
                "DOI": "10.1109/tvcg.2011.226",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.226",
                "FirstPage": 2344,
                "LastPage": 2353,
                "PaperType": "J",
                "Abstract": "We present a novel dynamic graph visualization technique based on node-link diagrams. The graphs are drawn side-byside from left to right as a sequence of narrow stripes that are placed perpendicular to the horizontal time line. The hierarchically organized vertices of the graphs are arranged on vertical, parallel lines that bound the stripes; directed edges connect these vertices from left to right. To address massive overplotting of edges in huge graphs, we employ a splatting approach that transforms the edges to a pixel-based scalar field. This field represents the edge densities in a scalable way and is depicted by non-linear color mapping. The visualization method is complemented by interaction techniques that support data exploration by aggregation, filtering, brushing, and selective data zooming. Furthermore, we formalize graph patterns so that they can be interactively highlighted on demand. A case study on software releases explores the evolution of call graphs extracted from the JUnit open source software project. In a second application, we demonstrate the scalability of our approach by applying it to a bibliography dataset containing more than 1.5 million paper titles from 60 years of research history producing a vast amount of relations between title words.",
                "AuthorNamesDeduped": "Michael Burch;Corinna Vehlow;Fabian Beck 0001;Stephan Diehl 0001;Daniel Weiskopf",
                "AuthorNames": "Michael Burch;Corinna Vehlow;Fabian Beck;Stephan Diehl;Daniel Weiskopf",
                "AuthorAffiliation": "VISUS, University of Stuttgart, Germany;VISUS, University of Stuttgart, Germany;Computer Science Department, University of Trier, Germany;Computer Science Department, University of Trier, Germany;VISUS, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/tvcg.2009.123;10.1109/tvcg.2008.131;10.1109/visual.1990.146402;10.1109/tvcg.2010.176;10.1109/infvis.2005.1532138;10.1109/tvcg.2006.147;10.1109/tvcg.2009.131;10.1109/infvis.1999.801866;10.1109/infvis.2002.1173160;10.1109/infvis.2004.68",
                "AuthorKeywords": "Dynamic graph visualization, graph splatting, software visualization, software evolution",
                "AminerCitationCount": 210,
                "CitationCountCrossRef": 132,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 2476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1540,
                "i": [
                    1540
                ]
            }
        },
        {
            "name": "Corinna Vehlow",
            "value": 130,
            "numPapers": 15,
            "cluster": "6",
            "visible": 1,
            "index": 715,
            "x": 210.636149441009,
            "y": 164.87089660902834,
            "vy": 0,
            "vx": 0,
            "r": 1.1496833621185953,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Parallel Edge Splatting for Scalable Dynamic Graph Visualization",
                "DOI": "10.1109/tvcg.2011.226",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.226",
                "FirstPage": 2344,
                "LastPage": 2353,
                "PaperType": "J",
                "Abstract": "We present a novel dynamic graph visualization technique based on node-link diagrams. The graphs are drawn side-byside from left to right as a sequence of narrow stripes that are placed perpendicular to the horizontal time line. The hierarchically organized vertices of the graphs are arranged on vertical, parallel lines that bound the stripes; directed edges connect these vertices from left to right. To address massive overplotting of edges in huge graphs, we employ a splatting approach that transforms the edges to a pixel-based scalar field. This field represents the edge densities in a scalable way and is depicted by non-linear color mapping. The visualization method is complemented by interaction techniques that support data exploration by aggregation, filtering, brushing, and selective data zooming. Furthermore, we formalize graph patterns so that they can be interactively highlighted on demand. A case study on software releases explores the evolution of call graphs extracted from the JUnit open source software project. In a second application, we demonstrate the scalability of our approach by applying it to a bibliography dataset containing more than 1.5 million paper titles from 60 years of research history producing a vast amount of relations between title words.",
                "AuthorNamesDeduped": "Michael Burch;Corinna Vehlow;Fabian Beck 0001;Stephan Diehl 0001;Daniel Weiskopf",
                "AuthorNames": "Michael Burch;Corinna Vehlow;Fabian Beck;Stephan Diehl;Daniel Weiskopf",
                "AuthorAffiliation": "VISUS, University of Stuttgart, Germany;VISUS, University of Stuttgart, Germany;Computer Science Department, University of Trier, Germany;Computer Science Department, University of Trier, Germany;VISUS, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/tvcg.2009.123;10.1109/tvcg.2008.131;10.1109/visual.1990.146402;10.1109/tvcg.2010.176;10.1109/infvis.2005.1532138;10.1109/tvcg.2006.147;10.1109/tvcg.2009.131;10.1109/infvis.1999.801866;10.1109/infvis.2002.1173160;10.1109/infvis.2004.68",
                "AuthorKeywords": "Dynamic graph visualization, graph splatting, software visualization, software evolution",
                "AminerCitationCount": 210,
                "CitationCountCrossRef": 132,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 2476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1540,
                "i": [
                    1540
                ]
            }
        },
        {
            "name": "Alex Bigelow",
            "value": 82,
            "numPapers": 46,
            "cluster": "5",
            "visible": 1,
            "index": 716,
            "x": -266.87152943614024,
            "y": 20.726475253050477,
            "vy": 0,
            "vx": 0,
            "r": 1.0944156591824985,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Iterating between Tools to Create and Edit Visualizations",
                "DOI": "10.1109/tvcg.2016.2598609",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598609",
                "FirstPage": 481,
                "LastPage": 490,
                "PaperType": "J",
                "Abstract": "A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.",
                "AuthorNamesDeduped": "Alex Bigelow;Steven Mark Drucker;Danyel Fisher;Miriah D. Meyer",
                "AuthorNames": "Alex Bigelow;Steven Drucker;Danyel Fisher;Miriah Meyer",
                "AuthorAffiliation": "University of Utah;Microsoft Research;Microsoft Research;University of Utah",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/tvcg.2015.2467191;10.1109/tvcg.2014.2346291;10.1109/tvcg.2015.2467091;10.1109/infvis.2004.12;10.1109/tvcg.2011.209;10.1109/tvcg.2007.70584;10.1109/tvcg.2011.185",
                "AuthorKeywords": "illustration;Visualization;iteration",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 1135,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 910,
                "i": [
                    910
                ]
            }
        },
        {
            "name": "Carlos Scheidegger",
            "value": 181,
            "numPapers": 89,
            "cluster": "5",
            "visible": 1,
            "index": 717,
            "x": 182.9097350569416,
            "y": -195.68860166447973,
            "vy": 0,
            "vx": 0,
            "r": 1.208405296488198,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Hashedcubes: Simple, Low Memory, Real-Time Visual Exploration of Big Data",
                "DOI": "10.1109/tvcg.2016.2598624",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598624",
                "FirstPage": 671,
                "LastPage": 680,
                "PaperType": "J",
                "Abstract": "We propose Hashedcubes, a data structure that enables real-time visual exploration of large datasets that improves the state of the art by virtue of its low memory requirements, low query latencies, and implementation simplicity. In some instances, Hashedcubes notably requires two orders of magnitude less space than recent data cube visualization proposals. In this paper, we describe the algorithms to build and query Hashedcubes, and how it can drive well-known interactive visualizations such as binned scatterplots, linked histograms and heatmaps. We report memory usage, build time and query latencies for a variety of synthetic and real-world datasets, and find that although sometimes Hashedcubes offers slightly slower querying times to the state of the art, the typical query is answered fast enough to easily sustain a interaction. In datasets with hundreds of millions of elements, only about 2% of the queries take longer than 40ms. Finally, we discuss the limitations of data structure, potential spacetime tradeoffs, and future research directions.",
                "AuthorNamesDeduped": "Cicero Augusto de Lara Pahins;Sean A. Stephens;Carlos Scheidegger;João Luiz Dihl Comba",
                "AuthorNames": "Cícero A. L. Pahins;Sean A. Stephens;Carlos Scheidegger;João L. D. Comba",
                "AuthorAffiliation": "Instituto de Informática, UFRGS;University of Arizona;University of Arizona;Instituto de Informática, UFRGS",
                "InternalReferences": "0.1109/tvcg.2013.179;10.1109/tvcg.2014.2346452;10.1109/tvcg.2014.2346574;10.1109/tvcg.2015.2467771",
                "AuthorKeywords": "Scalability;data cube;multidimensional data;interactive exploration",
                "AminerCitationCount": 116,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1839,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 896,
                "i": [
                    896
                ]
            }
        },
        {
            "name": "Katherine E. Isaacs",
            "value": 37,
            "numPapers": 32,
            "cluster": "5",
            "visible": 1,
            "index": 718,
            "x": -2.6880662676910267,
            "y": 268.03502438998623,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Traveler: Navigating Task Parallel Traces for Performance Analysis",
                "DOI": "10.1109/tvcg.2022.3209375",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209375",
                "FirstPage": 788,
                "LastPage": 797,
                "PaperType": "J",
                "Abstract": "Understanding the behavior of software in execution is a key step in identifying and fixing performance issues. This is especially important in high performance computing contexts where even minor performance tweaks can translate into large savings in terms of computational resource use. To aid performance analysis, developers may collect an execution trace—a chronological log of program activity during execution. As traces represent the full history, developers can discover a wide array of possibly previously unknown performance issues, making them an important artifact for exploratory performance analysis. However, interactive trace visualization is difficult due to issues of data size and complexity of meaning. Traces represent nanosecond-level events across many parallel processes, meaning the collected data is often large and difficult to explore. The rise of asynchronous task parallel programming paradigms complicates the relation between events and their probable cause. To address these challenges, we conduct a continuing design study in collaboration with high performance computing researchers. We develop diverse and hierarchical ways to navigate and represent execution trace data in support of their trace analysis tasks. Through an iterative design process, we developed Traveler, an integrated visualization platform for task parallel traces. Traveler provides multiple linked interfaces to help navigate trace data from multiple contexts. We evaluate the utility of Traveler through feedback from users and a case study, finding that integrating multiple modes of navigation in our design supported performance analysis tasks and led to the discovery of previously unknown behavior in a distributed array library.",
                "AuthorNamesDeduped": "Sayef Azad Sakin;Alex Bigelow;R. Tohid;Connor Scully-Allison;Carlos Scheidegger;Steven R. Brandt;Christopher Taylor;Kevin A. Huck;Hartmut Kaiser;Katherine E. Isaacs",
                "AuthorNames": "Sayef Azad Sakin;Alex Bigelow;R. Tohid;Connor Scully-Allison;Carlos Scheidegger;Steven R. Brandt;Christopher Taylor;Kevin A. Huck;Hartmut Kaiser;Katherine E. Isaacs",
                "AuthorAffiliation": "University of Arizona, USA;Stardog, USA;Louisiana State University, USA;University of Arizona, USA;RStudio, USA;Louisiana State University, USA;Tactical Computing Labs, USA;University of Arizona, USA;Louisiana State University, USA;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2019.2934790;10.1109/tvcg.2014.2346456;10.1109/tvcg.2009.196;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934285;10.1109/tvcg.2018.2865026;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "software visualization,parallel computing,traces,performance analysis,event sequence visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 437,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 182,
                "i": [
                    182
                ]
            }
        },
        {
            "name": "Melissa A. Schoenlein",
            "value": 0,
            "numPapers": 10,
            "cluster": "8",
            "visible": 1,
            "index": 719,
            "x": -179.19753359931548,
            "y": -199.59520022265613,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Unifying Effects of Direct and Relational Associations for Visual Communication",
                "DOI": "10.1109/tvcg.2022.3209443",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209443",
                "FirstPage": 385,
                "LastPage": 395,
                "PaperType": "J",
                "Abstract": "People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.",
                "AuthorNamesDeduped": "Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss",
                "AuthorNames": "Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss",
                "AuthorAffiliation": "Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA;Cognitive Science, University of California, Merced, USA;Philosophy, Centre for Vision Research, York University, USA;Mechanical and Industrial Engineering, Northeastern University, USA;Psychology, Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/visual.2002.1183788;10.1109/tvcg.2021.3114780;10.1109/tvcg.2016.2599106;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2020.3030434;10.1109/tvcg.2015.2467471;10.1109/tvcg.2019.2934284;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual reasoning,information visualization,colormap data visualizations,visual encoding,color cognition",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 372,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 184,
                "i": [
                    184
                ]
            }
        },
        {
            "name": "Karen B. Schloss",
            "value": 168,
            "numPapers": 46,
            "cluster": "8",
            "visible": 1,
            "index": 720,
            "x": 267.14475765795487,
            "y": 26.147245665128,
            "vy": 0,
            "vx": 0,
            "r": 1.1934369602763386,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Mapping Color to Meaning in Colormap Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865147",
                "FirstPage": 810,
                "LastPage": 819,
                "PaperType": "J",
                "Abstract": "To interpret data visualizations, people must determine how visual features map onto concepts. For example, to interpret colormaps, people must determine how dimensions of color (e.g., lightness, hue) map onto quantities of a given measure (e.g., brain activity, correlation magnitude). This process is easier when the encoded mappings in the visualization match people's predictions of how visual features will map onto concepts, their inferred mappings. To harness this principle in visualization design, it is necessary to understand what factors determine people's inferred mappings. In this study, we investigated how inferred color-quantity mappings for colormap data visualizations were influenced by the background color. Prior literature presents seemingly conflicting accounts of how the background color affects inferred color-quantity mappings. The present results help resolve those conflicts, demonstrating that sometimes the background has an effect and sometimes it does not, depending on whether the colormap appears to vary in opacity. When there is no apparent variation in opacity, participants infer that darker colors map to larger quantities (dark-is-more bias). As apparent variation in opacity increases, participants become biased toward inferring that more opaque colors map to larger quantities (opaque-is-more bias). These biases work together on light backgrounds and conflict on dark backgrounds. Under such conflicts, the opaque-is-more bias can negate, or even supersede the dark-is-more bias. The results suggest that if a design goal is to produce colormaps that match people's inferred mappings and are robust to changes in background color, it is beneficial to use colormaps that will not appear to vary in opacity on any background color, and to encode larger quantities in darker colors.",
                "AuthorNamesDeduped": "Karen B. Schloss;Connor Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorNames": "Karen B. Schloss;Connor C. Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorAffiliation": "University of Wisconsin Madison, Madison, WI, US;Brown University, Providence, RI, US;Brown University, Providence, RI, US;University of Wisconsin Madison, Madison, WI, US;California Institute of Technology, Pasadena, CA, US",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Visual Communication,Colormaps,Color Perception,Visual Encoding,Visual Design",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2713,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 655,
                "i": [
                    655
                ]
            }
        },
        {
            "name": "Johnny Campos",
            "value": 0,
            "numPapers": 10,
            "cluster": "8",
            "visible": 1,
            "index": 721,
            "x": -214.79534584424275,
            "y": 161.28533536453997,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Unifying Effects of Direct and Relational Associations for Visual Communication",
                "DOI": "10.1109/tvcg.2022.3209443",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209443",
                "FirstPage": 385,
                "LastPage": 395,
                "PaperType": "J",
                "Abstract": "People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.",
                "AuthorNamesDeduped": "Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss",
                "AuthorNames": "Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss",
                "AuthorAffiliation": "Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA;Cognitive Science, University of California, Merced, USA;Philosophy, Centre for Vision Research, York University, USA;Mechanical and Industrial Engineering, Northeastern University, USA;Psychology, Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/visual.2002.1183788;10.1109/tvcg.2021.3114780;10.1109/tvcg.2016.2599106;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2020.3030434;10.1109/tvcg.2015.2467471;10.1109/tvcg.2019.2934284;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual reasoning,information visualization,colormap data visualizations,visual encoding,color cognition",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 372,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 184,
                "i": [
                    184
                ]
            }
        },
        {
            "name": "Kevin J. Lande",
            "value": 0,
            "numPapers": 10,
            "cluster": "8",
            "visible": 1,
            "index": 722,
            "x": 49.470972357743115,
            "y": -264.20186012588823,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Unifying Effects of Direct and Relational Associations for Visual Communication",
                "DOI": "10.1109/tvcg.2022.3209443",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209443",
                "FirstPage": 385,
                "LastPage": 395,
                "PaperType": "J",
                "Abstract": "People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.",
                "AuthorNamesDeduped": "Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss",
                "AuthorNames": "Melissa A. Schoenlein;Johnny Campos;Kevin J. Lande;Laurent Lessard;Karen B. Schloss",
                "AuthorAffiliation": "Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA;Cognitive Science, University of California, Merced, USA;Philosophy, Centre for Vision Research, York University, USA;Mechanical and Industrial Engineering, Northeastern University, USA;Psychology, Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/visual.2002.1183788;10.1109/tvcg.2021.3114780;10.1109/tvcg.2016.2599106;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2020.3030434;10.1109/tvcg.2015.2467471;10.1109/tvcg.2019.2934284;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual reasoning,information visualization,colormap data visualizations,visual encoding,color cognition",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 372,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 184,
                "i": [
                    184
                ]
            }
        },
        {
            "name": "Laurent Lessard",
            "value": 50,
            "numPapers": 26,
            "cluster": "8",
            "visible": 1,
            "index": 723,
            "x": 142.08566408107504,
            "y": 228.38928184711278,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Semantic Discriminability for Visual Communication",
                "DOI": "10.1109/tvcg.2020.3030434",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030434",
                "FirstPage": 1022,
                "LastPage": 1031,
                "PaperType": "J",
                "Abstract": "To interpret information visualizations, observers must determine how visual features map onto concepts. First and foremost, this ability depends on perceptual discriminability; observers must be able to see the difference between different colors for those colors to communicate different meanings. However, the ability to interpret visualizations also depends on semantic discriminability, the degree to which observers can infer a unique mapping between visual features and concepts, based on the visual features and concepts alone (i.e., without help from verbal cues such as legends or labels). Previous evidence suggested that observers were better at interpreting encoding systems that maximized semantic discriminability (maximizing association strength between assigned colors and concepts while minimizing association strength between unassigned colors and concepts), compared to a system that only maximized color-concept association strength. However, increasing semantic discriminability also resulted in increased perceptual distance, so it is unclear which factor was responsible for improved performance. In the present study, we conducted two experiments that tested for independent effects of semantic distance and perceptual distance on semantic discriminability of bar graph data visualizations. Perceptual distance was large enough to ensure colors were more than just noticeably different. We found that increasing semantic distance improved performance, independent of variation in perceptual distance, and when these two factors were uncorrelated, responses were dominated by semantic distance. These results have implications for navigating trade-offs in color palette design optimization for visual communication.",
                "AuthorNamesDeduped": "Karen B. Schloss;Zachary Leggon;Laurent Lessard",
                "AuthorNames": "Karen B. Schloss;Zachary Leggon;Laurent Lessard",
                "AuthorAffiliation": "Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison;Zachary Leggon Biology and Wisconsin Institute for Discovery, University of Wisconsin-Madison;Mechanical and Industrial Engineering, Northeastern University",
                "InternalReferences": "0.1109/tvcg.2016.2598918;10.1109/tvcg.2014.2346983;10.1109/tvcg.2012.233;10.1109/visual.1996.568118;10.1109/infvis.2002.1173164;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2015.2467471;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Information Visualization,Visual Communication,Visual Encoding,Color Perception,Color Cognition",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 730,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 372,
                "i": [
                    372
                ]
            }
        },
        {
            "name": "Connor Gramazio",
            "value": 118,
            "numPapers": 21,
            "cluster": "8",
            "visible": 1,
            "index": 724,
            "x": -259.2232497226844,
            "y": -72.47969924889881,
            "vy": 0,
            "vx": 0,
            "r": 1.1358664363845712,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Mapping Color to Meaning in Colormap Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865147",
                "FirstPage": 810,
                "LastPage": 819,
                "PaperType": "J",
                "Abstract": "To interpret data visualizations, people must determine how visual features map onto concepts. For example, to interpret colormaps, people must determine how dimensions of color (e.g., lightness, hue) map onto quantities of a given measure (e.g., brain activity, correlation magnitude). This process is easier when the encoded mappings in the visualization match people's predictions of how visual features will map onto concepts, their inferred mappings. To harness this principle in visualization design, it is necessary to understand what factors determine people's inferred mappings. In this study, we investigated how inferred color-quantity mappings for colormap data visualizations were influenced by the background color. Prior literature presents seemingly conflicting accounts of how the background color affects inferred color-quantity mappings. The present results help resolve those conflicts, demonstrating that sometimes the background has an effect and sometimes it does not, depending on whether the colormap appears to vary in opacity. When there is no apparent variation in opacity, participants infer that darker colors map to larger quantities (dark-is-more bias). As apparent variation in opacity increases, participants become biased toward inferring that more opaque colors map to larger quantities (opaque-is-more bias). These biases work together on light backgrounds and conflict on dark backgrounds. Under such conflicts, the opaque-is-more bias can negate, or even supersede the dark-is-more bias. The results suggest that if a design goal is to produce colormaps that match people's inferred mappings and are robust to changes in background color, it is beneficial to use colormaps that will not appear to vary in opacity on any background color, and to encode larger quantities in darker colors.",
                "AuthorNamesDeduped": "Karen B. Schloss;Connor Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorNames": "Karen B. Schloss;Connor C. Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorAffiliation": "University of Wisconsin Madison, Madison, WI, US;Brown University, Providence, RI, US;Brown University, Providence, RI, US;University of Wisconsin Madison, Madison, WI, US;California Institute of Technology, Pasadena, CA, US",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Visual Communication,Colormaps,Color Perception,Visual Encoding,Visual Design",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2713,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 655,
                "i": [
                    655
                ]
            }
        },
        {
            "name": "Zachary Leggon",
            "value": 40,
            "numPapers": 12,
            "cluster": "8",
            "visible": 1,
            "index": 725,
            "x": 240.26813525641234,
            "y": -121.74244609176519,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Semantic Discriminability for Visual Communication",
                "DOI": "10.1109/tvcg.2020.3030434",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030434",
                "FirstPage": 1022,
                "LastPage": 1031,
                "PaperType": "J",
                "Abstract": "To interpret information visualizations, observers must determine how visual features map onto concepts. First and foremost, this ability depends on perceptual discriminability; observers must be able to see the difference between different colors for those colors to communicate different meanings. However, the ability to interpret visualizations also depends on semantic discriminability, the degree to which observers can infer a unique mapping between visual features and concepts, based on the visual features and concepts alone (i.e., without help from verbal cues such as legends or labels). Previous evidence suggested that observers were better at interpreting encoding systems that maximized semantic discriminability (maximizing association strength between assigned colors and concepts while minimizing association strength between unassigned colors and concepts), compared to a system that only maximized color-concept association strength. However, increasing semantic discriminability also resulted in increased perceptual distance, so it is unclear which factor was responsible for improved performance. In the present study, we conducted two experiments that tested for independent effects of semantic distance and perceptual distance on semantic discriminability of bar graph data visualizations. Perceptual distance was large enough to ensure colors were more than just noticeably different. We found that increasing semantic distance improved performance, independent of variation in perceptual distance, and when these two factors were uncorrelated, responses were dominated by semantic distance. These results have implications for navigating trade-offs in color palette design optimization for visual communication.",
                "AuthorNamesDeduped": "Karen B. Schloss;Zachary Leggon;Laurent Lessard",
                "AuthorNames": "Karen B. Schloss;Zachary Leggon;Laurent Lessard",
                "AuthorAffiliation": "Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison;Zachary Leggon Biology and Wisconsin Institute for Discovery, University of Wisconsin-Madison;Mechanical and Industrial Engineering, Northeastern University",
                "InternalReferences": "0.1109/tvcg.2016.2598918;10.1109/tvcg.2014.2346983;10.1109/tvcg.2012.233;10.1109/visual.1996.568118;10.1109/infvis.2002.1173164;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2015.2467471;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Information Visualization,Visual Communication,Visual Encoding,Color Perception,Color Cognition",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 730,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 372,
                "i": [
                    372
                ]
            }
        },
        {
            "name": "Allison T. Silverman",
            "value": 49,
            "numPapers": 4,
            "cluster": "8",
            "visible": 1,
            "index": 726,
            "x": -94.99580619284004,
            "y": 252.241544567449,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Mapping Color to Meaning in Colormap Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865147",
                "FirstPage": 810,
                "LastPage": 819,
                "PaperType": "J",
                "Abstract": "To interpret data visualizations, people must determine how visual features map onto concepts. For example, to interpret colormaps, people must determine how dimensions of color (e.g., lightness, hue) map onto quantities of a given measure (e.g., brain activity, correlation magnitude). This process is easier when the encoded mappings in the visualization match people's predictions of how visual features will map onto concepts, their inferred mappings. To harness this principle in visualization design, it is necessary to understand what factors determine people's inferred mappings. In this study, we investigated how inferred color-quantity mappings for colormap data visualizations were influenced by the background color. Prior literature presents seemingly conflicting accounts of how the background color affects inferred color-quantity mappings. The present results help resolve those conflicts, demonstrating that sometimes the background has an effect and sometimes it does not, depending on whether the colormap appears to vary in opacity. When there is no apparent variation in opacity, participants infer that darker colors map to larger quantities (dark-is-more bias). As apparent variation in opacity increases, participants become biased toward inferring that more opaque colors map to larger quantities (opaque-is-more bias). These biases work together on light backgrounds and conflict on dark backgrounds. Under such conflicts, the opaque-is-more bias can negate, or even supersede the dark-is-more bias. The results suggest that if a design goal is to produce colormaps that match people's inferred mappings and are robust to changes in background color, it is beneficial to use colormaps that will not appear to vary in opacity on any background color, and to encode larger quantities in darker colors.",
                "AuthorNamesDeduped": "Karen B. Schloss;Connor Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorNames": "Karen B. Schloss;Connor C. Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorAffiliation": "University of Wisconsin Madison, Madison, WI, US;Brown University, Providence, RI, US;Brown University, Providence, RI, US;University of Wisconsin Madison, Madison, WI, US;California Institute of Technology, Pasadena, CA, US",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Visual Communication,Colormaps,Color Perception,Visual Encoding,Visual Design",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2713,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 655,
                "i": [
                    655
                ]
            }
        },
        {
            "name": "Madeline L. Parker",
            "value": 49,
            "numPapers": 4,
            "cluster": "8",
            "visible": 1,
            "index": 727,
            "x": -100.40879734390907,
            "y": -250.33592114586713,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Mapping Color to Meaning in Colormap Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865147",
                "FirstPage": 810,
                "LastPage": 819,
                "PaperType": "J",
                "Abstract": "To interpret data visualizations, people must determine how visual features map onto concepts. For example, to interpret colormaps, people must determine how dimensions of color (e.g., lightness, hue) map onto quantities of a given measure (e.g., brain activity, correlation magnitude). This process is easier when the encoded mappings in the visualization match people's predictions of how visual features will map onto concepts, their inferred mappings. To harness this principle in visualization design, it is necessary to understand what factors determine people's inferred mappings. In this study, we investigated how inferred color-quantity mappings for colormap data visualizations were influenced by the background color. Prior literature presents seemingly conflicting accounts of how the background color affects inferred color-quantity mappings. The present results help resolve those conflicts, demonstrating that sometimes the background has an effect and sometimes it does not, depending on whether the colormap appears to vary in opacity. When there is no apparent variation in opacity, participants infer that darker colors map to larger quantities (dark-is-more bias). As apparent variation in opacity increases, participants become biased toward inferring that more opaque colors map to larger quantities (opaque-is-more bias). These biases work together on light backgrounds and conflict on dark backgrounds. Under such conflicts, the opaque-is-more bias can negate, or even supersede the dark-is-more bias. The results suggest that if a design goal is to produce colormaps that match people's inferred mappings and are robust to changes in background color, it is beneficial to use colormaps that will not appear to vary in opacity on any background color, and to encode larger quantities in darker colors.",
                "AuthorNamesDeduped": "Karen B. Schloss;Connor Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorNames": "Karen B. Schloss;Connor C. Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorAffiliation": "University of Wisconsin Madison, Madison, WI, US;Brown University, Providence, RI, US;Brown University, Providence, RI, US;University of Wisconsin Madison, Madison, WI, US;California Institute of Technology, Pasadena, CA, US",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Visual Communication,Colormaps,Color Perception,Visual Encoding,Visual Design",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2713,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 655,
                "i": [
                    655
                ]
            }
        },
        {
            "name": "Audrey S. Wang",
            "value": 49,
            "numPapers": 4,
            "cluster": "8",
            "visible": 1,
            "index": 728,
            "x": 243.30485492293906,
            "y": 116.84497238190255,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Mapping Color to Meaning in Colormap Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865147",
                "FirstPage": 810,
                "LastPage": 819,
                "PaperType": "J",
                "Abstract": "To interpret data visualizations, people must determine how visual features map onto concepts. For example, to interpret colormaps, people must determine how dimensions of color (e.g., lightness, hue) map onto quantities of a given measure (e.g., brain activity, correlation magnitude). This process is easier when the encoded mappings in the visualization match people's predictions of how visual features will map onto concepts, their inferred mappings. To harness this principle in visualization design, it is necessary to understand what factors determine people's inferred mappings. In this study, we investigated how inferred color-quantity mappings for colormap data visualizations were influenced by the background color. Prior literature presents seemingly conflicting accounts of how the background color affects inferred color-quantity mappings. The present results help resolve those conflicts, demonstrating that sometimes the background has an effect and sometimes it does not, depending on whether the colormap appears to vary in opacity. When there is no apparent variation in opacity, participants infer that darker colors map to larger quantities (dark-is-more bias). As apparent variation in opacity increases, participants become biased toward inferring that more opaque colors map to larger quantities (opaque-is-more bias). These biases work together on light backgrounds and conflict on dark backgrounds. Under such conflicts, the opaque-is-more bias can negate, or even supersede the dark-is-more bias. The results suggest that if a design goal is to produce colormaps that match people's inferred mappings and are robust to changes in background color, it is beneficial to use colormaps that will not appear to vary in opacity on any background color, and to encode larger quantities in darker colors.",
                "AuthorNamesDeduped": "Karen B. Schloss;Connor Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorNames": "Karen B. Schloss;Connor C. Gramazio;Allison T. Silverman;Madeline L. Parker;Audrey S. Wang",
                "AuthorAffiliation": "University of Wisconsin Madison, Madison, WI, US;Brown University, Providence, RI, US;Brown University, Providence, RI, US;University of Wisconsin Madison, Madison, WI, US;California Institute of Technology, Pasadena, CA, US",
                "InternalReferences": "0.1109/tvcg.2017.2743978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2017.2744359",
                "AuthorKeywords": "Visual Reasoning,Visual Communication,Colormaps,Color Perception,Visual Encoding,Visual Design",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 2713,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 655,
                "i": [
                    655
                ]
            }
        },
        {
            "name": "Maureen C. Stone",
            "value": 64,
            "numPapers": 1,
            "cluster": "8",
            "visible": 1,
            "index": 729,
            "x": -258.51031666429327,
            "y": 78.2458700387875,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "A Linguistic Approach to Categorical Color Assignment for Data Visualization",
                "DOI": "10.1109/tvcg.2015.2467471",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467471",
                "FirstPage": 698,
                "LastPage": 707,
                "PaperType": "J",
                "Abstract": "When data categories have strong color associations, it is useful to use these semantically meaningful concept-color associations in data visualizations. In this paper, we explore how linguistic information about the terms defining the data can be used to generate semantically meaningful colors. To do this effectively, we need first to establish that a term has a strong semantic color association, then discover which color or colors express it. Using co-occurrence measures of color name frequencies from Google n-grams, we define a measure for colorability that describes how strongly associated a given term is to any of a set of basic color terms. We then show how this colorability score can be used with additional semantic analysis to rank and retrieve a representative color from Google Images. Alternatively, we use symbolic relationships defined by WordNet to select identity colors for categories such as countries or brands. To create visually distinct color palettes, we use k-means clustering to create visually distinct sets, iteratively reassigning terms with multiple basic color associations as needed. This can be additionally constrained to use colors only in a predefined palette.",
                "AuthorNamesDeduped": "Vidya Setlur;Maureen C. Stone",
                "AuthorNames": "Vidya Setlur;Maureen C. Stone",
                "AuthorAffiliation": "Tableau Research;Tableau Research",
                "InternalReferences": null,
                "AuthorKeywords": "linguistics, natural language processing, semantics, color names, categorical color, Google n-grams, WordNet, XKCD",
                "AminerCitationCount": 80,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1909,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1012,
                "i": [
                    1012
                ]
            }
        },
        {
            "name": "Guozheng Li 0002",
            "value": 24,
            "numPapers": 32,
            "cluster": "4",
            "visible": 1,
            "index": 730,
            "x": 137.8575270424053,
            "y": -232.47645523311058,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data",
                "DOI": "10.1109/tvcg.2022.3209354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209354",
                "FirstPage": 139,
                "LastPage": 148,
                "PaperType": "J",
                "Abstract": "Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.",
                "AuthorNamesDeduped": "Guozheng Li 0002;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu 0002;Guoren Wang",
                "AuthorNames": "Guozheng Li;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu;Guoren Wang",
                "AuthorAffiliation": "Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Shenzhen University, China;Beijing Institute of Technology, China",
                "InternalReferences": "0.1109/tvcg.2022.3209385;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2011.250;10.1109/tvcg.2019.2934535;10.1109/tvcg.2017.2745298;10.1109/tvcg.2014.2346279;10.1109/tvcg.2021.3114773;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744458",
                "AuthorKeywords": "data transformation,tabular data,hierarchical tabular data,tabular visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 727,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 187,
                "i": [
                    187
                ]
            }
        },
        {
            "name": "Runfei Li",
            "value": 13,
            "numPapers": 9,
            "cluster": "4",
            "visible": 1,
            "index": 731,
            "x": 55.421634243335355,
            "y": 264.7233319105023,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data",
                "DOI": "10.1109/tvcg.2022.3209354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209354",
                "FirstPage": 139,
                "LastPage": 148,
                "PaperType": "J",
                "Abstract": "Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.",
                "AuthorNamesDeduped": "Guozheng Li 0002;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu 0002;Guoren Wang",
                "AuthorNames": "Guozheng Li;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu;Guoren Wang",
                "AuthorAffiliation": "Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Shenzhen University, China;Beijing Institute of Technology, China",
                "InternalReferences": "0.1109/tvcg.2022.3209385;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2011.250;10.1109/tvcg.2019.2934535;10.1109/tvcg.2017.2745298;10.1109/tvcg.2014.2346279;10.1109/tvcg.2021.3114773;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744458",
                "AuthorKeywords": "data transformation,tabular data,hierarchical tabular data,tabular visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 727,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 187,
                "i": [
                    187
                ]
            }
        },
        {
            "name": "Zicheng Wang",
            "value": 13,
            "numPapers": 9,
            "cluster": "4",
            "visible": 1,
            "index": 732,
            "x": -219.8343384514756,
            "y": -157.86976796588408,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data",
                "DOI": "10.1109/tvcg.2022.3209354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209354",
                "FirstPage": 139,
                "LastPage": 148,
                "PaperType": "J",
                "Abstract": "Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.",
                "AuthorNamesDeduped": "Guozheng Li 0002;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu 0002;Guoren Wang",
                "AuthorNames": "Guozheng Li;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu;Guoren Wang",
                "AuthorAffiliation": "Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Shenzhen University, China;Beijing Institute of Technology, China",
                "InternalReferences": "0.1109/tvcg.2022.3209385;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2011.250;10.1109/tvcg.2019.2934535;10.1109/tvcg.2017.2745298;10.1109/tvcg.2014.2346279;10.1109/tvcg.2021.3114773;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744458",
                "AuthorKeywords": "data transformation,tabular data,hierarchical tabular data,tabular visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 727,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 187,
                "i": [
                    187
                ]
            }
        },
        {
            "name": "Chi Harold Liu",
            "value": 13,
            "numPapers": 9,
            "cluster": "4",
            "visible": 1,
            "index": 733,
            "x": 268.9218722068678,
            "y": -32.109603684148034,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data",
                "DOI": "10.1109/tvcg.2022.3209354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209354",
                "FirstPage": 139,
                "LastPage": 148,
                "PaperType": "J",
                "Abstract": "Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.",
                "AuthorNamesDeduped": "Guozheng Li 0002;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu 0002;Guoren Wang",
                "AuthorNames": "Guozheng Li;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu;Guoren Wang",
                "AuthorAffiliation": "Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Shenzhen University, China;Beijing Institute of Technology, China",
                "InternalReferences": "0.1109/tvcg.2022.3209385;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2011.250;10.1109/tvcg.2019.2934535;10.1109/tvcg.2017.2745298;10.1109/tvcg.2014.2346279;10.1109/tvcg.2021.3114773;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744458",
                "AuthorKeywords": "data transformation,tabular data,hierarchical tabular data,tabular visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 727,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 187,
                "i": [
                    187
                ]
            }
        },
        {
            "name": "Guoren Wang",
            "value": 13,
            "numPapers": 9,
            "cluster": "4",
            "visible": 1,
            "index": 734,
            "x": -176.72523762276455,
            "y": 205.47065578125114,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data",
                "DOI": "10.1109/tvcg.2022.3209354",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209354",
                "FirstPage": 139,
                "LastPage": 148,
                "PaperType": "J",
                "Abstract": "Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.",
                "AuthorNamesDeduped": "Guozheng Li 0002;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu 0002;Guoren Wang",
                "AuthorNames": "Guozheng Li;Runfei Li;Zicheng Wang;Chi Harold Liu;Min Lu;Guoren Wang",
                "AuthorAffiliation": "Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Beijing Institute of Technology, China;Shenzhen University, China;Beijing Institute of Technology, China",
                "InternalReferences": "0.1109/tvcg.2022.3209385;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2011.250;10.1109/tvcg.2019.2934535;10.1109/tvcg.2017.2745298;10.1109/tvcg.2014.2346279;10.1109/tvcg.2021.3114773;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744458",
                "AuthorKeywords": "data transformation,tabular data,hierarchical tabular data,tabular visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 727,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 187,
                "i": [
                    187
                ]
            }
        },
        {
            "name": "Houda Lamqaddam",
            "value": 6,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 735,
            "x": -8.487515399213038,
            "y": -271.0681871455006,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Communicating Uncertainty in Digital Humanities Visualization Research",
                "DOI": "10.1109/tvcg.2022.3209436",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209436",
                "FirstPage": 635,
                "LastPage": 645,
                "PaperType": "J",
                "Abstract": "Due to their historical nature, humanistic data encompass multiple sources of uncertainty. While humanists are accustomed to handling such uncertainty with their established methods, they are cautious of visualizations that appear overly objective and fail to communicate this uncertainty. To design more trustworthy visualizations for humanistic research, therefore, a deeper understanding of its relation to uncertainty is needed. We systematically reviewed 126 publications from digital humanities literature that use visualization as part of their research process, and examined how uncertainty was handled and represented in their visualizations. Crossing these dimensions with the visualization type and use, we identified that uncertainty originated from multiple steps in the research process from the source artifacts to their datafication. We also noted how besides known uncertainty coping strategies, such as excluding data and evaluating its effects, humanists also embraced uncertainty as a separate dimension important to retain. By mapping how the visualizations encoded uncertainty, we identified four approaches that varied in terms of explicitness and customization. This work contributes with two empirical taxonomies of uncertainty and it's corresponding coping strategies, as well as with the foundation of a research agenda for uncertainty visualization in the digital humanities. Our findings further the synergy among humanists and visualization researchers, and ultimately contribute to the development of more trustworthy, uncertainty-aware visualizations.",
                "AuthorNamesDeduped": "Georgia Panagiotidou;Houda Lamqaddam;Jeroen Poblome;Koenraad Brosens;Katrien Verbert;Andrew Vande Moere",
                "AuthorNames": "Georgia Panagiotidou;Houda Lamqaddam;Jeroen Poblome;Koenraad Brosens;Katrien Verbert;Andrew Vande Moere",
                "AuthorAffiliation": "KU Leuven & UCL Interaction Center, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2014.2346298;10.1109/tvcg.2015.2467452;10.1109/tvcg.2019.2934287;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865241;10.1109/tvcg.2012.279;10.1109/tvcg.2017.2744459;10.1109/tvcg.2018.2864913;10.1109/tvcg.2015.2467591;10.1109/tvcg.2018.2864914;10.1109/tvcg.2018.2864889;10.1109/tvcg.2018.2865193",
                "AuthorKeywords": "digital humanities,data visualization,uncertainty,review",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 125,
                "DownloadsXplore": 688,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 188,
                "i": [
                    188
                ]
            }
        },
        {
            "name": "Koenraad Brosens",
            "value": 6,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 736,
            "x": 189.491045840852,
            "y": 194.27594690578692,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Communicating Uncertainty in Digital Humanities Visualization Research",
                "DOI": "10.1109/tvcg.2022.3209436",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209436",
                "FirstPage": 635,
                "LastPage": 645,
                "PaperType": "J",
                "Abstract": "Due to their historical nature, humanistic data encompass multiple sources of uncertainty. While humanists are accustomed to handling such uncertainty with their established methods, they are cautious of visualizations that appear overly objective and fail to communicate this uncertainty. To design more trustworthy visualizations for humanistic research, therefore, a deeper understanding of its relation to uncertainty is needed. We systematically reviewed 126 publications from digital humanities literature that use visualization as part of their research process, and examined how uncertainty was handled and represented in their visualizations. Crossing these dimensions with the visualization type and use, we identified that uncertainty originated from multiple steps in the research process from the source artifacts to their datafication. We also noted how besides known uncertainty coping strategies, such as excluding data and evaluating its effects, humanists also embraced uncertainty as a separate dimension important to retain. By mapping how the visualizations encoded uncertainty, we identified four approaches that varied in terms of explicitness and customization. This work contributes with two empirical taxonomies of uncertainty and it's corresponding coping strategies, as well as with the foundation of a research agenda for uncertainty visualization in the digital humanities. Our findings further the synergy among humanists and visualization researchers, and ultimately contribute to the development of more trustworthy, uncertainty-aware visualizations.",
                "AuthorNamesDeduped": "Georgia Panagiotidou;Houda Lamqaddam;Jeroen Poblome;Koenraad Brosens;Katrien Verbert;Andrew Vande Moere",
                "AuthorNames": "Georgia Panagiotidou;Houda Lamqaddam;Jeroen Poblome;Koenraad Brosens;Katrien Verbert;Andrew Vande Moere",
                "AuthorAffiliation": "KU Leuven & UCL Interaction Center, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2014.2346298;10.1109/tvcg.2015.2467452;10.1109/tvcg.2019.2934287;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865241;10.1109/tvcg.2012.279;10.1109/tvcg.2017.2744459;10.1109/tvcg.2018.2864913;10.1109/tvcg.2015.2467591;10.1109/tvcg.2018.2864914;10.1109/tvcg.2018.2864889;10.1109/tvcg.2018.2865193",
                "AuthorKeywords": "digital humanities,data visualization,uncertainty,review",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 125,
                "DownloadsXplore": 688,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 188,
                "i": [
                    188
                ]
            }
        },
        {
            "name": "Katrien Verbert",
            "value": 6,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 737,
            "x": -271.14020264253804,
            "y": -15.264026695581762,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Communicating Uncertainty in Digital Humanities Visualization Research",
                "DOI": "10.1109/tvcg.2022.3209436",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209436",
                "FirstPage": 635,
                "LastPage": 645,
                "PaperType": "J",
                "Abstract": "Due to their historical nature, humanistic data encompass multiple sources of uncertainty. While humanists are accustomed to handling such uncertainty with their established methods, they are cautious of visualizations that appear overly objective and fail to communicate this uncertainty. To design more trustworthy visualizations for humanistic research, therefore, a deeper understanding of its relation to uncertainty is needed. We systematically reviewed 126 publications from digital humanities literature that use visualization as part of their research process, and examined how uncertainty was handled and represented in their visualizations. Crossing these dimensions with the visualization type and use, we identified that uncertainty originated from multiple steps in the research process from the source artifacts to their datafication. We also noted how besides known uncertainty coping strategies, such as excluding data and evaluating its effects, humanists also embraced uncertainty as a separate dimension important to retain. By mapping how the visualizations encoded uncertainty, we identified four approaches that varied in terms of explicitness and customization. This work contributes with two empirical taxonomies of uncertainty and it's corresponding coping strategies, as well as with the foundation of a research agenda for uncertainty visualization in the digital humanities. Our findings further the synergy among humanists and visualization researchers, and ultimately contribute to the development of more trustworthy, uncertainty-aware visualizations.",
                "AuthorNamesDeduped": "Georgia Panagiotidou;Houda Lamqaddam;Jeroen Poblome;Koenraad Brosens;Katrien Verbert;Andrew Vande Moere",
                "AuthorNames": "Georgia Panagiotidou;Houda Lamqaddam;Jeroen Poblome;Koenraad Brosens;Katrien Verbert;Andrew Vande Moere",
                "AuthorAffiliation": "KU Leuven & UCL Interaction Center, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium;KU Leuven, Belgium",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2014.2346298;10.1109/tvcg.2015.2467452;10.1109/tvcg.2019.2934287;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865241;10.1109/tvcg.2012.279;10.1109/tvcg.2017.2744459;10.1109/tvcg.2018.2864913;10.1109/tvcg.2015.2467591;10.1109/tvcg.2018.2864914;10.1109/tvcg.2018.2864889;10.1109/tvcg.2018.2865193",
                "AuthorKeywords": "digital humanities,data visualization,uncertainty,review",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 125,
                "DownloadsXplore": 688,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 188,
                "i": [
                    188
                ]
            }
        },
        {
            "name": "Andrew Vande Moere",
            "value": 69,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 738,
            "x": 210.38353692835454,
            "y": -172.013858132174,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Evaluating the Effect of Style in Information Visualization",
                "DOI": "10.1109/tvcg.2012.221",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.221",
                "FirstPage": 2739,
                "LastPage": 2748,
                "PaperType": "J",
                "Abstract": "This paper reports on a between-subject, comparative online study of three information visualization demonstrators that each displayed the same dataset by way of an identical scatterplot technique, yet were different in style in terms of visual and interactive embellishment. We validated stylistic adherence and integrity through a separate experiment in which a small cohort of participants assigned our three demonstrators to predefined groups of stylistic examples, after which they described the styles with their own words. From the online study, we discovered significant differences in how participants execute specific interaction operations, and the types of insights that followed from them. However, in spite of significant differences in apparent usability, enjoyability and usefulness between the style demonstrators, no variation was found on the self-reported depth, expert-rated depth, confidence or difficulty of the resulting insights. Three different methods of insight analysis have been applied, revealing how style impacts the creation of insights, ranging from higher-level pattern seeking to a more reflective and interpretative engagement with content, which is what underlies the patterns. As this study only forms the first step in determining how the impact of style in information visualization could be best evaluated, we propose several guidelines and tips on how to gather, compare and categorize insights through an online evaluation study, particularly in terms of analyzing the concise, yet wide variety of insights and observations in a trustworthy and reproducable manner.",
                "AuthorNamesDeduped": "Andrew Vande Moere;Martin Tomitsch;Christoph Wimmer;Christoph M. Bösch;Thomas Grechenig",
                "AuthorNames": "Andrew Vande Moere;Martin Tomitsch;Christoph Wimmer;Boesch Christoph;Thomas Grechenig",
                "AuthorAffiliation": "KU Leuven, Belgium;University of Sydney, Australia;TU Wein, Austria and T. U. Wien;TU Wien;TU Wein, Austria and T. U. Wien",
                "InternalReferences": "0.1109/tvcg.2007.70541;10.1109/tvcg.2007.70577;10.1109/tvcg.2009.122",
                "AuthorKeywords": "Visualization, design, style, aesthetics, evaluation, online study, user experience",
                "AminerCitationCount": 135,
                "CitationCountCrossRef": 59,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 2407,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1412,
                "i": [
                    1412
                ]
            }
        },
        {
            "name": "Michelle A. Borkin",
            "value": 126,
            "numPapers": 59,
            "cluster": "5",
            "visible": 1,
            "index": 739,
            "x": -38.962933963132365,
            "y": 269.13173312893554,
            "vy": 0,
            "vx": 0,
            "r": 1.145077720207254,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Beyond Memorability: Visualization Recognition and Recall",
                "DOI": "10.1109/tvcg.2015.2467732",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467732",
                "FirstPage": 519,
                "LastPage": 528,
                "PaperType": "J",
                "Abstract": "In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.",
                "AuthorNamesDeduped": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorNames": "Michelle A. Borkin;Zoya Bylinskii;Nam Wook Kim;Constance May Bainbridge;Chelsea S. Yeh;Daniel Borkin;Hanspeter Pfister;Aude Oliva",
                "AuthorAffiliation": "University of British Columbia, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT);School of Engineering & Applied Sciences, Harvard University;University of Michigan;School of Engineering & Applied Sciences, Harvard University;Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)",
                "InternalReferences": "0.1109/tvcg.2012.197;10.1109/tvcg.2013.234;10.1109/tvcg.2011.193;10.1109/tvcg.2012.233;10.1109/tvcg.2011.175;10.1109/tvcg.2013.234;10.1109/tvcg.2012.215;10.1109/vast.2010.5653598;10.1109/tvcg.2012.245;10.1109/tvcg.2012.221",
                "AuthorKeywords": "Information visualization, memorability, recognition, recall, eye-tracking study",
                "AminerCitationCount": 295,
                "CitationCountCrossRef": 188,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 5067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1004,
                "i": [
                    1004
                ]
            }
        },
        {
            "name": "Daniel F. Keefe",
            "value": 223,
            "numPapers": 67,
            "cluster": "6",
            "visible": 1,
            "index": 740,
            "x": -153.1692765175911,
            "y": -224.92036975578202,
            "vy": 0,
            "vx": 0,
            "r": 1.2567645365572826,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "A Lightweight Tangible 3D Interface for Interactive Visualization of Thin fiber Structures",
                "DOI": "10.1109/tvcg.2013.121",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.121",
                "FirstPage": 2802,
                "LastPage": 2809,
                "PaperType": "J",
                "Abstract": "We present a prop-based, tangible interface for 3D interactive visualization of thin fiber structures. These data are commonly found in current bioimaging datasets, for example second-harmonic generation microscopy of collagen fibers in tissue. Our approach uses commodity visualization technologies such as a depth sensing camera and low-cost 3D display. Unlike most current uses of these emerging technologies in the games and graphics communities, we employ the depth sensing camera to create a fish-tank sterePoscopic virtual reality system at the scientist's desk that supports tracking of small-scale gestures with objects already found in the work space. We apply the new interface to the problem of interactive exploratory visualization of three-dimensional thin fiber data. A critical task for the visual analysis of these data is understanding patterns in fiber orientation throughout a volume.The interface enables a new, fluid style of data exploration and fiber orientation analysis by using props to provide needed passive-haptic feedback, making 3D interactions with these fiber structures more controlled. We also contribute a low-level algorithm for extracting fiber centerlines from volumetric imaging. The system was designed and evaluated with two biophotonic experts who currently use it in their lab. As compared to typical practice within their field, the new visualization system provides a more effective way to examine and understand the 3D bioimaging datasets they collect.",
                "AuthorNamesDeduped": "Bret Jackson;Tung Yuen Lau;David Schroeder;Kimani C. Toussaint;Daniel F. Keefe",
                "AuthorNames": "Bret Jackson;Tung Yuen Lau;David Schroeder;Kimani C. Toussaint;Daniel F. Keefe",
                "AuthorAffiliation": "University of Minnesota, USA;University of Illinois, Urbana-Champaign, USA;University of Minnesota, USA;University of Illinois, Urbana-Champaign, USA;University of Minnesota, USA",
                "InternalReferences": "0.1109/tvcg.2009.138;10.1109/visual.2005.1532846;10.1109/visual.2002.1183753;10.1109/visual.1997.663912",
                "AuthorKeywords": "Scientific visualization, 3D interaction, tangible interaction, microscopy visualization",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 1382,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1342,
                "i": [
                    1342
                ]
            }
        },
        {
            "name": "Caitlyn M. McColeman",
            "value": 42,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 741,
            "x": 265.0525719235261,
            "y": 62.42703033721871,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Rethinking the Ranks of Visual Channels",
                "DOI": "10.1109/tvcg.2021.3114684",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114684",
                "FirstPage": 707,
                "LastPage": 717,
                "PaperType": "J",
                "Abstract": "Data can be visually represented using visual channels like position, length or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations, compared to seeing, remembering, and comparing trends and motifs, across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or ‘wind map’ (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4 or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors), or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful, to thousands).",
                "AuthorNamesDeduped": "Caitlyn M. McColeman;Fumeng Yang;Timothy F. Brady;Steven Franconeri",
                "AuthorNames": "Caitlyn M. McColeman;Fumeng Yang;Timothy F. Brady;Steven Franconeri",
                "AuthorAffiliation": "Northwestern University, USA;Brown University, USA;University of San Diego, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467732;10.1109/tvcg.2020.3030422;10.1109/tvcg.2010.132;10.1109/tvcg.2014.2346979;10.1109/tvcg.2019.2934786;10.1109/tvcg.2020.3030335;10.1109/tvcg.2015.2467671;10.1109/tvcg.2020.3030345;10.1109/tvcg.2018.2865240;10.1109/tvcg.2019.2934801;10.1109/tvcg.2018.2864884;10.1109/tvcg.2020.3030429;10.1109/tvcg.2015.2467758;10.1109/tvcg.2020.3030421;10.1109/tvcg.2018.2865264;10.1109/tvcg.2017.2744359;10.1109/tvcg.2014.2346320",
                "AuthorKeywords": "DataType Agnostic,Human-Subjects Quantitative Studies,Perception & Cognition,Charts, Diagrams, and Plots",
                "AminerCitationCount": 10,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 87,
                "DownloadsXplore": 878,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 291,
                "i": [
                    291
                ]
            }
        },
        {
            "name": "Cristina R. Ceja",
            "value": 38,
            "numPapers": 6,
            "cluster": "5",
            "visible": 1,
            "index": 742,
            "x": -237.77053947183512,
            "y": 133.09834919814935,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull",
                "DOI": "10.1109/tvcg.2019.2934400",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934400",
                "FirstPage": 301,
                "LastPage": 310,
                "PaperType": "J",
                "Abstract": "In visual depictions of data, position (i.e., the vertical height of a line or a bar) is believed to be the most precise way to encode information compared to other encodings (e.g., hue). Not only are other encodings less precise than position, but they can also be prone to systematic biases (e.g., color category boundaries can distort perceived differences between hues). By comparison, position's high level of precision may seem to protect it from such biases. In contrast, across three empirical studies, we show that while position may be a precise form of data encoding, it can also produce systematic biases in how values are visually encoded, at least for reports of average position across a short delay. In displays with a single line or a single set of bars, reports of average positions were significantly biased, such that line positions were underestimated and bar positions were overestimated. In displays with multiple data series (i.e., multiple lines and/or sets of bars), this systematic bias still persisted. We also observed an effect of “perceptual pull”, where the average position estimate for each series was ‘pulled’ toward the other. These findings suggest that, although position may still be the most precise form of visual data encoding, it can also be systematically biased.",
                "AuthorNamesDeduped": "Cindy Xiong;Cristina R. Ceja;Casimir J. H. Ludwig;Steven Franconeri",
                "AuthorNames": "Cindy Xiong;Cristina R. Ceja;Casimir J.H. Ludwig;Steven Franconeri",
                "AuthorAffiliation": "Northwestern University;Northwestern University;University of Bristol;Northwestern University",
                "InternalReferences": "0.1109/tvcg.2017.2744138",
                "AuthorKeywords": "Perceptual biases,perception and cognition,cue combination,bar graphs,line graphs,position estimation",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 896,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 538,
                "i": [
                    538
                ]
            }
        },
        {
            "name": "Casimir J. H. Ludwig",
            "value": 30,
            "numPapers": 0,
            "cluster": "5",
            "visible": 1,
            "index": 743,
            "x": 85.47545401768004,
            "y": -258.92845876896473,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull",
                "DOI": "10.1109/tvcg.2019.2934400",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934400",
                "FirstPage": 301,
                "LastPage": 310,
                "PaperType": "J",
                "Abstract": "In visual depictions of data, position (i.e., the vertical height of a line or a bar) is believed to be the most precise way to encode information compared to other encodings (e.g., hue). Not only are other encodings less precise than position, but they can also be prone to systematic biases (e.g., color category boundaries can distort perceived differences between hues). By comparison, position's high level of precision may seem to protect it from such biases. In contrast, across three empirical studies, we show that while position may be a precise form of data encoding, it can also produce systematic biases in how values are visually encoded, at least for reports of average position across a short delay. In displays with a single line or a single set of bars, reports of average positions were significantly biased, such that line positions were underestimated and bar positions were overestimated. In displays with multiple data series (i.e., multiple lines and/or sets of bars), this systematic bias still persisted. We also observed an effect of “perceptual pull”, where the average position estimate for each series was ‘pulled’ toward the other. These findings suggest that, although position may still be the most precise form of visual data encoding, it can also be systematically biased.",
                "AuthorNamesDeduped": "Cindy Xiong;Cristina R. Ceja;Casimir J. H. Ludwig;Steven Franconeri",
                "AuthorNames": "Cindy Xiong;Cristina R. Ceja;Casimir J.H. Ludwig;Steven Franconeri",
                "AuthorAffiliation": "Northwestern University;Northwestern University;University of Bristol;Northwestern University",
                "InternalReferences": "0.1109/tvcg.2017.2744138",
                "AuthorKeywords": "Perceptual biases,perception and cognition,cue combination,bar graphs,line graphs,position estimation",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 896,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 538,
                "i": [
                    538
                ]
            }
        },
        {
            "name": "Ji Ma",
            "value": 6,
            "numPapers": 15,
            "cluster": "3",
            "visible": 1,
            "index": 744,
            "x": 111.95193242389911,
            "y": 248.83079557513526,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Tac-Trainer: A Visual Analytics System for IoT-based Racket Sports Training",
                "DOI": "10.1109/tvcg.2022.3209352",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209352",
                "FirstPage": 951,
                "LastPage": 961,
                "PaperType": "J",
                "Abstract": "Conventional racket sports training highly relies on coaches' knowledge and experience, leading to biases in the guidance. To solve this problem, smart wearable devices based on Internet of Things technology (IoT) have been extensively investigated to support data-driven training. Considerable studies introduced methods to extract valuable information from the sensor data collected by IoT devices. However, the information cannot provide actionable insights for coaches due to the large data volume and high data dimensions. We proposed an IoT + VA framework, Tac-Trainer, to integrate the sensor data, the information, and coaches' knowledge to facilitate racket sports training. Tac-Trainer consists of four components: device configuration, data interpretation, training optimization, and result visualization. These components collect trainees' kinematic data through IoT devices, transform the data into attributes and indicators, generate training suggestions, and provide an interactive visualization interface for exploration, respectively. We further discuss new research opportunities and challenges inspired by our work from two perspectives, VA for IoT and IoT for VA.",
                "AuthorNamesDeduped": "Jiachen Wang;Ji Ma;Kangping Hu;Zheng Zhou;Hui Zhang 0051;Xiao Xie;Yingcai Wu",
                "AuthorNames": "Jiachen Wang;Ji Ma;Kangping Hu;Zheng Zhou;Hui Zhang;Xiao Xie;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2013.178;10.1109/tvcg.2019.2934280;10.1109/tvcg.2021.3114806;10.1109/tvcg.2020.3030342;10.1109/tvcg.2021.3114861;10.1109/tvcg.2015.2468292;10.1109/tvcg.2011.208;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934630;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2012.263",
                "AuthorKeywords": "IoT,racket sports,training,sensor data,visual analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 944,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 194,
                "i": [
                    194
                ]
            }
        },
        {
            "name": "Kangping Hu",
            "value": 6,
            "numPapers": 15,
            "cluster": "3",
            "visible": 1,
            "index": 745,
            "x": -250.80092445631817,
            "y": -107.93005277426764,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Tac-Trainer: A Visual Analytics System for IoT-based Racket Sports Training",
                "DOI": "10.1109/tvcg.2022.3209352",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209352",
                "FirstPage": 951,
                "LastPage": 961,
                "PaperType": "J",
                "Abstract": "Conventional racket sports training highly relies on coaches' knowledge and experience, leading to biases in the guidance. To solve this problem, smart wearable devices based on Internet of Things technology (IoT) have been extensively investigated to support data-driven training. Considerable studies introduced methods to extract valuable information from the sensor data collected by IoT devices. However, the information cannot provide actionable insights for coaches due to the large data volume and high data dimensions. We proposed an IoT + VA framework, Tac-Trainer, to integrate the sensor data, the information, and coaches' knowledge to facilitate racket sports training. Tac-Trainer consists of four components: device configuration, data interpretation, training optimization, and result visualization. These components collect trainees' kinematic data through IoT devices, transform the data into attributes and indicators, generate training suggestions, and provide an interactive visualization interface for exploration, respectively. We further discuss new research opportunities and challenges inspired by our work from two perspectives, VA for IoT and IoT for VA.",
                "AuthorNamesDeduped": "Jiachen Wang;Ji Ma;Kangping Hu;Zheng Zhou;Hui Zhang 0051;Xiao Xie;Yingcai Wu",
                "AuthorNames": "Jiachen Wang;Ji Ma;Kangping Hu;Zheng Zhou;Hui Zhang;Xiao Xie;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2013.178;10.1109/tvcg.2019.2934280;10.1109/tvcg.2021.3114806;10.1109/tvcg.2020.3030342;10.1109/tvcg.2021.3114861;10.1109/tvcg.2015.2468292;10.1109/tvcg.2011.208;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934630;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2012.263",
                "AuthorKeywords": "IoT,racket sports,training,sensor data,visual analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 91,
                "DownloadsXplore": 944,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 194,
                "i": [
                    194
                ]
            }
        },
        {
            "name": "Kasper Hornbæk",
            "value": 50,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 746,
            "x": 258.0113713555275,
            "y": -89.88955585183447,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Investigating the Use of a Dynamic Physical Bar Chart for Data Exploration and Presentation",
                "DOI": "10.1109/tvcg.2016.2598498",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598498",
                "FirstPage": 451,
                "LastPage": 460,
                "PaperType": "J",
                "Abstract": "Physical data representations, or data physicalizations, are a promising new medium to represent and communicate data. Previous work mostly studied passive physicalizations which require humans to perform all interactions manually. Dynamic shape-changing displays address this limitation and facilitate data exploration tasks such as sorting, navigating in data sets which exceed the fixed size of a given physical display, or preparing “views” to communicate insights about data. However, it is currently unclear how people approach and interact with such data representations. We ran an exploratory study to investigate how non-experts made use of a dynamic physical bar chart for an open-ended data exploration and presentation task. We asked 16 participants to explore a data set on European values and to prepare a short presentation of their insights using a physical display. We analyze: (1) users' body movements to understand how they approach and react to the physicalization, (2) their hand-gestures to understand how they interact with physical data, (3) system interactions to understand which subsets of the data they explored and which features they used in the process, and (4) strategies used to explore the data and present observations. We discuss the implications of our findings for the use of dynamic data physicalizations and avenues for future work.",
                "AuthorNamesDeduped": "Faisal Taher;Yvonne Jansen;Jonathan Woodruff;John Hardy;Kasper Hornbæk;Jason Alexander",
                "AuthorNames": "Faisal Taher;Yvonne Jansen;Jonathan Woodruff;John Hardy;Kasper Hornbæk;Jason Alexander",
                "AuthorAffiliation": "Lancaster University;University of Copenhagen;Lancaster University;Lancaster University;University of Copenhagen;Lancaster University",
                "InternalReferences": "0.1109/tvcg.2014.2346292;10.1109/tvcg.2014.2352953;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467951",
                "AuthorKeywords": "Shape-changing displays;physicalization;physical visualization;bar charts;user behaviour;data presentation",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1909,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 906,
                "i": [
                    906
                ]
            }
        },
        {
            "name": "Stephen Ingram",
            "value": 103,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 747,
            "x": -129.6167620859157,
            "y": 240.72701341221165,
            "vy": 0,
            "vx": 0,
            "r": 1.1185952792170408,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Overview: The Design, Adoption, and Analysis of a Visual Document Mining Tool for Investigative Journalists",
                "DOI": "10.1109/tvcg.2014.2346431",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346431",
                "FirstPage": 2271,
                "LastPage": 2280,
                "PaperType": "J",
                "Abstract": "For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system “in the wild”, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of “exploring” a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.",
                "AuthorNamesDeduped": "Matthew Brehmer;Stephen Ingram;Jonathan Stray;Tamara Munzner",
                "AuthorNames": "Matthew Brehmer;Stephen Ingram;Jonathan Stray;Tamara Munzner",
                "AuthorAffiliation": "University of British Columbia;University of British Columbia;Columbia Journalism School and the Associated Press;University of British Columbia",
                "InternalReferences": "0.1109/tvcg.2009.127;10.1109/infvis.2004.19;10.1109/tvcg.2012.224;10.1109/tvcg.2012.213;10.1109/tvcg.2012.260;10.1109/tvcg.2009.140;10.1109/tvcg.2013.162;10.1109/tvcg.2013.153;10.1109/tvcg.2009.148;10.1109/tvcg.2013.124;10.1109/tvcg.2011.239;10.1109/vast.2010.5652940;10.1109/tvcg.2011.209",
                "AuthorKeywords": "Design study, investigative journalism, task and requirements analysis, text and document data, text analysis",
                "AminerCitationCount": 128,
                "CitationCountCrossRef": 58,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 2608,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1182,
                "i": [
                    1182
                ]
            }
        },
        {
            "name": "Veronika Irvine",
            "value": 66,
            "numPapers": 5,
            "cluster": "5",
            "visible": 1,
            "index": 748,
            "x": -67.07821809207599,
            "y": -265.2367106144847,
            "vy": 0,
            "vx": 0,
            "r": 1.075993091537133,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "DimStiller: Workflows for dimensional analysis and reduction",
                "DOI": "10.1109/vast.2010.5652392",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5652392",
                "FirstPage": 3,
                "LastPage": 10,
                "PaperType": "C",
                "Abstract": "DimStiller is a system for dimensionality reduction and analysis. It frames the task of understanding and transforming input dimensions as a series of analysis steps where users transform data tables by chaining together different techniques, called operators, into pipelines of expressions. The individual operators have controls and views that are linked together based on the structure of the expression. Users interact with the operator controls to tune parameter choices, with immediate visual feedback guiding the exploration of local neighborhoods of the space of possible data tables. DimStiller also provides global guidance for navigating data-table space through expression templates called workflows, which permit re-use of common patterns of analysis.",
                "AuthorNamesDeduped": "Stephen Ingram;Tamara Munzner;Veronika Irvine;Melanie Tory;Steven Bergner;Torsten Möller",
                "AuthorNames": "Stephen Ingram;Tamara Munzner;Veronika Irvine;Melanie Tory;Steven Bergner;Torsten Möller",
                "AuthorAffiliation": "University of British Columbia, Canada;University of British Columbia, Canada;University of Victoria, Canada and University of Victoria;University of Victoria, Canada and University of Victoria;Simon Fraser University, Canada;Simon Fraser University, Canada",
                "InternalReferences": "0.1109/infvis.2003.1249013;10.1109/visual.1994.346302;10.1109/tvcg.2006.178;10.1109/infvis.2003.1249015;10.1109/tvcg.2009.153;10.1109/infvis.2004.71",
                "AuthorKeywords": null,
                "AminerCitationCount": 131,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 829,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1723,
                "i": [
                    1723
                ]
            }
        },
        {
            "name": "Steven Bergner",
            "value": 174,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 749,
            "x": 228.7788860662935,
            "y": 150.36695544655376,
            "vy": 0,
            "vx": 0,
            "r": 1.2003454231433506,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "DimStiller: Workflows for dimensional analysis and reduction",
                "DOI": "10.1109/vast.2010.5652392",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5652392",
                "FirstPage": 3,
                "LastPage": 10,
                "PaperType": "C",
                "Abstract": "DimStiller is a system for dimensionality reduction and analysis. It frames the task of understanding and transforming input dimensions as a series of analysis steps where users transform data tables by chaining together different techniques, called operators, into pipelines of expressions. The individual operators have controls and views that are linked together based on the structure of the expression. Users interact with the operator controls to tune parameter choices, with immediate visual feedback guiding the exploration of local neighborhoods of the space of possible data tables. DimStiller also provides global guidance for navigating data-table space through expression templates called workflows, which permit re-use of common patterns of analysis.",
                "AuthorNamesDeduped": "Stephen Ingram;Tamara Munzner;Veronika Irvine;Melanie Tory;Steven Bergner;Torsten Möller",
                "AuthorNames": "Stephen Ingram;Tamara Munzner;Veronika Irvine;Melanie Tory;Steven Bergner;Torsten Möller",
                "AuthorAffiliation": "University of British Columbia, Canada;University of British Columbia, Canada;University of Victoria, Canada and University of Victoria;University of Victoria, Canada and University of Victoria;Simon Fraser University, Canada;Simon Fraser University, Canada",
                "InternalReferences": "0.1109/infvis.2003.1249013;10.1109/visual.1994.346302;10.1109/tvcg.2006.178;10.1109/infvis.2003.1249015;10.1109/tvcg.2009.153;10.1109/infvis.2004.71",
                "AuthorKeywords": null,
                "AminerCitationCount": 131,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 829,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1723,
                "i": [
                    1723
                ]
            }
        },
        {
            "name": "Leland Wilkinson",
            "value": 245,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 750,
            "x": -270.44608686066886,
            "y": 43.691121543759515,
            "vy": 0,
            "vx": 0,
            "r": 1.2820955670696603,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Transforming Scagnostics to Reveal Hidden Features",
                "DOI": "10.1109/tvcg.2014.2346572",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346572",
                "FirstPage": 1624,
                "LastPage": 1632,
                "PaperType": "J",
                "Abstract": "Scagnostics (Scatterplot Diagnostics) were developed by Wilkinson et al. based on an idea of Paul and John Tukey, in order to discern meaningful patterns in large collections of scatterplots. The Tukeys' original idea was intended to overcome the impediments involved in examining large scatterplot matrices (multiplicity of plots and lack of detail). Wilkinson's implementation enabled for the first time scagnostics computations on many points as well as many plots. Unfortunately, scagnostics are sensitive to scale transformations. We illustrate the extent of this sensitivity and show how it is possible to pair statistical transformations with scagnostics to enable discovery of hidden structures in data that are not discernible in untransformed visualizations.",
                "AuthorNamesDeduped": "Dang Tuan Nhon;Leland Wilkinson",
                "AuthorNames": "Tuan Nhon Dang;Leland Wilkinson",
                "AuthorAffiliation": "Department of Computer Science, University of Illinois at Chicago;Department of Computer Science, University of Illinois at Chicago",
                "InternalReferences": "0.1109/tvcg.2006.163;10.1109/infvis.2005.1532142;10.1109/tvcg.2013.187;10.1109/tvcg.2011.167;10.1109/vast.2006.261423;10.1109/tvcg.2010.184;10.1109/vast.2011.6102437;10.1109/vast.2007.4389006",
                "AuthorKeywords": "Scagnostics, Scatterplot matrix, Transformation, High-Dimensional Visual Analytics",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 754,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1273,
                "i": [
                    1273
                ]
            }
        },
        {
            "name": "Robert L. Grossman",
            "value": 143,
            "numPapers": 2,
            "cluster": "5",
            "visible": 1,
            "index": 751,
            "x": 170.01875635561575,
            "y": -215.04330374901184,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Graph-theoretic scagnostics",
                "DOI": "10.1109/infvis.2005.1532142",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532142",
                "FirstPage": 157,
                "LastPage": 164,
                "PaperType": "C",
                "Abstract": "We introduce Tukey and Tukey scagnostics and develop graph-theoretic methods for implementing their procedure on large datasets.",
                "AuthorNamesDeduped": "Leland Wilkinson;Anushka Anand;Robert L. Grossman",
                "AuthorNames": "L. Wilkinson;A. Anand;R. Grossman",
                "AuthorAffiliation": "Northwestern University, USA;University of Illinois, Chicago, Chicago, USA;University of Illinois, Chicago, Chicago, USA",
                "InternalReferences": "0.1109/infvis.2003.1249006;10.1109/infvis.2004.3;10.1109/infvis.2004.15",
                "AuthorKeywords": "visualization, statistical graphics",
                "AminerCitationCount": 389,
                "CitationCountCrossRef": 90,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1388,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2332,
                "i": [
                    2332
                ]
            }
        },
        {
            "name": "Qisen Yang",
            "value": 0,
            "numPapers": 14,
            "cluster": "3",
            "visible": 1,
            "index": 752,
            "x": 19.906356114084645,
            "y": 273.59410992610793,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Sporthesia: Augmenting Sports Videos Using Natural Language",
                "DOI": "10.1109/tvcg.2022.3209497",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209497",
                "FirstPage": 918,
                "LastPage": 928,
                "PaperType": "J",
                "Abstract": "Augmented sports videos, which combine visualizations and video effects to present data in actual scenes, can communicate insights engagingly and thus have been increasingly popular for sports enthusiasts around the world. Yet, creating augmented sports videos remains a challenging task, requiring considerable time and video editing skills. On the other hand, sports insights are often communicated using natural language, such as in commentaries, oral presentations, and articles, but usually lack visual cues. Thus, this work aims to facilitate the creation of augmented sports videos by enabling analysts to directly create visualizations embedded in videos using insights expressed in natural language. To achieve this goal, we propose a three-step approach – 1) detecting visualizable entities in the text, 2) mapping these entities into visualizations, and 3) scheduling these visualizations to play with the video – and analyzed 155 sports video clips and the accompanying commentaries for accomplishing these steps. Informed by our analysis, we have designed and implemented Sporthesia, a proof-of-concept system that takes racket-based sports videos and textual commentaries as the input and outputs augmented videos. We demonstrate Sporthesia's applicability in two exemplar scenarios, i.e., authoring augmented sports videos using text and augmenting historical sports videos based on auditory comments. A technical evaluation shows that Sporthesia achieves high accuracy (F1-score of 0.9) in detecting visualizable entities in the text. An expert evaluation with eight sports analysts suggests high utility, effectiveness, and satisfaction with our language-driven authoring method and provides insights for future improvement and opportunities.",
                "AuthorNamesDeduped": "Zhutian Chen;Qisen Yang;Xiao Xie;Johanna Beyer;Haijun Xia;Yingcai Wu;Hanspeter Pfister",
                "AuthorNames": "Zhutian Chen;Qisen Yang;Xiao Xie;Johanna Beyer;Haijun Xia;Yingcai Wu;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;Zhejiang University, China;Zhejiang University, China;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA;UC San Diego, USA;Zhejiang University, China;John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2021.3114835;10.1109/tvcg.2019.2934810;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934785;10.1109/tvcg.2021.3114775;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030378;10.1109/tvcg.2013.192;10.1109/tvcg.2017.2745181;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030396",
                "AuthorKeywords": "Augmented Sports Videos,Language-driven Authoring Tool,Video-based Visualization,Sports Visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 78,
                "DownloadsXplore": 738,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 197,
                "i": [
                    197
                ]
            }
        },
        {
            "name": "Ligan Cai",
            "value": 6,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 753,
            "x": -199.62099978308797,
            "y": -188.4183017798441,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Erato: Cooperative Data Story Editing via Fact Interpolation",
                "DOI": "10.1109/tvcg.2022.3209428",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209428",
                "FirstPage": 983,
                "LastPage": 993,
                "PaperType": "J",
                "Abstract": "As an effective form of narrative visualization, visual data stories are widely used in data-driven storytelling to communicate complex insights and support data understanding. Although important, they are difficult to create, as a variety of interdisciplinary skills, such as data analysis and design, are required. In this work, we introduce Erato, a human-machine cooperative data story editing system, which allows users to generate insightful and fluent data stories together with the computer. Specifically, Erato only requires a number of keyframes provided by the user to briefly describe the topic and structure of a data story. Meanwhile, our system leverages a novel interpolation algorithm to help users insert intermediate frames between the keyframes to smooth the transition. We evaluated the effectiveness and usefulness of the Erato system via a series of evaluations including a Turing test, a controlled user study, a performance validation, and interviews with three expert users. The evaluation results showed that the proposed interpolation technique was able to generate coherent story content and help users create data stories more efficiently.",
                "AuthorNamesDeduped": "Mengdi Sun;Ligan Cai;Weiwei Cui;Yanqiu Wu 0001;Yang Shi 0007;Nan Cao 0001",
                "AuthorNames": "Mengdi Sun;Ligan Cai;Weiwei Cui;Yanqiu Wu;Yang Shi;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Microsoft Research Asia, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China;Intelligent Big Data Visualization Lab, Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2015.2467732;10.1109/tvcg.2016.2598876;10.1109/tvcg.2021.3114804;10.1109/tvcg.2019.2934785;10.1109/tvcg.2015.2467531;10.1109/tvcg.2013.119;10.1109/tvcg.2020.3030360;10.1109/tvcg.2021.3114775;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2012.249;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030403;10.1109/visual.2005.1532849;10.1109/tvcg.2018.2865232;10.1109/tvcg.2019.2934398;10.1109/visual.1995.480798;10.1109/tvcg.2015.2467191;10.1109/tvcg.2021.3114774",
                "AuthorKeywords": "Interpolation,visual storytelling,human-machine cooperation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 721,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 198,
                "i": [
                    198
                ]
            }
        },
        {
            "name": "Xu-Meng Wang",
            "value": 67,
            "numPapers": 3,
            "cluster": "3",
            "visible": 1,
            "index": 754,
            "x": 274.6511158038866,
            "y": 4.094458166849837,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "GraphProtector: A Visual Interface for Employing and Assessing Multiple Privacy Preserving Graph Algorithms",
                "DOI": "10.1109/tvcg.2018.2865021",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865021",
                "FirstPage": 193,
                "LastPage": 203,
                "PaperType": "J",
                "Abstract": "Analyzing social networks reveals the relationships between individuals and groups in the data. However, such analysis can also lead to privacy exposure (whether intentionally or inadvertently): leaking the real-world identity of ostensibly anonymous individuals. Most sanitization strategies modify the graph's structure based on hypothesized tactics that an adversary would employ. While combining multiple anonymization schemes provides a more comprehensive privacy protection, deciding the appropriate set of techniques-along with evaluating how applying the strategies will affect the utility of the anonymized results-remains a significant challenge. To address this problem, we introduce GraphProtector, a visual interface that guides a user through a privacy preservation pipeline. GraphProtector enables multiple privacy protection schemes which can be simultaneously combined together as a hybrid approach. To demonstrate the effectiveness of GraphPro tector, we report several case studies and feedback collected from interviews with expert users in various scenarios.",
                "AuthorNamesDeduped": "Xu-Meng Wang;Wei Chen 0001;Jia-Kai Chou;Chris Bryan;Huihua Guan;Wenlong Chen;Rusheng Pan;Kwan-Liu Ma",
                "AuthorNames": "Xumeng Wang;Wei Chen;Jia-Kai Chou;Chris Bryan;Huihua Guan;Wenlong Chen;Rusheng Pan;Kwan-Liu Ma",
                "AuthorAffiliation": "Zhejiang University;Zhejiang University;University of California, Davis;University of California, Davis;Zhejiang University, Alibaba Group;Zhejiang University;Zhejiang University;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2011.163;10.1109/tvcg.2017.2745139;10.1109/tvcg.2014.2346920",
                "AuthorKeywords": "Graph privacy,k-anonymity,structural features,privacy preservation",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1145,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 751,
                "i": [
                    751
                ]
            }
        },
        {
            "name": "Huihua Guan",
            "value": 67,
            "numPapers": 3,
            "cluster": "3",
            "visible": 1,
            "index": 755,
            "x": -205.42094722097548,
            "y": 182.62594131951025,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "GraphProtector: A Visual Interface for Employing and Assessing Multiple Privacy Preserving Graph Algorithms",
                "DOI": "10.1109/tvcg.2018.2865021",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865021",
                "FirstPage": 193,
                "LastPage": 203,
                "PaperType": "J",
                "Abstract": "Analyzing social networks reveals the relationships between individuals and groups in the data. However, such analysis can also lead to privacy exposure (whether intentionally or inadvertently): leaking the real-world identity of ostensibly anonymous individuals. Most sanitization strategies modify the graph's structure based on hypothesized tactics that an adversary would employ. While combining multiple anonymization schemes provides a more comprehensive privacy protection, deciding the appropriate set of techniques-along with evaluating how applying the strategies will affect the utility of the anonymized results-remains a significant challenge. To address this problem, we introduce GraphProtector, a visual interface that guides a user through a privacy preservation pipeline. GraphProtector enables multiple privacy protection schemes which can be simultaneously combined together as a hybrid approach. To demonstrate the effectiveness of GraphPro tector, we report several case studies and feedback collected from interviews with expert users in various scenarios.",
                "AuthorNamesDeduped": "Xu-Meng Wang;Wei Chen 0001;Jia-Kai Chou;Chris Bryan;Huihua Guan;Wenlong Chen;Rusheng Pan;Kwan-Liu Ma",
                "AuthorNames": "Xumeng Wang;Wei Chen;Jia-Kai Chou;Chris Bryan;Huihua Guan;Wenlong Chen;Rusheng Pan;Kwan-Liu Ma",
                "AuthorAffiliation": "Zhejiang University;Zhejiang University;University of California, Davis;University of California, Davis;Zhejiang University, Alibaba Group;Zhejiang University;Zhejiang University;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2011.163;10.1109/tvcg.2017.2745139;10.1109/tvcg.2014.2346920",
                "AuthorKeywords": "Graph privacy,k-anonymity,structural features,privacy preservation",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1145,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 751,
                "i": [
                    751
                ]
            }
        },
        {
            "name": "Wenlong Chen",
            "value": 67,
            "numPapers": 3,
            "cluster": "3",
            "visible": 1,
            "index": 756,
            "x": 28.127559279854406,
            "y": -273.60343639829944,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "GraphProtector: A Visual Interface for Employing and Assessing Multiple Privacy Preserving Graph Algorithms",
                "DOI": "10.1109/tvcg.2018.2865021",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865021",
                "FirstPage": 193,
                "LastPage": 203,
                "PaperType": "J",
                "Abstract": "Analyzing social networks reveals the relationships between individuals and groups in the data. However, such analysis can also lead to privacy exposure (whether intentionally or inadvertently): leaking the real-world identity of ostensibly anonymous individuals. Most sanitization strategies modify the graph's structure based on hypothesized tactics that an adversary would employ. While combining multiple anonymization schemes provides a more comprehensive privacy protection, deciding the appropriate set of techniques-along with evaluating how applying the strategies will affect the utility of the anonymized results-remains a significant challenge. To address this problem, we introduce GraphProtector, a visual interface that guides a user through a privacy preservation pipeline. GraphProtector enables multiple privacy protection schemes which can be simultaneously combined together as a hybrid approach. To demonstrate the effectiveness of GraphPro tector, we report several case studies and feedback collected from interviews with expert users in various scenarios.",
                "AuthorNamesDeduped": "Xu-Meng Wang;Wei Chen 0001;Jia-Kai Chou;Chris Bryan;Huihua Guan;Wenlong Chen;Rusheng Pan;Kwan-Liu Ma",
                "AuthorNames": "Xumeng Wang;Wei Chen;Jia-Kai Chou;Chris Bryan;Huihua Guan;Wenlong Chen;Rusheng Pan;Kwan-Liu Ma",
                "AuthorAffiliation": "Zhejiang University;Zhejiang University;University of California, Davis;University of California, Davis;Zhejiang University, Alibaba Group;Zhejiang University;Zhejiang University;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2011.163;10.1109/tvcg.2017.2745139;10.1109/tvcg.2014.2346920",
                "AuthorKeywords": "Graph privacy,k-anonymity,structural features,privacy preservation",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1145,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 751,
                "i": [
                    751
                ]
            }
        },
        {
            "name": "Tianyi Lao",
            "value": 38,
            "numPapers": 1,
            "cluster": "3",
            "visible": 1,
            "index": 757,
            "x": 164.18448732343865,
            "y": 220.89240394893534,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "A Utility-Aware Visual Approach for Anonymizing Multi-Attribute Tabular Data",
                "DOI": "10.1109/tvcg.2017.2745139",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745139",
                "FirstPage": 351,
                "LastPage": 360,
                "PaperType": "J",
                "Abstract": "Sharing data for public usage requires sanitization to prevent sensitive information from leaking. Previous studies have presented methods for creating privacy preserving visualizations. However, few of them provide sufficient feedback to users on how much utility is reduced (or preserved) during such a process. To address this, we design a visual interface along with a data manipulation pipeline that allows users to gauge utility loss while interactively and iteratively handling privacy issues in their data. Widely known and discussed types of privacy models, i.e., syntactic anonymity and differential privacy, are integrated and compared under different use case scenarios. Case study results on a variety of examples demonstrate the effectiveness of our approach.",
                "AuthorNamesDeduped": "Xu-Meng Wang;Jia-Kai Chou;Wei Chen 0001;Huihua Guan;Wenlong Chen;Tianyi Lao;Kwan-Liu Ma",
                "AuthorNames": "Xumeng Wang;Jia-Kai Chou;Wei Chen;Huihua Guan;Wenlong Chen;Tianyi Lao;Kwan-Liu Ma",
                "AuthorAffiliation": "Zhejiang University;University of California, Davis;Zhejiang University;Zhejiang University;Zhejiang University;Zhejiang University;University of California, Davis",
                "InternalReferences": "0.1109/tvcg.2011.163;10.1109/tvcg.2015.2467671",
                "AuthorKeywords": "Privacy preserving visualization,utility aware anonymization,syntactic anonymity,differential privacy",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 1194,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 867,
                "i": [
                    867
                ]
            }
        },
        {
            "name": "Yanyan Wang",
            "value": 112,
            "numPapers": 25,
            "cluster": "2",
            "visible": 1,
            "index": 758,
            "x": -270.4535467962786,
            "y": -52.008451479669645,
            "vy": 0,
            "vx": 0,
            "r": 1.128957973517559,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Revisiting Stress Majorization as a Unified Framework for Interactive Constrained Graph Visualization",
                "DOI": "10.1109/tvcg.2017.2745919",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745919",
                "FirstPage": 489,
                "LastPage": 499,
                "PaperType": "J",
                "Abstract": "We present an improved stress majorization method that incorporates various constraints, including directional constraints without the necessity of solving a constraint optimization problem. This is achieved by reformulating the stress function to impose constraints on both the edge vectors and lengths instead of just on the edge lengths (node distances). This is a unified framework for both constrained and unconstrained graph visualizations, where we can model most existing layout constraints, as well as develop new ones such as the star shapes and cluster separation constraints within stress majorization. This improvement also allows us to parallelize computation with an efficient GPU conjugant gradient solver, which yields fast and stable solutions, even for large graphs. As a result, we allow the constraint-based exploration of large graphs with 10K nodes - an approach which previous methods cannot support.",
                "AuthorNamesDeduped": "Yunhai Wang;Yanyan Wang;Yinqi Sun;Lifeng Zhu;Kecheng Lu;Chi-Wing Fu;Michael Sedlmair;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Yanyan Wang;Yinqi Sun;Lifeng Zhu;Kecheng Lu;Chi-Wing Fu;Michael Sedlmair;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University;Shandong University;Shandong University;Southeast University;Shandong University;VRHIT SIAT, Chinese University of Hong Kong;University of Vienna, Austria;Konstanz University, VCC SIAT, China;Shandong University",
                "InternalReferences": "0.1109/infvis.2005.1532130;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/tvcg.2008.130;10.1109/infvis.2004.66;10.1109/tvcg.2009.108;10.1109/tvcg.2012.236",
                "AuthorKeywords": "Graph visualization,stress majorization,constraints",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 1264,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 792,
                "i": [
                    792
                ]
            }
        },
        {
            "name": "Yinqi Sun",
            "value": 100,
            "numPapers": 16,
            "cluster": "2",
            "visible": 1,
            "index": 759,
            "x": 234.70979945917617,
            "y": -144.43444893041723,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Revisiting Stress Majorization as a Unified Framework for Interactive Constrained Graph Visualization",
                "DOI": "10.1109/tvcg.2017.2745919",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745919",
                "FirstPage": 489,
                "LastPage": 499,
                "PaperType": "J",
                "Abstract": "We present an improved stress majorization method that incorporates various constraints, including directional constraints without the necessity of solving a constraint optimization problem. This is achieved by reformulating the stress function to impose constraints on both the edge vectors and lengths instead of just on the edge lengths (node distances). This is a unified framework for both constrained and unconstrained graph visualizations, where we can model most existing layout constraints, as well as develop new ones such as the star shapes and cluster separation constraints within stress majorization. This improvement also allows us to parallelize computation with an efficient GPU conjugant gradient solver, which yields fast and stable solutions, even for large graphs. As a result, we allow the constraint-based exploration of large graphs with 10K nodes - an approach which previous methods cannot support.",
                "AuthorNamesDeduped": "Yunhai Wang;Yanyan Wang;Yinqi Sun;Lifeng Zhu;Kecheng Lu;Chi-Wing Fu;Michael Sedlmair;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Yanyan Wang;Yinqi Sun;Lifeng Zhu;Kecheng Lu;Chi-Wing Fu;Michael Sedlmair;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University;Shandong University;Shandong University;Southeast University;Shandong University;VRHIT SIAT, Chinese University of Hong Kong;University of Vienna, Austria;Konstanz University, VCC SIAT, China;Shandong University",
                "InternalReferences": "0.1109/infvis.2005.1532130;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/tvcg.2008.130;10.1109/infvis.2004.66;10.1109/tvcg.2009.108;10.1109/tvcg.2012.236",
                "AuthorKeywords": "Graph visualization,stress majorization,constraints",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 1264,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 792,
                "i": [
                    792
                ]
            }
        },
        {
            "name": "Lifeng Zhu",
            "value": 85,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 760,
            "x": -75.55332288521922,
            "y": 265.2200886075597,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Revisiting Stress Majorization as a Unified Framework for Interactive Constrained Graph Visualization",
                "DOI": "10.1109/tvcg.2017.2745919",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745919",
                "FirstPage": 489,
                "LastPage": 499,
                "PaperType": "J",
                "Abstract": "We present an improved stress majorization method that incorporates various constraints, including directional constraints without the necessity of solving a constraint optimization problem. This is achieved by reformulating the stress function to impose constraints on both the edge vectors and lengths instead of just on the edge lengths (node distances). This is a unified framework for both constrained and unconstrained graph visualizations, where we can model most existing layout constraints, as well as develop new ones such as the star shapes and cluster separation constraints within stress majorization. This improvement also allows us to parallelize computation with an efficient GPU conjugant gradient solver, which yields fast and stable solutions, even for large graphs. As a result, we allow the constraint-based exploration of large graphs with 10K nodes - an approach which previous methods cannot support.",
                "AuthorNamesDeduped": "Yunhai Wang;Yanyan Wang;Yinqi Sun;Lifeng Zhu;Kecheng Lu;Chi-Wing Fu;Michael Sedlmair;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Yanyan Wang;Yinqi Sun;Lifeng Zhu;Kecheng Lu;Chi-Wing Fu;Michael Sedlmair;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University;Shandong University;Shandong University;Southeast University;Shandong University;VRHIT SIAT, Chinese University of Hong Kong;University of Vienna, Austria;Konstanz University, VCC SIAT, China;Shandong University",
                "InternalReferences": "0.1109/infvis.2005.1532130;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/tvcg.2008.130;10.1109/infvis.2004.66;10.1109/tvcg.2009.108;10.1109/tvcg.2012.236",
                "AuthorKeywords": "Graph visualization,stress majorization,constraints",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 1264,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 792,
                "i": [
                    792
                ]
            }
        },
        {
            "name": "Barbora Kozlíková",
            "value": 29,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 761,
            "x": -123.52405916234767,
            "y": -246.76265278209507,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Visualization of Large Molecular Trajectories",
                "DOI": "10.1109/tvcg.2018.2864851",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864851",
                "FirstPage": 987,
                "LastPage": 996,
                "PaperType": "J",
                "Abstract": "The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user the information about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is suitable also for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.",
                "AuthorNamesDeduped": "David Duran;Pedro Hermosilla;Timo Ropinski;Barbora Kozlíková;Àlvar Vinacua;Pere-Pau Vázquez",
                "AuthorNames": "David Duran;Pedro Hermosilla;Timo Ropinski;Barbora Kozlíková;Álvar Vinacua;Pere-Pau Vázquez",
                "AuthorAffiliation": "Institut de Robotica i Informatica Industrial, Barcelona, Catalunya, ES;Visual Computing Group, U. Ulm.;Visual Computing Group, U. Ulm.;Masarykova univerzita, Brno, Jihomoravský, CZ;Institut de Robotica i Informatica Industrial, Barcelona, Catalunya, ES;Institut de Robotica i Informatica Industrial, Barcelona, Catalunya, ES",
                "InternalReferences": "0.1109/tvcg.2015.2467434;10.1109/visual.2005.1532792;10.1109/tvcg.2016.2598825;10.1109/tvcg.2016.2598797;10.1109/tvcg.2014.2346574;10.1109/tvcg.2012.225",
                "AuthorKeywords": "Molecular visualization,simulation inspection,long trajectories",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 712,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 706,
                "i": [
                    706
                ]
            }
        },
        {
            "name": "Jan Byska",
            "value": 9,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 762,
            "x": 257.9377688732991,
            "y": 98.58046149549389,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "sMolBoxes: Dataflow Model for Molecular Dynamics Exploration",
                "DOI": "10.1109/tvcg.2022.3209411",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209411",
                "FirstPage": 581,
                "LastPage": 590,
                "PaperType": "J",
                "Abstract": "We present sMolBoxes, a dataflow representation for the exploration and analysis of long molecular dynamics (MD) simulations. When MD simulations reach millions of snapshots, a frame-by-frame observation is not feasible anymore. Thus, biochemists rely to a large extent only on quantitative analysis of geometric and physico-chemical properties. However, the usage of abstract methods to study inherently spatial data hinders the exploration and poses a considerable workload. sMolBoxes link quantitative analysis of a user-defined set of properties with interactive 3D visualizations. They enable visual explanations of molecular behaviors, which lead to an efficient discovery of biochemically significant parts of the MD simulation. sMolBoxes follow a node-based model for flexible definition, combination, and immediate evaluation of properties to be investigated. Progressive analytics enable fluid switching between multiple properties, which facilitates hypothesis generation. Each sMolBox provides quick insight to an observed property or function, available in more detail in the bigBox View. The case studies illustrate that even with relatively few sMolBoxes, it is possible to express complex analytical tasks, and their use in exploratory analysis is perceived as more efficient than traditional scripting-based methods.",
                "AuthorNamesDeduped": "Pavol Ulbrich;Manuela Waldner;Katarína Furmanová;Sérgio M. Marques;David Bednár;Barbora Kozlíková;Jan Byska",
                "AuthorNames": "Pavol Ulbrich;Manuela Waldner;Katarína Furmanová;Sérgio M. Marques;David Bednář;Barbora Kozlíková;Jan Byška",
                "AuthorAffiliation": "Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;TU Wien, Vienna, Austria;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;Department of Experimental Biology, Loschmidt Laboratories, Faculty of Science, RECETOX, Masaryk University, Brno, Czech Republic;Department of Experimental Biology, Loschmidt Laboratories, Faculty of Science, RECETOX, Masaryk University, Brno, Czech Republic;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic;Visitlab, Faculty of Informatics, Masaryk University, Czech Republic",
                "InternalReferences": "0.1109/tvcg.2018.2864851;10.1109/vast.2007.4389013;10.1109/tvcg.2012.213;10.1109/tvcg.2011.225;10.1109/tvcg.2016.2598497;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Molecular dynamics,structure,node-based visualization,progressive analytics",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 407,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 203,
                "i": [
                    203
                ]
            }
        },
        {
            "name": "Brian Bollen",
            "value": 9,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 763,
            "x": -256.9537569078371,
            "y": 101.61085970971881,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Computing a Stable Distance on Merge Trees",
                "DOI": "10.1109/tvcg.2022.3209395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209395",
                "FirstPage": 1168,
                "LastPage": 1177,
                "PaperType": "J",
                "Abstract": "Distances on merge trees facilitate visual comparison of collections of scalar fields. Two desirable properties for these distances to exhibit are 1) the ability to discern between scalar fields which other, less complex topological summaries cannot and 2) to still be robust to perturbations in the dataset. The combination of these two properties, known respectively as stability and discriminativity, has led to theoretical distances which are either thought to be or shown to be computationally complex and thus their implementations have been scarce. In order to design similarity measures on merge trees which are computationally feasible for more complex merge trees, many researchers have elected to loosen the restrictions on at least one of these two properties. The question still remains, however, if there are practical situations where trading these desirable properties is necessary. Here we construct a distance between merge trees which is designed to retain both discriminativity and stability. While our approach can be expensive for large merge trees, we illustrate its use in a setting where the number of nodes is small. This setting can be made more practical since we also provide a proof that persistence simplification increases the outputted distance by at most half of the simplified value. We demonstrate our distance measure on applications in shape comparison and on detection of periodicity in the von Kármán vortex street.",
                "AuthorNamesDeduped": "Brian Bollen;Pasindu Tennakoon;Joshua A. Levine",
                "AuthorNames": "Brian Bollen;Pasindu Tennakoon;Joshua A. Levine",
                "AuthorAffiliation": "Department of Mathematics, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA",
                "InternalReferences": "0.1109/tvcg.2012.287;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2007.70603;10.1109/tvcg.2006.186;10.1109/tvcg.2011.236;10.1109/tvcg.2017.2743938;10.1109/tvcg.2009.163;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Merge trees,scalar fields,distance measure,stability,edit distance,persistence",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 284,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 204,
                "i": [
                    204
                ]
            }
        },
        {
            "name": "Pasindu Tennakoon",
            "value": 9,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 764,
            "x": 120.91165871374976,
            "y": -248.65713500137036,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Computing a Stable Distance on Merge Trees",
                "DOI": "10.1109/tvcg.2022.3209395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209395",
                "FirstPage": 1168,
                "LastPage": 1177,
                "PaperType": "J",
                "Abstract": "Distances on merge trees facilitate visual comparison of collections of scalar fields. Two desirable properties for these distances to exhibit are 1) the ability to discern between scalar fields which other, less complex topological summaries cannot and 2) to still be robust to perturbations in the dataset. The combination of these two properties, known respectively as stability and discriminativity, has led to theoretical distances which are either thought to be or shown to be computationally complex and thus their implementations have been scarce. In order to design similarity measures on merge trees which are computationally feasible for more complex merge trees, many researchers have elected to loosen the restrictions on at least one of these two properties. The question still remains, however, if there are practical situations where trading these desirable properties is necessary. Here we construct a distance between merge trees which is designed to retain both discriminativity and stability. While our approach can be expensive for large merge trees, we illustrate its use in a setting where the number of nodes is small. This setting can be made more practical since we also provide a proof that persistence simplification increases the outputted distance by at most half of the simplified value. We demonstrate our distance measure on applications in shape comparison and on detection of periodicity in the von Kármán vortex street.",
                "AuthorNamesDeduped": "Brian Bollen;Pasindu Tennakoon;Joshua A. Levine",
                "AuthorNames": "Brian Bollen;Pasindu Tennakoon;Joshua A. Levine",
                "AuthorAffiliation": "Department of Mathematics, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA;Department of Computer Science, The University of Arizona, USA",
                "InternalReferences": "0.1109/tvcg.2012.287;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2007.70603;10.1109/tvcg.2006.186;10.1109/tvcg.2011.236;10.1109/tvcg.2017.2743938;10.1109/tvcg.2009.163;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Merge trees,scalar fields,distance measure,stability,edit distance,persistence",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 284,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 204,
                "i": [
                    204
                ]
            }
        },
        {
            "name": "Bernd Hamann",
            "value": 574,
            "numPapers": 82,
            "cluster": "11",
            "visible": 1,
            "index": 765,
            "x": 78.86051314595109,
            "y": 265.1999612864928,
            "vy": 0,
            "vx": 0,
            "r": 1.66090961427749,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "A Practical Approach to Morse-Smale Complex Computation: Scalability and Generality",
                "DOI": "10.1109/tvcg.2008.110",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.110",
                "FirstPage": 1619,
                "LastPage": 1626,
                "PaperType": "J",
                "Abstract": "The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, efficient computation of the MS complex for large scale data remains a challenging problem. We describe a new algorithm and easily extensible framework for computing MS complexes for large scale data of any dimension where scalar values are given at the vertices of a closure-finite and weak topology (CW) complex, therefore enabling computation on a wide variety of meshes such as regular grids, simplicial meshes, and adaptive multiresolution (AMR) meshes. A new divide-and-conquer strategy allows for memory-efficient computation of the MS complex and simplification on-the-fly to control the size of the output. In addition to being able to handle various data formats, the framework supports implementation-specific optimizations, for example, for regular data. We present the complete characterization of critical point cancellations in all dimensions. This technique enables the topology based analysis of large data on off-the-shelf computers. In particular we demonstrate the first full computation of the MS complex for a 1 billion/1024<sup>3</sup>node grid on a laptop computer with 2 Gb memory.",
                "AuthorNamesDeduped": "Attila Gyulassy;Peer-Timo Bremer;Bernd Hamann;Valerio Pascucci",
                "AuthorNames": "Attila Gyulassy;Peer-Timo Bremer;Bernd Hamann;Valerio Pascucci",
                "AuthorAffiliation": "UC Davis and Lawrence Livermore National Laboratory;Lawrence Livemore National Laboratory;University of California, Davis, USA;University of Utah, USA",
                "InternalReferences": "0.1109/visual.2005.1532839;10.1109/visual.1998.745329;10.1109/visual.2004.96;10.1109/tvcg.2007.70552;10.1109/visual.1998.745312;10.1109/visual.2000.885680;10.1109/visual.2000.885703;10.1109/tvcg.2006.186",
                "AuthorKeywords": "Topology-based analysis, Morse-Smale complex, large scale data",
                "AminerCitationCount": 237,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 805,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2035,
                "i": [
                    2035
                ]
            }
        },
        {
            "name": "Mark A. Duchaineau",
            "value": 172,
            "numPapers": 32,
            "cluster": "11",
            "visible": 1,
            "index": 766,
            "x": -237.44422714396504,
            "y": -142.37358953122285,
            "vy": 0,
            "vx": 0,
            "r": 1.19804260218768,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Topologically Clean Distance fields",
                "DOI": "10.1109/tvcg.2007.70603",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70603",
                "FirstPage": 1432,
                "LastPage": 1439,
                "PaperType": "J",
                "Abstract": "Analysis of the results obtained from material simulations is important in the physical sciences. Our research was motivated by the need to investigate the properties of a simulated porous solid as it is hit by a projectile. This paper describes two techniques for the generation of distance fields containing a minimal number of topological features, and we use them to identify features of the material. We focus on distance fields defined on a volumetric domain considering the distance to a given surface embedded within the domain. Topological features of the field are characterized by its critical points. Our first method begins with a distance field that is computed using a standard approach, and simplifies this field using ideas from Morse theory. We present a procedure for identifying and extracting a feature set through analysis of the MS complex, and apply it to find the invariants in the clean distance field. Our second method proceeds by advancing a front, beginning at the surface, and locally controlling the creation of new critical points. We demonstrate the value of topologically clean distance fields for the analysis of filament structures in porous solids. Our methods produce a curved skeleton representation of the filaments that helps material scientists to perform a detailed qualitative and quantitative analysis of pores, and hence infer important material properties. Furthermore, we provide a set of criteria for finding the \"difference\" between two skeletal structures, and use this to examine how the structure of the porous solid changes over several timesteps in the simulation of the particle impact.",
                "AuthorNamesDeduped": "Attila Gyulassy;Mark A. Duchaineau;Vijay Natarajan;Valerio Pascucci;Eduardo M. Bringa;Andrew Higginbotham;Bernd Hamann",
                "AuthorNames": "Attila Gyulassy;Mark Duchaineau;Vijay Natarajan;Valerio Pascucci;Eduardo Bringa;Andrew Higginbotham;Bernd Hamann",
                "AuthorAffiliation": "Institute for Data Analysis and Visualization, Department of Computer Science, University of California, Davis, USA;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, USA;Department of Computer Science and Automation, Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore, India;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, USA;Material Science and Technology Division, Lawrence Livemore National Laboratory, USA;Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK;Institute for Data Analysis and Visualization, Department of Computer Science, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.2005.1532839;10.1109/visual.2005.1532783;10.1109/visual.2003.1250356;10.1109/visual.2004.96;10.1109/visual.2000.885680;10.1109/visual.2000.885703",
                "AuthorKeywords": "Morse theory, Morse-Smale complex, distance field, topological simplification, wavefront, critical point, porous solid, material science",
                "AminerCitationCount": 134,
                "CitationCountCrossRef": 83,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 436,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2164,
                "i": [
                    2164
                ]
            }
        },
        {
            "name": "Eduardo M. Bringa",
            "value": 105,
            "numPapers": 5,
            "cluster": "11",
            "visible": 1,
            "index": 767,
            "x": 271.4328481164964,
            "y": -55.44554953616203,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Topologically Clean Distance fields",
                "DOI": "10.1109/tvcg.2007.70603",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70603",
                "FirstPage": 1432,
                "LastPage": 1439,
                "PaperType": "J",
                "Abstract": "Analysis of the results obtained from material simulations is important in the physical sciences. Our research was motivated by the need to investigate the properties of a simulated porous solid as it is hit by a projectile. This paper describes two techniques for the generation of distance fields containing a minimal number of topological features, and we use them to identify features of the material. We focus on distance fields defined on a volumetric domain considering the distance to a given surface embedded within the domain. Topological features of the field are characterized by its critical points. Our first method begins with a distance field that is computed using a standard approach, and simplifies this field using ideas from Morse theory. We present a procedure for identifying and extracting a feature set through analysis of the MS complex, and apply it to find the invariants in the clean distance field. Our second method proceeds by advancing a front, beginning at the surface, and locally controlling the creation of new critical points. We demonstrate the value of topologically clean distance fields for the analysis of filament structures in porous solids. Our methods produce a curved skeleton representation of the filaments that helps material scientists to perform a detailed qualitative and quantitative analysis of pores, and hence infer important material properties. Furthermore, we provide a set of criteria for finding the \"difference\" between two skeletal structures, and use this to examine how the structure of the porous solid changes over several timesteps in the simulation of the particle impact.",
                "AuthorNamesDeduped": "Attila Gyulassy;Mark A. Duchaineau;Vijay Natarajan;Valerio Pascucci;Eduardo M. Bringa;Andrew Higginbotham;Bernd Hamann",
                "AuthorNames": "Attila Gyulassy;Mark Duchaineau;Vijay Natarajan;Valerio Pascucci;Eduardo Bringa;Andrew Higginbotham;Bernd Hamann",
                "AuthorAffiliation": "Institute for Data Analysis and Visualization, Department of Computer Science, University of California, Davis, USA;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, USA;Department of Computer Science and Automation, Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore, India;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, USA;Material Science and Technology Division, Lawrence Livemore National Laboratory, USA;Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK;Institute for Data Analysis and Visualization, Department of Computer Science, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.2005.1532839;10.1109/visual.2005.1532783;10.1109/visual.2003.1250356;10.1109/visual.2004.96;10.1109/visual.2000.885680;10.1109/visual.2000.885703",
                "AuthorKeywords": "Morse theory, Morse-Smale complex, distance field, topological simplification, wavefront, critical point, porous solid, material science",
                "AminerCitationCount": 134,
                "CitationCountCrossRef": 83,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 436,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2164,
                "i": [
                    2164
                ]
            }
        },
        {
            "name": "Andrew Higginbotham",
            "value": 105,
            "numPapers": 5,
            "cluster": "11",
            "visible": 1,
            "index": 768,
            "x": -162.7991586739463,
            "y": 224.3801103820373,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Topologically Clean Distance fields",
                "DOI": "10.1109/tvcg.2007.70603",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70603",
                "FirstPage": 1432,
                "LastPage": 1439,
                "PaperType": "J",
                "Abstract": "Analysis of the results obtained from material simulations is important in the physical sciences. Our research was motivated by the need to investigate the properties of a simulated porous solid as it is hit by a projectile. This paper describes two techniques for the generation of distance fields containing a minimal number of topological features, and we use them to identify features of the material. We focus on distance fields defined on a volumetric domain considering the distance to a given surface embedded within the domain. Topological features of the field are characterized by its critical points. Our first method begins with a distance field that is computed using a standard approach, and simplifies this field using ideas from Morse theory. We present a procedure for identifying and extracting a feature set through analysis of the MS complex, and apply it to find the invariants in the clean distance field. Our second method proceeds by advancing a front, beginning at the surface, and locally controlling the creation of new critical points. We demonstrate the value of topologically clean distance fields for the analysis of filament structures in porous solids. Our methods produce a curved skeleton representation of the filaments that helps material scientists to perform a detailed qualitative and quantitative analysis of pores, and hence infer important material properties. Furthermore, we provide a set of criteria for finding the \"difference\" between two skeletal structures, and use this to examine how the structure of the porous solid changes over several timesteps in the simulation of the particle impact.",
                "AuthorNamesDeduped": "Attila Gyulassy;Mark A. Duchaineau;Vijay Natarajan;Valerio Pascucci;Eduardo M. Bringa;Andrew Higginbotham;Bernd Hamann",
                "AuthorNames": "Attila Gyulassy;Mark Duchaineau;Vijay Natarajan;Valerio Pascucci;Eduardo Bringa;Andrew Higginbotham;Bernd Hamann",
                "AuthorAffiliation": "Institute for Data Analysis and Visualization, Department of Computer Science, University of California, Davis, USA;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, USA;Department of Computer Science and Automation, Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore, India;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, USA;Material Science and Technology Division, Lawrence Livemore National Laboratory, USA;Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK;Institute for Data Analysis and Visualization, Department of Computer Science, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.2005.1532839;10.1109/visual.2005.1532783;10.1109/visual.2003.1250356;10.1109/visual.2004.96;10.1109/visual.2000.885680;10.1109/visual.2000.885703",
                "AuthorKeywords": "Morse theory, Morse-Smale complex, distance field, topological simplification, wavefront, critical point, porous solid, material science",
                "AminerCitationCount": 134,
                "CitationCountCrossRef": 83,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 436,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2164,
                "i": [
                    2164
                ]
            }
        },
        {
            "name": "David E. Laney",
            "value": 110,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 769,
            "x": -31.544056931486427,
            "y": -275.59929693724393,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Understanding the Structure of the Turbulent Mixing Layer in Hydrodynamic Instabilities",
                "DOI": "10.1109/tvcg.2006.186",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.186",
                "FirstPage": 1053,
                "LastPage": 1060,
                "PaperType": "J",
                "Abstract": "When a heavy fluid is placed above a light fluid, tiny vertical perturbations in the interface create a characteristic structure of rising bubbles and falling spikes known as Rayleigh-Taylor instability. Rayleigh-Taylor instabilities have received much attention over the past half-century because of their importance in understanding many natural and man-made phenomena, ranging from the rate of formation of heavy elements in supernovae to the design of capsules for Inertial Confinement Fusion. We present a new approach to analyze Rayleigh-Taylor instabilities in which we extract a hierarchical segmentation of the mixing envelope surface to identify bubbles and analyze analogous segmentations of fields on the original interface plane. We compute meaningful statistical information that reveals the evolution of topological features and corroborates the observations made by scientists. We also use geometric tracking to follow the evolution of single bubbles and highlight merge/split events leading to the formation of the large and complex structures characteristic of the later stages. In particular we (i) Provide a formal definition of a bubble; (ii) Segment the envelope surface to identify bubbles; (iii) Provide a multi-scale analysis technique to produce statistical measures of bubble growth; (iv) Correlate bubble measurements with analysis of fields on the interface plane; (v) Track the evolution of individual bubbles over time. Our approach is based on the rigorous mathematical foundations of Morse theory and can be applied to a more general class of applications",
                "AuthorNamesDeduped": "David E. Laney;Peer-Timo Bremer;Ajith Mascarenhas;Paul L. Miller;Valerio Pascucci",
                "AuthorNames": "D. Laney;P.-t. Bremer;A. Mascarenhas;P. Miller;V. Pascucci",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2003.1250376;10.1109/visual.2002.1183772;10.1109/visual.2005.1532842;10.1109/visual.2000.885716;10.1109/visual.2004.96;10.1109/visual.2003.1250408;10.1109/visual.2004.107;10.1109/visual.1999.809907;10.1109/visual.1998.745288;10.1109/visual.2005.1532839",
                "AuthorKeywords": "topology, multi-resolution, Morse theory",
                "AminerCitationCount": 216,
                "CitationCountCrossRef": 113,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 570,
                "Award": "BA",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2271,
                "i": [
                    2271
                ]
            }
        },
        {
            "name": "Ajith Mascarenhas",
            "value": 110,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 770,
            "x": 209.5602900138854,
            "y": 182.02880225199587,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Understanding the Structure of the Turbulent Mixing Layer in Hydrodynamic Instabilities",
                "DOI": "10.1109/tvcg.2006.186",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.186",
                "FirstPage": 1053,
                "LastPage": 1060,
                "PaperType": "J",
                "Abstract": "When a heavy fluid is placed above a light fluid, tiny vertical perturbations in the interface create a characteristic structure of rising bubbles and falling spikes known as Rayleigh-Taylor instability. Rayleigh-Taylor instabilities have received much attention over the past half-century because of their importance in understanding many natural and man-made phenomena, ranging from the rate of formation of heavy elements in supernovae to the design of capsules for Inertial Confinement Fusion. We present a new approach to analyze Rayleigh-Taylor instabilities in which we extract a hierarchical segmentation of the mixing envelope surface to identify bubbles and analyze analogous segmentations of fields on the original interface plane. We compute meaningful statistical information that reveals the evolution of topological features and corroborates the observations made by scientists. We also use geometric tracking to follow the evolution of single bubbles and highlight merge/split events leading to the formation of the large and complex structures characteristic of the later stages. In particular we (i) Provide a formal definition of a bubble; (ii) Segment the envelope surface to identify bubbles; (iii) Provide a multi-scale analysis technique to produce statistical measures of bubble growth; (iv) Correlate bubble measurements with analysis of fields on the interface plane; (v) Track the evolution of individual bubbles over time. Our approach is based on the rigorous mathematical foundations of Morse theory and can be applied to a more general class of applications",
                "AuthorNamesDeduped": "David E. Laney;Peer-Timo Bremer;Ajith Mascarenhas;Paul L. Miller;Valerio Pascucci",
                "AuthorNames": "D. Laney;P.-t. Bremer;A. Mascarenhas;P. Miller;V. Pascucci",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2003.1250376;10.1109/visual.2002.1183772;10.1109/visual.2005.1532842;10.1109/visual.2000.885716;10.1109/visual.2004.96;10.1109/visual.2003.1250408;10.1109/visual.2004.107;10.1109/visual.1999.809907;10.1109/visual.1998.745288;10.1109/visual.2005.1532839",
                "AuthorKeywords": "topology, multi-resolution, Morse theory",
                "AminerCitationCount": 216,
                "CitationCountCrossRef": 113,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 570,
                "Award": "BA",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2271,
                "i": [
                    2271
                ]
            }
        },
        {
            "name": "Paul L. Miller",
            "value": 110,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 771,
            "x": -277.6619328895705,
            "y": 7.338325696480764,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Understanding the Structure of the Turbulent Mixing Layer in Hydrodynamic Instabilities",
                "DOI": "10.1109/tvcg.2006.186",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.186",
                "FirstPage": 1053,
                "LastPage": 1060,
                "PaperType": "J",
                "Abstract": "When a heavy fluid is placed above a light fluid, tiny vertical perturbations in the interface create a characteristic structure of rising bubbles and falling spikes known as Rayleigh-Taylor instability. Rayleigh-Taylor instabilities have received much attention over the past half-century because of their importance in understanding many natural and man-made phenomena, ranging from the rate of formation of heavy elements in supernovae to the design of capsules for Inertial Confinement Fusion. We present a new approach to analyze Rayleigh-Taylor instabilities in which we extract a hierarchical segmentation of the mixing envelope surface to identify bubbles and analyze analogous segmentations of fields on the original interface plane. We compute meaningful statistical information that reveals the evolution of topological features and corroborates the observations made by scientists. We also use geometric tracking to follow the evolution of single bubbles and highlight merge/split events leading to the formation of the large and complex structures characteristic of the later stages. In particular we (i) Provide a formal definition of a bubble; (ii) Segment the envelope surface to identify bubbles; (iii) Provide a multi-scale analysis technique to produce statistical measures of bubble growth; (iv) Correlate bubble measurements with analysis of fields on the interface plane; (v) Track the evolution of individual bubbles over time. Our approach is based on the rigorous mathematical foundations of Morse theory and can be applied to a more general class of applications",
                "AuthorNamesDeduped": "David E. Laney;Peer-Timo Bremer;Ajith Mascarenhas;Paul L. Miller;Valerio Pascucci",
                "AuthorNames": "D. Laney;P.-t. Bremer;A. Mascarenhas;P. Miller;V. Pascucci",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2003.1250376;10.1109/visual.2002.1183772;10.1109/visual.2005.1532842;10.1109/visual.2000.885716;10.1109/visual.2004.96;10.1109/visual.2003.1250408;10.1109/visual.2004.107;10.1109/visual.1999.809907;10.1109/visual.1998.745288;10.1109/visual.2005.1532839",
                "AuthorKeywords": "topology, multi-resolution, Morse theory",
                "AminerCitationCount": 216,
                "CitationCountCrossRef": 113,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 570,
                "Award": "BA",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2271,
                "i": [
                    2271
                ]
            }
        },
        {
            "name": "Jun Yuan 0003",
            "value": 41,
            "numPapers": 41,
            "cluster": "1",
            "visible": 1,
            "index": 772,
            "x": 199.9117347661128,
            "y": -193.09401415580797,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Analysis of Neural Architecture Spaces for Summarizing Design Principles",
                "DOI": "10.1109/tvcg.2022.3209404",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209404",
                "FirstPage": 288,
                "LastPage": 298,
                "PaperType": "J",
                "Abstract": "Recent advances in artificial intelligence largely benefit from better neural network architectures. These architectures are a product of a costly process of trial-and-error. To ease this process, we develop ArchExplorer, a visual analysis method for understanding a neural architecture space and summarizing design principles. The key idea behind our method is to make the architecture space explainable by exploiting structural distances between architectures. We formulate the pairwise distance calculation as solving an all-pairs shortest path problem. To improve efficiency, we decompose this problem into a set of single-source shortest path problems. The time complexity is reduced from O(kn2N) to O(knN). Architectures are hierarchically clustered according to the distances between them. A circle-packing-based architecture visualization has been developed to convey both the global relationships between clusters and local neighborhoods of the architectures in each cluster. Two case studies and a post-analysis are presented to demonstrate the effectiveness of ArchExplorer in summarizing design principles and selecting better-performing architectures.",
                "AuthorNamesDeduped": "Jun Yuan 0003;Mengchen Liu;Fengyuan Tian;Shixia Liu",
                "AuthorNames": "Jun Yuan;Mengchen Liu;Fengyuan Tian;Shixia Liu",
                "AuthorAffiliation": "BNRist, Tsinghua University, China;Microsoft, USA;BNRist, Tsinghua University, China;BNRist, Tsinghua University, China",
                "InternalReferences": "0.1109/tvcg.2019.2934261;10.1109/tvcg.2020.3028976;10.1109/tvcg.2021.3114683;10.1109/tvcg.2015.2466992;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2020.3028888;10.1109/vast.2017.8585721;10.1109/tvcg.2020.3030361;10.1109/tvcg.2020.3030380;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2018.2864838;10.1109/tvcg.2017.2744158;10.1109/tvcg.2020.3030471;10.1109/tvcg.2020.3030418;10.1109/vast50239.2020.00007;10.1109/tvcg.2020.3030432;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Machine learning,visual analytics,neural architecture search,design principle,knowledge discovery",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 77,
                "DownloadsXplore": 936,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 207,
                "i": [
                    207
                ]
            }
        },
        {
            "name": "Xiting Wang",
            "value": 158,
            "numPapers": 48,
            "cluster": "1",
            "visible": 1,
            "index": 773,
            "x": -16.986543000352654,
            "y": 277.59945489301157,
            "vy": 0,
            "vx": 0,
            "r": 1.181922855497985,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Visual Diagnosis of Tree Boosting Methods",
                "DOI": "10.1109/tvcg.2017.2744378",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744378",
                "FirstPage": 163,
                "LastPage": 173,
                "PaperType": "J",
                "Abstract": "Tree boosting, which combines weak learners (typically decision trees) to generate a strong learner, is a highly effective and widely used machine learning method. However, the development of a high performance tree boosting model is a time-consuming process that requires numerous trial-and-error experiments. To tackle this issue, we have developed a visual diagnosis tool, BOOSTVis, to help experts quickly analyze and diagnose the training process of tree boosting. In particular, we have designed a temporal confusion matrix visualization, and combined it with a t-SNE projection and a tree visualization. These visualization components work together to provide a comprehensive overview of a tree boosting model, and enable an effective diagnosis of an unsatisfactory training process. Two case studies that were conducted on the Otto Group Product Classification Challenge dataset demonstrate that BOOSTVis can provide informative feedback and guidance to improve understanding and diagnosis of tree boosting algorithms.",
                "AuthorNamesDeduped": "Shixia Liu;Jiannan Xiao;Junlin Liu;Xiting Wang;Jing Wu 0004;Jun Zhu 0001",
                "AuthorNames": "Shixia Liu;Jiannan Xiao;Junlin Liu;Xiting Wang;Jing Wu;Jun Zhu",
                "AuthorAffiliation": "Tsinghua University and National Engineering Lab for Big Data Software;Tsinghua University and National Engineering Lab for Big Data Software;Tsinghua University and National Engineering Lab for Big Data Software;Microsoft Research;Cardiff University;Tsinghua University and National Engineering Lab for Big Data Software",
                "InternalReferences": "0.1109/tvcg.2014.2346660;10.1109/vast.2010.5652443;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/visual.2000.885740;10.1109/visual.2005.1532820;10.1109/vast.2011.6102453",
                "AuthorKeywords": "tree boosting,model analysis,temporal confusion matrix,tree visualization",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 5848,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 850,
                "i": [
                    850
                ]
            }
        },
        {
            "name": "Shouxing Xiang",
            "value": 105,
            "numPapers": 33,
            "cluster": "1",
            "visible": 1,
            "index": 774,
            "x": -175.10346874123445,
            "y": -216.30713172428582,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Interactive Correction of Mislabeled Training Data",
                "DOI": "10.1109/vast47406.2019.8986943",
                "Link": "http://dx.doi.org/10.1109/VAST47406.2019.8986943",
                "FirstPage": 57,
                "LastPage": 68,
                "PaperType": "C",
                "Abstract": "In this paper, we develop a visual analysis method for interactively improving the quality of labeled data, which is essential to the success of supervised and semi-supervised learning. The quality improvement is achieved through the use of user-selected trusted items. We employ a bi-level optimization model to accurately match the labels of the trusted items and to minimize the training loss. Based on this model, a scalable data correction algorithm is developed to handle tens of thousands of labeled data efficiently. The selection of the trusted items is facilitated by an incremental tSNE with improved computational efficiency and layout stability to ensure a smooth transition between different levels. We evaluated our method on real-world datasets through quantitative evaluation and case studies, and the results were generally favorable.",
                "AuthorNamesDeduped": "Shouxing Xiang;Xi Ye;Jiazhi Xia;Jing Wu 0004;Yang Chen;Shixia Liu",
                "AuthorNames": "Shouxing Xiang;Xi Ye;Jiazhi Xia;Jing Wu;Yang Chen;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University;School of Software, BNRist, Tsinghua University;School of Computer Science and Engineering, Central South University;School of Computer Science and Informatics, Cardiff University;School of Software, BNRist, Tsinghua University;School of Software, BNRist, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2016.2598592;10.1109/tvcg.2017.2744818;10.1109/tvcg.2017.2744419;10.1109/tvcg.2014.2346594;10.1109/vast.2012.6400492;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2014.2346574;10.1109/tvcg.2017.2744685;10.1109/tvcg.2018.2865026",
                "AuthorKeywords": "Labeled data debugging,trusted item,tSNE",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 604,
                "i": [
                    604
                ]
            }
        },
        {
            "name": "Yuan Tian",
            "value": 8,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 775,
            "x": 275.4068409057329,
            "y": 41.244053902645625,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants",
                "DOI": "10.1109/tvcg.2022.3209430",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209430",
                "FirstPage": 1091,
                "LastPage": 1101,
                "PaperType": "J",
                "Abstract": "Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside the power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive sensors. Existing manual and data-driven approaches cannot well support the analysis of control strategies because these approaches are time-consuming and do not scale with the complexity of the power plant systems. Three challenges were identified: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The effectiveness of the proposed system is evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.",
                "AuthorNamesDeduped": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorNames": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2021.3114792;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114875;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2017.2745105;10.1109/tvcg.2014.2346454;10.1109/tvcg.2015.2467622;10.1109/tvcg.2018.2864886;10.1109/tvcg.2009.200;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114878;10.1109/tvcg.2009.117;10.1109/vast.2009.5332595;10.1109/tvcg.2016.2598664;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Power plant visual analytics,energy data visualization,spatiotemporal visualization,smart factory",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 208,
                "i": [
                    208
                ]
            }
        },
        {
            "name": "Haoran Xu",
            "value": 8,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 776,
            "x": -231.08523859105085,
            "y": 155.72287084855935,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants",
                "DOI": "10.1109/tvcg.2022.3209430",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209430",
                "FirstPage": 1091,
                "LastPage": 1101,
                "PaperType": "J",
                "Abstract": "Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside the power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive sensors. Existing manual and data-driven approaches cannot well support the analysis of control strategies because these approaches are time-consuming and do not scale with the complexity of the power plant systems. Three challenges were identified: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The effectiveness of the proposed system is evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.",
                "AuthorNamesDeduped": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorNames": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2021.3114792;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114875;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2017.2745105;10.1109/tvcg.2014.2346454;10.1109/tvcg.2015.2467622;10.1109/tvcg.2018.2864886;10.1109/tvcg.2009.200;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114878;10.1109/tvcg.2009.117;10.1109/vast.2009.5332595;10.1109/tvcg.2016.2598664;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Power plant visual analytics,energy data visualization,spatiotemporal visualization,smart factory",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 208,
                "i": [
                    208
                ]
            }
        },
        {
            "name": "Xiangyu Zhu",
            "value": 8,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 777,
            "x": 65.2477487352686,
            "y": -271.09542837344054,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants",
                "DOI": "10.1109/tvcg.2022.3209430",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209430",
                "FirstPage": 1091,
                "LastPage": 1101,
                "PaperType": "J",
                "Abstract": "Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside the power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive sensors. Existing manual and data-driven approaches cannot well support the analysis of control strategies because these approaches are time-consuming and do not scale with the complexity of the power plant systems. Three challenges were identified: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The effectiveness of the proposed system is evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.",
                "AuthorNamesDeduped": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorNames": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2021.3114792;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114875;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2017.2745105;10.1109/tvcg.2014.2346454;10.1109/tvcg.2015.2467622;10.1109/tvcg.2018.2864886;10.1109/tvcg.2009.200;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114878;10.1109/tvcg.2009.117;10.1109/vast.2009.5332595;10.1109/tvcg.2016.2598664;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Power plant visual analytics,energy data visualization,spatiotemporal visualization,smart factory",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 208,
                "i": [
                    208
                ]
            }
        },
        {
            "name": "Honglei Yin",
            "value": 8,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 778,
            "x": 135.09746708615154,
            "y": 244.1283973381757,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants",
                "DOI": "10.1109/tvcg.2022.3209430",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209430",
                "FirstPage": 1091,
                "LastPage": 1101,
                "PaperType": "J",
                "Abstract": "Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside the power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive sensors. Existing manual and data-driven approaches cannot well support the analysis of control strategies because these approaches are time-consuming and do not scale with the complexity of the power plant systems. Three challenges were identified: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The effectiveness of the proposed system is evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.",
                "AuthorNamesDeduped": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorNames": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2021.3114792;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114875;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2017.2745105;10.1109/tvcg.2014.2346454;10.1109/tvcg.2015.2467622;10.1109/tvcg.2018.2864886;10.1109/tvcg.2009.200;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114878;10.1109/tvcg.2009.117;10.1109/vast.2009.5332595;10.1109/tvcg.2016.2598664;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Power plant visual analytics,energy data visualization,spatiotemporal visualization,smart factory",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 208,
                "i": [
                    208
                ]
            }
        },
        {
            "name": "Xianyuan Zhan",
            "value": 8,
            "numPapers": 18,
            "cluster": "3",
            "visible": 1,
            "index": 779,
            "x": -264.6928689315475,
            "y": -88.81264063626658,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants",
                "DOI": "10.1109/tvcg.2022.3209430",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209430",
                "FirstPage": 1091,
                "LastPage": 1101,
                "PaperType": "J",
                "Abstract": "Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside the power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive sensors. Existing manual and data-driven approaches cannot well support the analysis of control strategies because these approaches are time-consuming and do not scale with the complexity of the power plant systems. Three challenges were identified: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The effectiveness of the proposed system is evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.",
                "AuthorNamesDeduped": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorNames": "Shuhan Liu;Di Weng;Yuan Tian;Zikun Deng;Haoran Xu;Xiangyu Zhu;Honglei Yin;Xianyuan Zhan;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Microsoft Research Asia, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;JD iCity, JD Technology, Beijing, China;Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2015.2467851;10.1109/tvcg.2021.3114792;10.1109/tvcg.2017.2745083;10.1109/tvcg.2021.3114875;10.1109/vast.2006.261421;10.1109/tvcg.2013.173;10.1109/tvcg.2017.2745105;10.1109/tvcg.2014.2346454;10.1109/tvcg.2015.2467622;10.1109/tvcg.2018.2864886;10.1109/tvcg.2009.200;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114878;10.1109/tvcg.2009.117;10.1109/vast.2009.5332595;10.1109/tvcg.2016.2598664;10.1109/tvcg.2021.3114877;10.1109/tvcg.2022.3209360",
                "AuthorKeywords": "Power plant visual analytics,energy data visualization,spatiotemporal visualization,smart factory",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 887,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 208,
                "i": [
                    208
                ]
            }
        },
        {
            "name": "Jerry Alan Fails",
            "value": 69,
            "numPapers": 0,
            "cluster": "3",
            "visible": 1,
            "index": 780,
            "x": 255.33198247854952,
            "y": -113.38244451225101,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "A Visual Interface for Multivariate Temporal Data: Finding Patterns of Events across Multiple Histories",
                "DOI": "10.1109/vast.2006.261421",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261421",
                "FirstPage": 167,
                "LastPage": 174,
                "PaperType": "C",
                "Abstract": "Finding patterns of events over time is important in searching patient histories, Web logs, news stories, and criminal activities. This paper presents PatternFinder, an integrated interface for query and result-set visualization for search and discovery of temporal patterns within multivariate and categorical data sets. We define temporal patterns as sequences of events with inter-event time spans. PatternFinder allows users to specify the attributes of events and time spans to produce powerful pattern queries that are difficult to express with other formalisms. We characterize the range of queries PatternFinder supports as users vary the specificity at which events and time spans are defined. Pattern Finder's query capabilities together with coupled ball-and-chain and tabular visualizations enable users to effectively query, explore and analyze event patterns both within and across data entities (e.g. patient histories, terrorist groups, Web logs, etc.)",
                "AuthorNamesDeduped": "Jerry Alan Fails;Amy K. Karlson;Layla Shahamat;Ben Shneiderman",
                "AuthorNames": "Jerry Alan Fails;Amy Karlson;Layla Shahamat;Ben Shneiderman",
                "AuthorAffiliation": "Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland",
                "InternalReferences": "0.1109/infvis.2001.963273",
                "AuthorKeywords": "Temporal query, information visualization, user interface",
                "AminerCitationCount": 168,
                "CitationCountCrossRef": 82,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1142,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2240,
                "i": [
                    2240
                ]
            }
        },
        {
            "name": "Amy K. Karlson",
            "value": 132,
            "numPapers": 3,
            "cluster": "3",
            "visible": 1,
            "index": 781,
            "x": -111.75664090755775,
            "y": 256.2429573921188,
            "vy": 0,
            "vx": 0,
            "r": 1.151986183074266,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "A Visual Interface for Multivariate Temporal Data: Finding Patterns of Events across Multiple Histories",
                "DOI": "10.1109/vast.2006.261421",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261421",
                "FirstPage": 167,
                "LastPage": 174,
                "PaperType": "C",
                "Abstract": "Finding patterns of events over time is important in searching patient histories, Web logs, news stories, and criminal activities. This paper presents PatternFinder, an integrated interface for query and result-set visualization for search and discovery of temporal patterns within multivariate and categorical data sets. We define temporal patterns as sequences of events with inter-event time spans. PatternFinder allows users to specify the attributes of events and time spans to produce powerful pattern queries that are difficult to express with other formalisms. We characterize the range of queries PatternFinder supports as users vary the specificity at which events and time spans are defined. Pattern Finder's query capabilities together with coupled ball-and-chain and tabular visualizations enable users to effectively query, explore and analyze event patterns both within and across data entities (e.g. patient histories, terrorist groups, Web logs, etc.)",
                "AuthorNamesDeduped": "Jerry Alan Fails;Amy K. Karlson;Layla Shahamat;Ben Shneiderman",
                "AuthorNames": "Jerry Alan Fails;Amy Karlson;Layla Shahamat;Ben Shneiderman",
                "AuthorAffiliation": "Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland",
                "InternalReferences": "0.1109/infvis.2001.963273",
                "AuthorKeywords": "Temporal query, information visualization, user interface",
                "AminerCitationCount": 168,
                "CitationCountCrossRef": 82,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1142,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2240,
                "i": [
                    2240
                ]
            }
        },
        {
            "name": "Layla Shahamat",
            "value": 69,
            "numPapers": 0,
            "cluster": "3",
            "visible": 1,
            "index": 782,
            "x": -90.74176226058643,
            "y": -264.6052391428469,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "A Visual Interface for Multivariate Temporal Data: Finding Patterns of Events across Multiple Histories",
                "DOI": "10.1109/vast.2006.261421",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261421",
                "FirstPage": 167,
                "LastPage": 174,
                "PaperType": "C",
                "Abstract": "Finding patterns of events over time is important in searching patient histories, Web logs, news stories, and criminal activities. This paper presents PatternFinder, an integrated interface for query and result-set visualization for search and discovery of temporal patterns within multivariate and categorical data sets. We define temporal patterns as sequences of events with inter-event time spans. PatternFinder allows users to specify the attributes of events and time spans to produce powerful pattern queries that are difficult to express with other formalisms. We characterize the range of queries PatternFinder supports as users vary the specificity at which events and time spans are defined. Pattern Finder's query capabilities together with coupled ball-and-chain and tabular visualizations enable users to effectively query, explore and analyze event patterns both within and across data entities (e.g. patient histories, terrorist groups, Web logs, etc.)",
                "AuthorNamesDeduped": "Jerry Alan Fails;Amy K. Karlson;Layla Shahamat;Ben Shneiderman",
                "AuthorNames": "Jerry Alan Fails;Amy Karlson;Layla Shahamat;Ben Shneiderman",
                "AuthorAffiliation": "Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland, USA;Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, Maryland",
                "InternalReferences": "0.1109/infvis.2001.963273",
                "AuthorKeywords": "Temporal query, information visualization, user interface",
                "AminerCitationCount": 168,
                "CitationCountCrossRef": 82,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1142,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2240,
                "i": [
                    2240
                ]
            }
        },
        {
            "name": "Krist Wongsuphasawat",
            "value": 218,
            "numPapers": 34,
            "cluster": "1",
            "visible": 1,
            "index": 783,
            "x": 245.8053360166094,
            "y": 133.90196707204015,
            "vy": 0,
            "vx": 0,
            "r": 1.251007484168106,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "Finding comparable temporal categorical records: A similarity measure with an interactive visualization",
                "DOI": "10.1109/vast.2009.5332595",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5332595",
                "FirstPage": 27,
                "LastPage": 34,
                "PaperType": "C",
                "Abstract": "An increasing number of temporal categorical databases are being collected: Electronic Health Records in healthcare organizations, traffic incident logs in transportation systems, or student records in universities. Finding similar records within these large databases requires effective similarity measures that capture the searcher's intent. Many similarity measures exist for numerical time series, but temporal categorical records are different. We propose a temporal categorical similarity measure, the M&amp;M (Match &amp; Mismatch) measure, which is based on the concept of aligning records by sentinel events, then matching events between the target and the compared records. The M&amp;M measure combines the time differences between pairs of events and the number of mismatches. To accom-modate customization of parameters in the M&amp;M measure and results interpretation, we implemented Similan, an interactive search and visualization tool for temporal categorical records. A usability study with 8 participants demonstrated that Similan was easy to learn and enabled them to find similar records, but users had difficulty understanding the M&amp;M measure. The usability study feedback, led to an improved version with a continuous timeline, which was tested in a pilot study with 5 participants.",
                "AuthorNamesDeduped": "Krist Wongsuphasawat;Ben Shneiderman",
                "AuthorNames": "Krist Wongsuphasawat;Ben Shneiderman",
                "AuthorAffiliation": "Department of Computer Science & Human-Computer Interaction Lab, University of Maryland, College Park, MD, USA;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/vast.2006.261421",
                "AuthorKeywords": "Similan, M&M Measure, Similarity Search, Temporal Categorical Records",
                "AminerCitationCount": 115,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1020,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1854,
                "i": [
                    1854
                ]
            }
        },
        {
            "name": "Aidan Slingsby",
            "value": 418,
            "numPapers": 81,
            "cluster": "5",
            "visible": 1,
            "index": 784,
            "x": -271.8720164953213,
            "y": 67.34691267435883,
            "vy": 0,
            "vx": 0,
            "r": 1.4812895797351757,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Small Multiples with Gaps",
                "DOI": "10.1109/tvcg.2016.2598542",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598542",
                "FirstPage": 381,
                "LastPage": 390,
                "PaperType": "J",
                "Abstract": "Small multiples enable comparison by providing different views of a single data set in a dense and aligned manner. A common frame defines each view, which varies based upon values of a conditioning variable. An increasingly popular use of this technique is to project two-dimensional locations into a gridded space (e.g. grid maps), using the underlying distribution both as the conditioning variable and to determine the grid layout. Using whitespace in this layout has the potential to carry information, especially in a geographic context. Yet, the effects of doing so on the spatial properties of the original units are not understood. We explore the design space offered by such small multiples with gaps. We do so by constructing a comprehensive suite of metrics that capture properties of the layout used to arrange the small multiples for comparison (e.g. compactness and alignment) and the preservation of the original data (e.g. distance, topology and shape). We study these metrics in geographic data sets with varying properties and numbers of gaps. We use simulated annealing to optimize for each metric and measure the effects on the others. To explore these effects systematically, we take a new approach, developing a system to visualize this design space using a set of interactive matrices. We find that adding small amounts of whitespace to small multiple arrays improves some of the characteristics of 2D layouts, such as shape, distance and direction. This comes at the cost of other metrics, such as the retention of topology. Effects vary according to the input maps, with degree of variation in size of input regions found to be a factor. Optima exist for particular metrics in many cases, but at different amounts of whitespace for different maps. We suggest multiple metrics be used in optimized layouts, finding topology to be a primary factor in existing manually-crafted solutions, followed by a trade-off between shape and displacement. But the rich range of possible optimized layouts leads us to challenge single-solution thinking; we suggest to consider alternative optimized layouts for small multiples with gaps. Key to our work is the systematic, quantified and visual approach to exploring design spaces when facing a trade-off between many competing criteria-an approach likely to be of value to the analysis of other design spaces.",
                "AuthorNamesDeduped": "Wouter Meulemans;Jason Dykes;Aidan Slingsby;Cagatay Turkay;Jo Wood",
                "AuthorNames": "Wouter Meulemans;Jason Dykes;Aidan Slingsby;Cagatay Turkay;Jo Wood",
                "AuthorAffiliation": "giCentre, City University, London;giCentre, City University, London;giCentre, City University, London;giCentre, City University, London;giCentre, City University, London",
                "InternalReferences": "0.1109/tvcg.2014.2346276;10.1109/tvcg.2011.174;10.1109/tvcg.2016.2598862;10.1109/tvcg.2008.165",
                "AuthorKeywords": "Geographic visualization;small multiples;whitespace;design space;metrics;optimization",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1021,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 913,
                "i": [
                    913
                ]
            }
        },
        {
            "name": "Ruizhen Hu",
            "value": 30,
            "numPapers": 13,
            "cluster": "8",
            "visible": 1,
            "index": 785,
            "x": 155.0765213414626,
            "y": -233.45507603954744,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Self-Supervised Color-Concept Association via Image Colorization",
                "DOI": "10.1109/tvcg.2022.3209481",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209481",
                "FirstPage": 247,
                "LastPage": 256,
                "PaperType": "J",
                "Abstract": "The interpretation of colors in visualizations is facilitated when the assignments between colors and concepts in the visualizations match human's expectations, implying that the colors can be interpreted in a semantic manner. However, manually creating a dataset of suitable associations between colors and concepts for use in visualizations is costly, as such associations would have to be collected from humans for a large variety of concepts. To address the challenge of collecting this data, we introduce a method to extract color-concept associations automatically from a set of concept images. While the state-of-the-art method extracts associations from data with supervised learning, we developed a self-supervised method based on colorization that does not require the preparation of ground truth color-concept associations. Our key insight is that a set of images of a concept should be sufficient for learning color-concept associations, since humans also learn to associate colors to concepts mainly from past visual input. Thus, we propose to use an automatic colorization method to extract statistical models of the color-concept associations that appear in concept images. Specifically, we take a colorization model pre-trained on ImageNet and fine-tune it on the set of images associated with a given concept, to predict pixel-wise probability distributions in Lab color space for the images. Then, we convert the predicted probability distributions into color ratings for a given color library and aggregate them for all the images of a concept to obtain the final color-concept associations. We evaluate our method using four different evaluation metrics and via a user study. Experiments show that, although the state-of-the-art method based on supervised learning with user-provided ratings is more effective at capturing relative associations, our self-supervised method obtains overall better results according to metrics like Earth Mover's Distance (EMD) and Entropy Difference (ED), which are closer to human perception of color distributions.",
                "AuthorNamesDeduped": "Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang 0004",
                "AuthorNames": "Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang",
                "AuthorAffiliation": "Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Carleton University, School of Computer Science, Canada;Shenzhen University, Visual Computing Research Center, China",
                "InternalReferences": "0.1109/tvcg.2016.2598604;10.1109/tvcg.2021.3114780;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2020.3030434;10.1109/tvcg.2015.2467471",
                "AuthorKeywords": "Color-concept association,colorization,EMD",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 619,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 210,
                "i": [
                    210
                ]
            }
        },
        {
            "name": "Ziqi Ye",
            "value": 0,
            "numPapers": 5,
            "cluster": "8",
            "visible": 1,
            "index": 786,
            "x": 43.37562131667922,
            "y": 277.0713905754112,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Self-Supervised Color-Concept Association via Image Colorization",
                "DOI": "10.1109/tvcg.2022.3209481",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209481",
                "FirstPage": 247,
                "LastPage": 256,
                "PaperType": "J",
                "Abstract": "The interpretation of colors in visualizations is facilitated when the assignments between colors and concepts in the visualizations match human's expectations, implying that the colors can be interpreted in a semantic manner. However, manually creating a dataset of suitable associations between colors and concepts for use in visualizations is costly, as such associations would have to be collected from humans for a large variety of concepts. To address the challenge of collecting this data, we introduce a method to extract color-concept associations automatically from a set of concept images. While the state-of-the-art method extracts associations from data with supervised learning, we developed a self-supervised method based on colorization that does not require the preparation of ground truth color-concept associations. Our key insight is that a set of images of a concept should be sufficient for learning color-concept associations, since humans also learn to associate colors to concepts mainly from past visual input. Thus, we propose to use an automatic colorization method to extract statistical models of the color-concept associations that appear in concept images. Specifically, we take a colorization model pre-trained on ImageNet and fine-tune it on the set of images associated with a given concept, to predict pixel-wise probability distributions in Lab color space for the images. Then, we convert the predicted probability distributions into color ratings for a given color library and aggregate them for all the images of a concept to obtain the final color-concept associations. We evaluate our method using four different evaluation metrics and via a user study. Experiments show that, although the state-of-the-art method based on supervised learning with user-provided ratings is more effective at capturing relative associations, our self-supervised method obtains overall better results according to metrics like Earth Mover's Distance (EMD) and Entropy Difference (ED), which are closer to human perception of color distributions.",
                "AuthorNamesDeduped": "Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang 0004",
                "AuthorNames": "Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang",
                "AuthorAffiliation": "Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Carleton University, School of Computer Science, Canada;Shenzhen University, Visual Computing Research Center, China",
                "InternalReferences": "0.1109/tvcg.2016.2598604;10.1109/tvcg.2021.3114780;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2020.3030434;10.1109/tvcg.2015.2467471",
                "AuthorKeywords": "Color-concept association,colorization,EMD",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 619,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 210,
                "i": [
                    210
                ]
            }
        },
        {
            "name": "Bin Chen",
            "value": 0,
            "numPapers": 28,
            "cluster": "8",
            "visible": 1,
            "index": 787,
            "x": -219.28213935966753,
            "y": -175.11522880048827,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Reducing Ambiguities in Line-Based Density Plots by Image-Space Colorization",
                "DOI": "10.1109/tvcg.2023.3327149",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327149",
                "FirstPage": 825,
                "LastPage": 835,
                "PaperType": "J",
                "Abstract": "Line-based density plots are used to reduce visual clutter in line charts with a multitude of individual lines. However, these traditional density plots are often perceived ambiguously, which obstructs the user's identification of underlying trends in complex datasets. Thus, we propose a novel image space coloring method for line-based density plots that enhances their interpretability. Our method employs color not only to visually communicate data density but also to highlight similar regions in the plot, allowing users to identify and distinguish trends easily. We achieve this by performing hierarchical clustering based on the lines passing through each region and mapping the identified clusters to the hue circle using circular MDS. Additionally, we propose a heuristic approach to assign each line to the most probable cluster, enabling users to analyze density and individual lines. We motivate our method by conducting a small-scale user study, demonstrating the effectiveness of our method using synthetic and real-world datasets, and providing an interactive online tool for generating colored line-based density plots.",
                "AuthorNamesDeduped": "Yumeng Xue;Patrick Paetzold;Rebecca Kehlbeck;Bin Chen;Kin Chung Kwan;Yunhai Wang;Oliver Deussen",
                "AuthorNames": "Yumeng Xue;Patrick Paetzold;Rebecca Kehlbeck;Bin Chen;Kin Chung Kwan;Yunhai Wang;Oliver Deussen",
                "AuthorAffiliation": "University of Konstanz, Germany and Shandong University, China;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;California State University Sacramento, United States;Shandong University, China;University of Konstanz, Germany",
                "InternalReferences": "10.1109/infvis.2004.68;10.1109/visual.1995.480803;10.1109/tvcg.2007.70595;10.1109/tvcg.2010.176;10.1109/tvcg.2015.2467204;10.1109/visual.1996.568118;10.1109/tvcg.2006.147;10.1109/tvcg.2021.3114783;10.1109/tvcg.2009.145;10.1109/tvcg.2010.162;10.1109/visual.2002.1183788;10.1109/tvcg.2014.2346325;10.1109/tvcg.2020.3030406;10.1109/tvcg.2014.2346455;10.1109/tvcg.2006.170;10.1109/visual.1990.146383;10.1109/visual.2001.964510;10.1109/tvcg.2011.181;10.1109/tvcg.2014.2346277;10.1109/tvcg.2021.3114795;10.1109/tvcg.2013.143;10.1109/tvcg.2021.3114865;10.1109/tvcg.2012.238",
                "AuthorKeywords": "Trajectory data,times series,density-based visualization,clustering,coloring",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 320,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 75,
                "i": [
                    75
                ]
            }
        },
        {
            "name": "Oliver van Kaick",
            "value": 30,
            "numPapers": 13,
            "cluster": "8",
            "visible": 1,
            "index": 788,
            "x": 280.1581714897433,
            "y": -19.010495720090386,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Self-Supervised Color-Concept Association via Image Colorization",
                "DOI": "10.1109/tvcg.2022.3209481",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209481",
                "FirstPage": 247,
                "LastPage": 256,
                "PaperType": "J",
                "Abstract": "The interpretation of colors in visualizations is facilitated when the assignments between colors and concepts in the visualizations match human's expectations, implying that the colors can be interpreted in a semantic manner. However, manually creating a dataset of suitable associations between colors and concepts for use in visualizations is costly, as such associations would have to be collected from humans for a large variety of concepts. To address the challenge of collecting this data, we introduce a method to extract color-concept associations automatically from a set of concept images. While the state-of-the-art method extracts associations from data with supervised learning, we developed a self-supervised method based on colorization that does not require the preparation of ground truth color-concept associations. Our key insight is that a set of images of a concept should be sufficient for learning color-concept associations, since humans also learn to associate colors to concepts mainly from past visual input. Thus, we propose to use an automatic colorization method to extract statistical models of the color-concept associations that appear in concept images. Specifically, we take a colorization model pre-trained on ImageNet and fine-tune it on the set of images associated with a given concept, to predict pixel-wise probability distributions in Lab color space for the images. Then, we convert the predicted probability distributions into color ratings for a given color library and aggregate them for all the images of a concept to obtain the final color-concept associations. We evaluate our method using four different evaluation metrics and via a user study. Experiments show that, although the state-of-the-art method based on supervised learning with user-provided ratings is more effective at capturing relative associations, our self-supervised method obtains overall better results according to metrics like Earth Mover's Distance (EMD) and Entropy Difference (ED), which are closer to human perception of color distributions.",
                "AuthorNamesDeduped": "Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang 0004",
                "AuthorNames": "Ruizhen Hu;Ziqi Ye;Bin Chen;Oliver van Kaick;Hui Huang",
                "AuthorAffiliation": "Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Shenzhen University, Visual Computing Research Center, China;Carleton University, School of Computer Science, Canada;Shenzhen University, Visual Computing Research Center, China",
                "InternalReferences": "0.1109/tvcg.2016.2598604;10.1109/tvcg.2021.3114780;10.1109/tvcg.2019.2934536;10.1109/tvcg.2018.2865147;10.1109/tvcg.2020.3030434;10.1109/tvcg.2015.2467471",
                "AuthorKeywords": "Color-concept association,colorization,EMD",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 619,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 210,
                "i": [
                    210
                ]
            }
        },
        {
            "name": "Hui Huang 0004",
            "value": 57,
            "numPapers": 29,
            "cluster": "8",
            "visible": 1,
            "index": 789,
            "x": -193.86132489646909,
            "y": 203.39072424618004,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Winglets: Visualizing Association with Uncertainty in Multi-class Scatterplots",
                "DOI": "10.1109/tvcg.2019.2934811",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934811",
                "FirstPage": 770,
                "LastPage": 779,
                "PaperType": "J",
                "Abstract": "This work proposes Winglets, an enhancement to the classic scatterplot to better perceptually pronounce multiple classes by improving the perception of association and uncertainty of points to their related cluster. Designed as a pair of dual-sided strokes belonging to a data point, Winglets leverage the Gestalt principle of Closure to shape the perception of the form of the clusters, rather than use an explicit divisive encoding. Through a subtle design of two dominant attributes, length and orientation, Winglets enable viewers to perform a mental completion of the clusters. A controlled user study was conducted to examine the efficiency of Winglets in perceiving the cluster association and the uncertainty of certain points. The results show Winglets form a more prominent association of points into clusters and improve the perception of associating uncertainty.",
                "AuthorNamesDeduped": "Min Lu 0002;Shuaiqi Wang;Joel Lanir;Noa Fish;Yang Yue 0001;Daniel Cohen-Or;Hui Huang 0004",
                "AuthorNames": "Min Lu;Shuaiqi Wang;Joel Lanir;Noa Fish;Yang Yue;Daniel Cohen-Or;Hui Huang",
                "AuthorAffiliation": "Shenzhen University;Shenzhen University;University of Haifa;Tel Aviv Univeristy;Shenzhen University;Shenzhen University;Shenzhen University",
                "InternalReferences": "0.1109/vast.2010.5652460;10.1109/tvcg.2014.2346594;10.1109/tvcg.2009.122;10.1109/tvcg.2013.183;10.1109/tvcg.2018.2865141;10.1109/tvcg.2018.2865141;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.153;10.1109/vast.2009.5332628;10.1109/tvcg.2018.2864912",
                "AuthorKeywords": "Scatterplot,Gestalt laws,Association,Uncertainty",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 559,
                "i": [
                    559
                ]
            }
        },
        {
            "name": "Wiebke Köpp",
            "value": 25,
            "numPapers": 21,
            "cluster": "11",
            "visible": 1,
            "index": 790,
            "x": 5.562366842540587,
            "y": -281.10329075823535,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Temporal Merge Tree Maps: A Topology-Based Static Visualization for Temporal Scalar Data",
                "DOI": "10.1109/tvcg.2022.3209387",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209387",
                "FirstPage": 1157,
                "LastPage": 1167,
                "PaperType": "J",
                "Abstract": "Creating a static visualization for a time-dependent scalar field is a non-trivial task, yet very insightful as it shows the dynamics in one picture. Existing approaches are based on a linearization of the domain or on feature tracking. Domain linearizations use space-filling curves to place all sample points into a 1D domain, thereby breaking up individual features. Feature tracking methods explicitly respect feature continuity in space and time, but generally neglect the data context in which those features live. We present a feature-based linearization of the spatial domain that keeps features together and preserves their context by involving all data samples. We use augmented merge trees to linearize the domain and show that our linearized function has the same merge tree as the original data. A greedy optimization scheme aligns the trees over time providing temporal continuity. This leads to a static 2D visualization with one temporal dimension, and all spatial dimensions compressed into one. We compare our method against other domain linearizations as well as feature-tracking approaches, and apply it to several real-world data sets.",
                "AuthorNamesDeduped": "Wiebke Köpp;Tino Weinkauf",
                "AuthorNames": "Wiebke Köpp;Tino Weinkauf",
                "AuthorAffiliation": "KTH Royal Institute of Technology, Stockholm, Sweden;KTH Royal Institute of Technology, Stockholm, Sweden",
                "InternalReferences": "0.1109/tvcg.2014.2346448;10.1109/tvcg.2019.2934368;10.1109/visual.1999.809896;10.1109/tvcg.2021.3114839;10.1109/visual.2005.1532851;10.1109/tvcg.2008.163;10.1109/tvcg.2007.70601;10.1109/tvcg.2018.2864510;10.1109/tvcg.2020.3030473;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2865265",
                "AuthorKeywords": "Scalar field visualization,augmented merge tree,pixel-based visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 586,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 211,
                "i": [
                    211
                ]
            }
        },
        {
            "name": "Prateek Mantri",
            "value": 4,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 791,
            "x": 185.8984998116332,
            "y": 211.16758218955914,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "How Do Viewers Synthesize Conflicting Information from Data Visualizations?",
                "DOI": "10.1109/tvcg.2022.3209467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209467",
                "FirstPage": 1005,
                "LastPage": 1015,
                "PaperType": "J",
                "Abstract": "Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Scientists and journalists often communicate these incremental findings to lay people through visualizations and text (e.g., the positive and negative effects of caffeine intake). Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. To address this knowledge gap, we conducted a series of four experiments ($\\mathrm{N}=1166$) in which participants synthesized empirical evidence from a pair of line charts presented sequentially. In Experiment 1, we administered a baseline condition with charts depicting no specific context where participants held no strong belief. To test for the generalizability, we introduced real-world scenarios to our visualizations in Experiment 2 and added accompanying text descriptions similar to online news articles or blog posts in Experiment 3. In all three experiments, we varied the relative direction and magnitude of line slopes within the chart pairs. We found that participants tended to weigh the positive slope more when the two charts depicted relationships in the opposite direction (e.g., one positive slope and one negative slope). Participants tended to weigh the less steep slope more when the two charts depicted relationships in the same direction (e.g., both positive). Through these experiments, we characterize participants' synthesis behaviors depending on the relationship between the information they viewed, contribute to theories describing underlying cognitive mechanisms in information synthesis, and describe design implications for data storytelling.",
                "AuthorNamesDeduped": "Prateek Mantri;Hariharan Subramonyam;Audrey L. Michal;Cindy Xiong",
                "AuthorNames": "Prateek Mantri;Hariharan Subramonyam;Audrey L. Michal;Cindy Xiong",
                "AuthorAffiliation": "University of Massachusetts Amherst, USA;Stanford University, USA;University of Michigan Ann Arbor, USA;University of Massachusetts Amherst, USA",
                "InternalReferences": "0.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2020.3030422;10.1109/tvcg.2020.3029412;10.1109/tvcg.2020.3030345;10.1109/tvcg.2021.3114684;10.1109/tvcg.2014.2346419;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865231;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209405",
                "AuthorKeywords": "Information theory,Information synthesis,Primacy effect,Attitude change,Conflicting information",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 97,
                "DownloadsXplore": 571,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 212,
                "i": [
                    212
                ]
            }
        },
        {
            "name": "Audrey L. Michal",
            "value": 4,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 792,
            "x": -279.8940655316418,
            "y": -30.15480194212867,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "How Do Viewers Synthesize Conflicting Information from Data Visualizations?",
                "DOI": "10.1109/tvcg.2022.3209467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209467",
                "FirstPage": 1005,
                "LastPage": 1015,
                "PaperType": "J",
                "Abstract": "Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Scientists and journalists often communicate these incremental findings to lay people through visualizations and text (e.g., the positive and negative effects of caffeine intake). Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. To address this knowledge gap, we conducted a series of four experiments ($\\mathrm{N}=1166$) in which participants synthesized empirical evidence from a pair of line charts presented sequentially. In Experiment 1, we administered a baseline condition with charts depicting no specific context where participants held no strong belief. To test for the generalizability, we introduced real-world scenarios to our visualizations in Experiment 2 and added accompanying text descriptions similar to online news articles or blog posts in Experiment 3. In all three experiments, we varied the relative direction and magnitude of line slopes within the chart pairs. We found that participants tended to weigh the positive slope more when the two charts depicted relationships in the opposite direction (e.g., one positive slope and one negative slope). Participants tended to weigh the less steep slope more when the two charts depicted relationships in the same direction (e.g., both positive). Through these experiments, we characterize participants' synthesis behaviors depending on the relationship between the information they viewed, contribute to theories describing underlying cognitive mechanisms in information synthesis, and describe design implications for data storytelling.",
                "AuthorNamesDeduped": "Prateek Mantri;Hariharan Subramonyam;Audrey L. Michal;Cindy Xiong",
                "AuthorNames": "Prateek Mantri;Hariharan Subramonyam;Audrey L. Michal;Cindy Xiong",
                "AuthorAffiliation": "University of Massachusetts Amherst, USA;Stanford University, USA;University of Michigan Ann Arbor, USA;University of Massachusetts Amherst, USA",
                "InternalReferences": "0.1109/tvcg.2012.197;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.234;10.1109/tvcg.2020.3030422;10.1109/tvcg.2020.3029412;10.1109/tvcg.2020.3030345;10.1109/tvcg.2021.3114684;10.1109/tvcg.2014.2346419;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865231;10.1109/tvcg.2019.2934400;10.1109/tvcg.2021.3114823;10.1109/tvcg.2019.2934399;10.1109/tvcg.2022.3209405",
                "AuthorKeywords": "Information theory,Information synthesis,Primacy effect,Attitude change,Conflicting information",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 97,
                "DownloadsXplore": 571,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 212,
                "i": [
                    212
                ]
            }
        },
        {
            "name": "Ameya Patil",
            "value": 0,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 793,
            "x": 226.89746676130474,
            "y": -166.93573487214357,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Studying Early Decision Making with Progressive Bar Charts",
                "DOI": "10.1109/tvcg.2022.3209426",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209426",
                "FirstPage": 407,
                "LastPage": 417,
                "PaperType": "J",
                "Abstract": "We conduct a user study to quantify and compare user performance for a value comparison task using four bar chart designs, where the bars show the mean values of data loaded progressively and updated every second (progressive bar charts). Progressive visualization divides different stages of the visualization pipeline—data loading, processing, and visualization—into iterative animated steps to limit the latency when loading large amounts of data. An animated visualization appearing quickly, unfolding, and getting more accurate with time, enables users to make early decisions. However, intermediate mean estimates are computed only on partial data and may not have time to converge to the true means, potentially misleading users and resulting in incorrect decisions. To address this issue, we propose two new designs visualizing the history of values in progressive bar charts, in addition to the use of confidence intervals. We comparatively study four progressive bar chart designs: with/without confidence intervals, and using near-history representation with/without confidence intervals, on three realistic data distributions. We evaluate user performance based on the percentage of correct answers (accuracy), response time, and user confidence. Our results show that, overall, users can make early and accurate decisions with 92% accuracy using only 18% of the data, regardless of the design. We find that our proposed bar chart design with only near-history is comparable to bar charts with only confidence intervals in performance, and the qualitative feedback we received indicates a preference for designs with history.",
                "AuthorNamesDeduped": "Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete",
                "AuthorNames": "Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete",
                "AuthorAffiliation": "University of Washington, Seattle, USA;Inria & Université Paris-Saclay, France;Rice University, USA;Carnegie Mellon University, USA;Inria & Université Paris-Saclay, France",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2011.175;10.1109/tvcg.2018.2864909;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346320;10.1109/tvcg.2018.2864889;10.1109/tvcg.2014.2346574",
                "AuthorKeywords": "Progressive visualization,Uncertainty,Bar charts,Confidence intervals",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 562,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 213,
                "i": [
                    213
                ]
            }
        },
        {
            "name": "Gaëlle Richer",
            "value": 0,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 794,
            "x": -54.578020048634315,
            "y": 276.53433733909225,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Studying Early Decision Making with Progressive Bar Charts",
                "DOI": "10.1109/tvcg.2022.3209426",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209426",
                "FirstPage": 407,
                "LastPage": 417,
                "PaperType": "J",
                "Abstract": "We conduct a user study to quantify and compare user performance for a value comparison task using four bar chart designs, where the bars show the mean values of data loaded progressively and updated every second (progressive bar charts). Progressive visualization divides different stages of the visualization pipeline—data loading, processing, and visualization—into iterative animated steps to limit the latency when loading large amounts of data. An animated visualization appearing quickly, unfolding, and getting more accurate with time, enables users to make early decisions. However, intermediate mean estimates are computed only on partial data and may not have time to converge to the true means, potentially misleading users and resulting in incorrect decisions. To address this issue, we propose two new designs visualizing the history of values in progressive bar charts, in addition to the use of confidence intervals. We comparatively study four progressive bar chart designs: with/without confidence intervals, and using near-history representation with/without confidence intervals, on three realistic data distributions. We evaluate user performance based on the percentage of correct answers (accuracy), response time, and user confidence. Our results show that, overall, users can make early and accurate decisions with 92% accuracy using only 18% of the data, regardless of the design. We find that our proposed bar chart design with only near-history is comparable to bar charts with only confidence intervals in performance, and the qualitative feedback we received indicates a preference for designs with history.",
                "AuthorNamesDeduped": "Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete",
                "AuthorNames": "Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete",
                "AuthorAffiliation": "University of Washington, Seattle, USA;Inria & Université Paris-Saclay, France;Rice University, USA;Carnegie Mellon University, USA;Inria & Université Paris-Saclay, France",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2011.175;10.1109/tvcg.2018.2864909;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346320;10.1109/tvcg.2018.2864889;10.1109/tvcg.2014.2346574",
                "AuthorKeywords": "Progressive visualization,Uncertainty,Bar charts,Confidence intervals",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 562,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 213,
                "i": [
                    213
                ]
            }
        },
        {
            "name": "Christopher Jermaine",
            "value": 0,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 795,
            "x": -146.64432764384276,
            "y": -240.92621519852358,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Studying Early Decision Making with Progressive Bar Charts",
                "DOI": "10.1109/tvcg.2022.3209426",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209426",
                "FirstPage": 407,
                "LastPage": 417,
                "PaperType": "J",
                "Abstract": "We conduct a user study to quantify and compare user performance for a value comparison task using four bar chart designs, where the bars show the mean values of data loaded progressively and updated every second (progressive bar charts). Progressive visualization divides different stages of the visualization pipeline—data loading, processing, and visualization—into iterative animated steps to limit the latency when loading large amounts of data. An animated visualization appearing quickly, unfolding, and getting more accurate with time, enables users to make early decisions. However, intermediate mean estimates are computed only on partial data and may not have time to converge to the true means, potentially misleading users and resulting in incorrect decisions. To address this issue, we propose two new designs visualizing the history of values in progressive bar charts, in addition to the use of confidence intervals. We comparatively study four progressive bar chart designs: with/without confidence intervals, and using near-history representation with/without confidence intervals, on three realistic data distributions. We evaluate user performance based on the percentage of correct answers (accuracy), response time, and user confidence. Our results show that, overall, users can make early and accurate decisions with 92% accuracy using only 18% of the data, regardless of the design. We find that our proposed bar chart design with only near-history is comparable to bar charts with only confidence intervals in performance, and the qualitative feedback we received indicates a preference for designs with history.",
                "AuthorNamesDeduped": "Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete",
                "AuthorNames": "Ameya Patil;Gaëlle Richer;Christopher Jermaine;Dominik Moritz;Jean-Daniel Fekete",
                "AuthorAffiliation": "University of Washington, Seattle, USA;Inria & Université Paris-Saclay, France;Rice University, USA;Carnegie Mellon University, USA;Inria & Université Paris-Saclay, France",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2021.3114803;10.1109/tvcg.2014.2346298;10.1109/tvcg.2019.2934287;10.1109/tvcg.2011.175;10.1109/tvcg.2018.2864909;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346320;10.1109/tvcg.2018.2864889;10.1109/tvcg.2014.2346574",
                "AuthorKeywords": "Progressive visualization,Uncertainty,Bar charts,Confidence intervals",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 562,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 213,
                "i": [
                    213
                ]
            }
        },
        {
            "name": "Marc Rautenhaus",
            "value": 88,
            "numPapers": 26,
            "cluster": "6",
            "visible": 1,
            "index": 796,
            "x": 271.04448398572634,
            "y": 78.64405699677069,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Analysis of the Temporal Evolution of Ensemble Forecast Sensitivities",
                "DOI": "10.1109/tvcg.2018.2864901",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864901",
                "FirstPage": 98,
                "LastPage": 108,
                "PaperType": "J",
                "Abstract": "Ensemble sensitivity analysis (ESA) has been established in the atmospheric sciences as a correlation-based approach to determine the sensitivity of a scalar forecast quantity computed by a numerical weather prediction model to changes in another model variable at a different model state. Its applications include determining the origin of forecast errors and placing targeted observations to improve future forecasts. We - a team of visualization scientists and meteorologists - present a visual analysis framework to improve upon current practice of ESA. We support the user in selecting regions to compute a meaningful target forecast quantity by embedding correlation-based grid-point clustering to obtain statistically coherent regions. The evolution of sensitivity features computed via ESA are then traced through time, by integrating a quantitative measure of feature matching into optical-flow-based feature assignment, and displayed by means of a swipe-path showing the geo-spatial evolution of the sensitivities. Visualization of the internal correlation structure of computed features guides the user towards those features robustly predicting a certain weather event. We demonstrate the use of our method by application to real-world 2D and 3D cases that occurred during the 2016 NAWDEX field campaign, showing the interactive generation of hypothesis chains to explore how atmospheric processes sensitive to each other are interrelated.",
                "AuthorNamesDeduped": "Alexander Kumpf;Marc Rautenhaus;Michael Riemer;Rüdiger Westermann",
                "AuthorNames": "Alexander Kumpf;Marc Rautenhaus;Michael Riemer;Rüdiger Westermann",
                "AuthorAffiliation": "Technische Universitat Munchen, Munchen, Bayern, DE;Technische Universitat Munchen, Munchen, Bayern, DE;Universitat Hamburg, Hamburg, Hamburg, DE;Technische Universitat Munchen, Munchen, Bayern, DE",
                "InternalReferences": "0.1109/tvcg.2013.131;10.1109/visual.2004.46;10.1109/tvcg.2017.2743989;10.1109/tvcg.2017.2745178;10.1109/tvcg.2006.165",
                "AuthorKeywords": "Correlation,clustering,tracking,ensemble visualization",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 875,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 758,
                "i": [
                    758
                ]
            }
        },
        {
            "name": "Gary K. L. Tam",
            "value": 51,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 797,
            "x": -253.14182365929614,
            "y": 125.17674350471749,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "An Analysis of Machine- and Human-Analytics in Classification",
                "DOI": "10.1109/tvcg.2016.2598829",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598829",
                "FirstPage": 71,
                "LastPage": 80,
                "PaperType": "J",
                "Abstract": "In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the “bag of features” approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.",
                "AuthorNamesDeduped": "Gary K. L. Tam;Vivek Kothari;Min Chen 0001",
                "AuthorNames": "Gary K. L. Tam;Vivek Kothari;Min Chen",
                "AuthorAffiliation": "Swansea University;University of Oxford;University of Oxford",
                "InternalReferences": "0.1109/vast.2010.5652467;10.1109/tvcg.2015.2467615;10.1109/vast.2012.6400492;10.1109/tvcg.2013.207;10.1109/tvcg.2015.2467552;10.1109/tvcg.2015.2467612;10.1109/vast.2010.5652398;10.1109/tvcg.2010.132;10.1109/tvcg.2014.2346660;10.1109/vast.2015.7347629;10.1109/vast.2011.6102453;10.1109/vast.2011.6102448",
                "AuthorKeywords": "information theory;Visual analytics;classification;decision tree;model;facial expression;visualization image",
                "AminerCitationCount": 90,
                "CitationCountCrossRef": 59,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2768,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 965,
                "i": [
                    965
                ]
            }
        },
        {
            "name": "Vivek Kothari",
            "value": 51,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 798,
            "x": 102.1672217094741,
            "y": -263.46130419506994,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "An Analysis of Machine- and Human-Analytics in Classification",
                "DOI": "10.1109/tvcg.2016.2598829",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598829",
                "FirstPage": 71,
                "LastPage": 80,
                "PaperType": "J",
                "Abstract": "In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the “bag of features” approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.",
                "AuthorNamesDeduped": "Gary K. L. Tam;Vivek Kothari;Min Chen 0001",
                "AuthorNames": "Gary K. L. Tam;Vivek Kothari;Min Chen",
                "AuthorAffiliation": "Swansea University;University of Oxford;University of Oxford",
                "InternalReferences": "0.1109/vast.2010.5652467;10.1109/tvcg.2015.2467615;10.1109/vast.2012.6400492;10.1109/tvcg.2013.207;10.1109/tvcg.2015.2467552;10.1109/tvcg.2015.2467612;10.1109/vast.2010.5652398;10.1109/tvcg.2010.132;10.1109/tvcg.2014.2346660;10.1109/vast.2015.7347629;10.1109/vast.2011.6102453;10.1109/vast.2011.6102448",
                "AuthorKeywords": "information theory;Visual analytics;classification;decision tree;model;facial expression;visualization image",
                "AminerCitationCount": 90,
                "CitationCountCrossRef": 59,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2768,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 965,
                "i": [
                    965
                ]
            }
        },
        {
            "name": "Zuobin Wang",
            "value": 41,
            "numPapers": 21,
            "cluster": "3",
            "visible": 1,
            "index": 799,
            "x": 102.6948687774713,
            "y": 263.44594118486236,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Analytics of Multivariate Event Sequence Data in Racquet Sports",
                "DOI": "10.1109/vast50239.2020.00009",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00009",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "In this work, we propose a generic visual analytics framework to support tactic analysis based on data collected from racquet sports (such as tennis and badminton). The proposed approach models each rally in a game as a sequence of hits (i.e., events) until one athlete scores a point. Each hit can be described with a set of attributes, such as the positions of the ball and the techniques used to hit the ball (such as drive and volley in tennis). Thus, the mentioned sequence of hits can be viewed as a multivariate event sequence. By detecting and analyzing the multivariate subsequences that frequently occur in the rallies (namely, tactical patterns), athletes can gain insights into the playing styles adopted by their opponents, and therefore help them identify systematic weaknesses of the opponents and develop counter strategies in matches. To support such analysis effectively, we propose a steerable multivariate sequential pattern mining algorithm with adjustable weights over event attributes, such that the domain expert can obtain frequent tactical patterns according to the attributes specified by himself. We also propose a re-configurable glyph design to help users simultaneously analyze multiple attributes of the hits. The framework further supports comparative analysis of the tactical patterns, e.g., for different athletes or the same athlete playing under different conditions. By applying the framework on two datasets collected in tennis and badminton matches, we demonstrate that the system is generic and effective for tactic analysis in sports and can help identify signature techniques used by individual athletes. Finally, we discuss the strengths and limitations of the proposed approach based on the feedback from the domain experts.",
                "AuthorNamesDeduped": "Jiang Wu;Ziyang Guo;Zuobin Wang;Qingyang Xu;Yingcai Wu",
                "AuthorNames": "Jiang Wu;Ziyang Guo;Zuobin Wang;Qingyang Xu;Yingcai Wu",
                "AuthorAffiliation": "The State Key Lab of CAD&CG, Zhejiang University;The State Key Lab of CAD&CG, Zhejiang University;The State Key Lab of CAD&CG, Zhejiang University;The State Key Lab of CAD&CG, Zhejiang University;The State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2019.2934209;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2019.2934670;10.1109/vast.2014.7042478;10.1109/vast.2016.7883512;10.1109/vast.2006.261421;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/vast.2007.4389008;10.1109/tvcg.2016.2598797;10.1109/tvcg.2015.2467325;10.1109/tvcg.2013.192;10.1109/tvcg.2019.2934243;10.1109/tvcg.2014.2346445;10.1109/infvis.2000.885091;10.1109/tvcg.2009.117;10.1109/tvcg.2019.2934630;10.1109/tvcg.2017.2744218;10.1109/tvcg.2018.2865041;10.1109/tvcg.2020.3030359;10.1109/tvcg.2020.3030392",
                "AuthorKeywords": "Sports Analytics,Event Sequence,Multivariate Data,Sequential Pattern Mining,Comparative Analysis",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 870,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 480,
                "i": [
                    480
                ]
            }
        },
        {
            "name": "Sunwoo Ha",
            "value": 0,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 800,
            "x": -253.8377755708235,
            "y": -124.9655300203069,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias",
                "DOI": "10.1109/tvcg.2022.3209476",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209476",
                "FirstPage": 483,
                "LastPage": 492,
                "PaperType": "J",
                "Abstract": "The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill in this missing gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.",
                "AuthorNamesDeduped": "Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley",
                "AuthorNames": "Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley",
                "AuthorAffiliation": "Washington University, USA;Washington University, USA;Washington University, USA;Washington University, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/tvcg.2016.2598468;10.1109/vast.2017.8585665;10.1109/tvcg.2018.2865117;10.1109/tvcg.2020.3030430;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114827;10.1109/tvcg.2015.2467551;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862",
                "AuthorKeywords": "Visual Analytics,Analytic Provenance,User Interaction Modeling,Machine Learning,Benchmark Study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 219,
                "i": [
                    219
                ]
            }
        },
        {
            "name": "Shayan Monadjemi",
            "value": 16,
            "numPapers": 23,
            "cluster": "5",
            "visible": 1,
            "index": 801,
            "x": 271.7546601402284,
            "y": -79.36878915586998,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias",
                "DOI": "10.1109/tvcg.2022.3209476",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209476",
                "FirstPage": 483,
                "LastPage": 492,
                "PaperType": "J",
                "Abstract": "The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill in this missing gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.",
                "AuthorNamesDeduped": "Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley",
                "AuthorNames": "Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley",
                "AuthorAffiliation": "Washington University, USA;Washington University, USA;Washington University, USA;Washington University, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/tvcg.2016.2598468;10.1109/vast.2017.8585665;10.1109/tvcg.2018.2865117;10.1109/tvcg.2020.3030430;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114827;10.1109/tvcg.2015.2467551;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862",
                "AuthorKeywords": "Visual Analytics,Analytic Provenance,User Interaction Modeling,Machine Learning,Benchmark Study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 219,
                "i": [
                    219
                ]
            }
        },
        {
            "name": "Roman Garnett",
            "value": 16,
            "numPapers": 23,
            "cluster": "5",
            "visible": 1,
            "index": 802,
            "x": -146.86211351588378,
            "y": 242.24268743069968,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias",
                "DOI": "10.1109/tvcg.2022.3209476",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209476",
                "FirstPage": 483,
                "LastPage": 492,
                "PaperType": "J",
                "Abstract": "The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill in this missing gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.",
                "AuthorNamesDeduped": "Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley",
                "AuthorNames": "Sunwoo Ha;Shayan Monadjemi;Roman Garnett;Alvitta Ottley",
                "AuthorAffiliation": "Washington University, USA;Washington University, USA;Washington University, USA;Washington University, USA",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/tvcg.2016.2598468;10.1109/vast.2017.8585665;10.1109/tvcg.2018.2865117;10.1109/tvcg.2020.3030430;10.1109/tvcg.2009.111;10.1109/tvcg.2021.3114827;10.1109/tvcg.2015.2467551;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862",
                "AuthorKeywords": "Visual Analytics,Analytic Provenance,User Interaction Modeling,Machine Learning,Benchmark Study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 219,
                "i": [
                    219
                ]
            }
        },
        {
            "name": "Alvitta Ottley",
            "value": 132,
            "numPapers": 43,
            "cluster": "5",
            "visible": 1,
            "index": 803,
            "x": -55.37550199895978,
            "y": -277.9991974419408,
            "vy": 0,
            "vx": 0,
            "r": 1.151986183074266,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "PROACT: Iterative Design of a Patient-Centered Visualization for Effective Prostate Cancer Health Risk Communication",
                "DOI": "10.1109/tvcg.2016.2598588",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598588",
                "FirstPage": 601,
                "LastPage": 610,
                "PaperType": "J",
                "Abstract": "Prostate cancer is the most common cancer among men in the US, and yet most cases represent localized cancer for which the optimal treatment is unclear. Accumulating evidence suggests that the available treatment options, including surgery and conservative treatment, result in a similar prognosis for most men with localized prostate cancer. However, approximately 90% of patients choose surgery over conservative treatment, despite the risk of severe side effects like erectile dysfunction and incontinence. Recent medical research suggests that a key reason is the lack of patient-centered tools that can effectively communicate personalized risk information and enable them to make better health decisions. In this paper, we report the iterative design process and results of developing the PROgnosis Assessment for Conservative Treatment (PROACT) tool, a personalized health risk communication tool for localized prostate cancer patients. PROACT utilizes two published clinical prediction models to communicate the patients' personalized risk estimates and compare treatment options. In collaboration with the Maine Medical Center, we conducted two rounds of evaluations with prostate cancer survivors and urologists to identify the design elements and narrative structure that effectively facilitate patient comprehension under emotional distress. Our results indicate that visualization can be an effective means to communicate complex risk information to patients with low numeracy and visual literacy. However, the visualizations need to be carefully chosen to balance readability with ease of comprehension. In addition, due to patients' charged emotional state, an intuitive narrative structure that considers the patients' information need is critical to aid the patients' comprehension of their risk information.",
                "AuthorNamesDeduped": "Anzu Hakone;Lane Harrison;Alvitta Ottley;Nathan Winters;Caitlin Gutheil;Paul K. J. Han;Remco Chang",
                "AuthorNames": "Anzu Hakone;Lane Harrison;Alvitta Ottley;Nathan Winters;Caitlin Gutheil;Paul K. J. Han;Remco Chang",
                "AuthorAffiliation": "Tufts University;Worcester Polytechnic Institute;Tufts University;Tufts University;Maine Medical Center;Maine Medical Center;Tufts University",
                "InternalReferences": "0.1109/tvcg.2012.225;10.1109/tvcg.2013.200;10.1109/tvcg.2014.2346984;10.1109/tvcg.2015.2467758;10.1109/tvcg.2012.219;10.1109/tvcg.2014.2346682",
                "AuthorKeywords": "Design studies;task and requirement analysis;presentation;production;and dissemination;medical visualization",
                "AminerCitationCount": 48,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 907,
                "i": [
                    907
                ]
            }
        },
        {
            "name": "Markus Wallinger",
            "value": 41,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 804,
            "x": 228.76015103328731,
            "y": 167.6865924849914,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MosaicSets: Embedding Set Systems into Grid Graphs",
                "DOI": "10.1109/tvcg.2022.3209485",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209485",
                "FirstPage": 875,
                "LastPage": 885,
                "PaperType": "J",
                "Abstract": "Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim at finding a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize both the compactness of the entirety of all cells and of each set by itself. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews.",
                "AuthorNamesDeduped": "Peter Rottmann;Markus Wallinger;Annika Bonerath;Sven Gedicke;Martin Nöllenburg;Jan-Henrik Haunert",
                "AuthorNames": "Peter Rottmann;Markus Wallinger;Annika Bonerath;Sven Gedicke;Martin Nöllenburg;Jan-Henrik Haunert",
                "AuthorAffiliation": "Geoinformation Group of the University of Bonn, Germany;Algorithms and Complexity Group of the Technical University of Vienna, Austria;Geoinformation Group of the University of Bonn, Germany;Geoinformation Group of the University of Bonn, Germany;Algorithms and Complexity Group of the Technical University of Vienna, Austria;Geoinformation Group of the University of Bonn, Germany",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2009.122;10.1109/tvcg.2020.3030475;10.1109/tvcg.2021.3114834;10.1109/tvcg.2014.2346248;10.1109/tvcg.2016.2598542;10.1109/tvcg.2020.3028953;10.1109/tvcg.2012.199;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249;10.1109/tvcg.2008.165",
                "AuthorKeywords": "Set Visualization,Euler Diagram,Integer Linear Programming,Hypergraph",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 400,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 220,
                "i": [
                    220
                ]
            }
        },
        {
            "name": "Martin Nöllenburg",
            "value": 41,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 805,
            "x": -282.12646105792794,
            "y": 30.897572249765414,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "MosaicSets: Embedding Set Systems into Grid Graphs",
                "DOI": "10.1109/tvcg.2022.3209485",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209485",
                "FirstPage": 875,
                "LastPage": 885,
                "PaperType": "J",
                "Abstract": "Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim at finding a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize both the compactness of the entirety of all cells and of each set by itself. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews.",
                "AuthorNamesDeduped": "Peter Rottmann;Markus Wallinger;Annika Bonerath;Sven Gedicke;Martin Nöllenburg;Jan-Henrik Haunert",
                "AuthorNames": "Peter Rottmann;Markus Wallinger;Annika Bonerath;Sven Gedicke;Martin Nöllenburg;Jan-Henrik Haunert",
                "AuthorAffiliation": "Geoinformation Group of the University of Bonn, Germany;Algorithms and Complexity Group of the Technical University of Vienna, Austria;Geoinformation Group of the University of Bonn, Germany;Geoinformation Group of the University of Bonn, Germany;Algorithms and Complexity Group of the Technical University of Vienna, Austria;Geoinformation Group of the University of Bonn, Germany",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2009.122;10.1109/tvcg.2020.3030475;10.1109/tvcg.2021.3114834;10.1109/tvcg.2014.2346248;10.1109/tvcg.2016.2598542;10.1109/tvcg.2020.3028953;10.1109/tvcg.2012.199;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249;10.1109/tvcg.2008.165",
                "AuthorKeywords": "Set Visualization,Euler Diagram,Integer Linear Programming,Hypergraph",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 400,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 220,
                "i": [
                    220
                ]
            }
        },
        {
            "name": "David Pugmire",
            "value": 11,
            "numPapers": 26,
            "cluster": "11",
            "visible": 1,
            "index": 806,
            "x": 187.27640231123507,
            "y": -213.48899067015245,
            "vy": 0,
            "vx": 0,
            "r": 1.0126655152561888,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models",
                "DOI": "10.1109/tvcg.2022.3209424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209424",
                "FirstPage": 613,
                "LastPage": 623,
                "PaperType": "J",
                "Abstract": "Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Christopher R. Johnson 0001;Sudhanshu Sane;David Pugmire",
                "AuthorNames": "Tushar M. Athawale;Chris R. Johnson;Sudhanshu Sane;David Pugmire",
                "AuthorAffiliation": "Oak Ridge National Laboratory, USA;Scientific Computing & Imaging (SCI) Institute, University of Utah, USA;Luminary Cloud, Inc., USA;Oak Ridge National Laboratory, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030394;10.1109/tvcg.2015.2467958;10.1109/tvcg.2018.2864432;10.1109/tvcg.2015.2467204;10.1109/tvcg.2012.227;10.1109/infvis.2002.1173157;10.1109/tvcg.2017.2744099;10.1109/tvcg.2009.131;10.1109/tvcg.2008.116;10.1109/visual.1996.568116;10.1109/tvcg.2007.70518;10.1109/tvcg.2020.3030365;10.1109/tvcg.2018.2864846;10.1109/tvcg.2006.165;10.1109/tvcg.2016.2599017;10.1109/tvcg.2013.143;10.1109/tvcg.2016.2599040;10.1109/vast.2006.261424;10.1109/tvcg.2019.2934242;10.1109/tvcg.2020.3030466;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864505;10.1109/tvcg.2008.119",
                "AuthorKeywords": "Uncertainty visualization,fiber surfaces,and probability",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 346,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 221,
                "i": [
                    221
                ]
            }
        },
        {
            "name": "Claes Lundström",
            "value": 103,
            "numPapers": 22,
            "cluster": "11",
            "visible": 1,
            "index": 807,
            "x": 6.121742193311456,
            "y": 284.09949713527936,
            "vy": 0,
            "vx": 0,
            "r": 1.1185952792170408,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Uncertainty Visualization in Medical Volume Rendering Using Probabilistic Animation",
                "DOI": "10.1109/tvcg.2007.70518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70518",
                "FirstPage": 1648,
                "LastPage": 1655,
                "PaperType": "J",
                "Abstract": "Direct volume rendering has proved to be an effective visualization method for medical data sets and has reached wide-spread clinical use. The diagnostic exploration, in essence, corresponds to a tissue classification task, which is often complex and time-consuming. Moreover, a major problem is the lack of information on the uncertainty of the classification, which can have dramatic consequences for the diagnosis. In this paper this problem is addressed by proposing animation methods to convey uncertainty in the rendering. The foundation is a probabilistic Transfer Function model which allows for direct user interaction with the classification. The rendering is animated by sampling the probability domain over time, which results in varying appearance for uncertain regions. A particularly promising application of this technique is a \"sensitivity lens\" applied to focus regions in the data set. The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy.",
                "AuthorNamesDeduped": "Claes Lundström;Patric Ljung;Anders Persson;Anders Ynnerman",
                "AuthorNames": "Claes Lundström;Patric Ljung;Anders Persson;Anders Ynnerman",
                "AuthorAffiliation": "Center for Medical Image Science and Visualization CMIV, Linköping University and Sectra-Imtec AB, AB, Canada and Sectra-Imtec AB, Linköping University;Division for Visual Information Technology and Applications (VITA), Linköping University, Sweden;Sectra-Imtec AB, Linköping University, Sweden;Division for Visual Information Technology and Applications (VITA), Linköping University, Sweden",
                "InternalReferences": "0.1109/visual.2005.1532807;10.1109/visual.1992.235199;10.1109/visual.2003.1250414",
                "AuthorKeywords": "Uncertainty, probability, medical visualization, volume rendering, transfer function",
                "AminerCitationCount": 187,
                "CitationCountCrossRef": 98,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 1590,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2162,
                "i": [
                    2162
                ]
            }
        },
        {
            "name": "Patric Ljung",
            "value": 194,
            "numPapers": 22,
            "cluster": "11",
            "visible": 1,
            "index": 808,
            "x": -196.54202002210963,
            "y": -205.4780629790652,
            "vy": 0,
            "vx": 0,
            "r": 1.2233736327000575,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Uncertainty Visualization in Medical Volume Rendering Using Probabilistic Animation",
                "DOI": "10.1109/tvcg.2007.70518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70518",
                "FirstPage": 1648,
                "LastPage": 1655,
                "PaperType": "J",
                "Abstract": "Direct volume rendering has proved to be an effective visualization method for medical data sets and has reached wide-spread clinical use. The diagnostic exploration, in essence, corresponds to a tissue classification task, which is often complex and time-consuming. Moreover, a major problem is the lack of information on the uncertainty of the classification, which can have dramatic consequences for the diagnosis. In this paper this problem is addressed by proposing animation methods to convey uncertainty in the rendering. The foundation is a probabilistic Transfer Function model which allows for direct user interaction with the classification. The rendering is animated by sampling the probability domain over time, which results in varying appearance for uncertain regions. A particularly promising application of this technique is a \"sensitivity lens\" applied to focus regions in the data set. The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy.",
                "AuthorNamesDeduped": "Claes Lundström;Patric Ljung;Anders Persson;Anders Ynnerman",
                "AuthorNames": "Claes Lundström;Patric Ljung;Anders Persson;Anders Ynnerman",
                "AuthorAffiliation": "Center for Medical Image Science and Visualization CMIV, Linköping University and Sectra-Imtec AB, AB, Canada and Sectra-Imtec AB, Linköping University;Division for Visual Information Technology and Applications (VITA), Linköping University, Sweden;Sectra-Imtec AB, Linköping University, Sweden;Division for Visual Information Technology and Applications (VITA), Linköping University, Sweden",
                "InternalReferences": "0.1109/visual.2005.1532807;10.1109/visual.1992.235199;10.1109/visual.2003.1250414",
                "AuthorKeywords": "Uncertainty, probability, medical visualization, volume rendering, transfer function",
                "AminerCitationCount": 187,
                "CitationCountCrossRef": 98,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 1590,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2162,
                "i": [
                    2162
                ]
            }
        },
        {
            "name": "Anders Persson",
            "value": 109,
            "numPapers": 26,
            "cluster": "6",
            "visible": 1,
            "index": 809,
            "x": 283.8978140207174,
            "y": 18.76249435064924,
            "vy": 0,
            "vx": 0,
            "r": 1.125503742084053,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Uncertainty Visualization in Medical Volume Rendering Using Probabilistic Animation",
                "DOI": "10.1109/tvcg.2007.70518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70518",
                "FirstPage": 1648,
                "LastPage": 1655,
                "PaperType": "J",
                "Abstract": "Direct volume rendering has proved to be an effective visualization method for medical data sets and has reached wide-spread clinical use. The diagnostic exploration, in essence, corresponds to a tissue classification task, which is often complex and time-consuming. Moreover, a major problem is the lack of information on the uncertainty of the classification, which can have dramatic consequences for the diagnosis. In this paper this problem is addressed by proposing animation methods to convey uncertainty in the rendering. The foundation is a probabilistic Transfer Function model which allows for direct user interaction with the classification. The rendering is animated by sampling the probability domain over time, which results in varying appearance for uncertain regions. A particularly promising application of this technique is a \"sensitivity lens\" applied to focus regions in the data set. The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy.",
                "AuthorNamesDeduped": "Claes Lundström;Patric Ljung;Anders Persson;Anders Ynnerman",
                "AuthorNames": "Claes Lundström;Patric Ljung;Anders Persson;Anders Ynnerman",
                "AuthorAffiliation": "Center for Medical Image Science and Visualization CMIV, Linköping University and Sectra-Imtec AB, AB, Canada and Sectra-Imtec AB, Linköping University;Division for Visual Information Technology and Applications (VITA), Linköping University, Sweden;Sectra-Imtec AB, Linköping University, Sweden;Division for Visual Information Technology and Applications (VITA), Linköping University, Sweden",
                "InternalReferences": "0.1109/visual.2005.1532807;10.1109/visual.1992.235199;10.1109/visual.2003.1250414",
                "AuthorKeywords": "Uncertainty, probability, medical visualization, volume rendering, transfer function",
                "AminerCitationCount": 187,
                "CitationCountCrossRef": 98,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 1590,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2162,
                "i": [
                    2162
                ]
            }
        },
        {
            "name": "Mahsa Mirzargar",
            "value": 191,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 810,
            "x": -222.1483817765329,
            "y": 178.04520907361638,
            "vy": 0,
            "vx": 0,
            "r": 1.2199194012665515,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Curve Boxplot: Generalization of Boxplot for Ensembles of Curves",
                "DOI": "10.1109/tvcg.2014.2346455",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346455",
                "FirstPage": 2654,
                "LastPage": 2663,
                "PaperType": "J",
                "Abstract": "In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.",
                "AuthorNamesDeduped": "Mahsa Mirzargar;Ross T. Whitaker;Robert M. Kirby",
                "AuthorNames": "Mahsa Mirzargar;Ross T. Whitaker;Robert M. Kirby",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT;Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT;Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT",
                "InternalReferences": "0.1109/tvcg.2013.143;10.1109/visual.2002.1183769;10.1109/visual.1996.568116;10.1109/visual.1996.568105;10.1109/tvcg.2013.141;10.1109/tvcg.2010.212;10.1109/tvcg.2013.126;10.1109/tvcg.2010.181",
                "AuthorKeywords": "Uncertainty visualization, boxplots, ensemble visualization, order statistics, data depth, nonparametric statistic, functional data, parametric curves",
                "AminerCitationCount": 159,
                "CitationCountCrossRef": 125,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2337,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1209,
                "i": [
                    1209
                ]
            }
        },
        {
            "name": "Robert M. Kirby",
            "value": 354,
            "numPapers": 56,
            "cluster": "11",
            "visible": 1,
            "index": 811,
            "x": 43.56434250509145,
            "y": -281.51758037838255,
            "vy": 0,
            "vx": 0,
            "r": 1.4075993091537133,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Curve Boxplot: Generalization of Boxplot for Ensembles of Curves",
                "DOI": "10.1109/tvcg.2014.2346455",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346455",
                "FirstPage": 2654,
                "LastPage": 2663,
                "PaperType": "J",
                "Abstract": "In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.",
                "AuthorNamesDeduped": "Mahsa Mirzargar;Ross T. Whitaker;Robert M. Kirby",
                "AuthorNames": "Mahsa Mirzargar;Ross T. Whitaker;Robert M. Kirby",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT;Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT;Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT",
                "InternalReferences": "0.1109/tvcg.2013.143;10.1109/visual.2002.1183769;10.1109/visual.1996.568116;10.1109/visual.1996.568105;10.1109/tvcg.2013.141;10.1109/tvcg.2010.212;10.1109/tvcg.2013.126;10.1109/tvcg.2010.181",
                "AuthorKeywords": "Uncertainty visualization, boxplots, ensemble visualization, order statistics, data depth, nonparametric statistic, functional data, parametric curves",
                "AminerCitationCount": 159,
                "CitationCountCrossRef": 125,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 2337,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1209,
                "i": [
                    1209
                ]
            }
        },
        {
            "name": "Yu-Hsuan Chan",
            "value": 145,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 812,
            "x": 158.1367477355633,
            "y": 237.15557976910182,
            "vy": 0,
            "vx": 0,
            "r": 1.1669545192861255,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "A framework for uncertainty-aware visual analytics",
                "DOI": "10.1109/vast.2009.5332611",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5332611",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "Visual analytics has become an important tool for gaining insight on large and complex collections of data. Numerous statistical tools and data transformations, such as projections, binning and clustering, have been coupled with visualization to help analysts understand data better and faster. However, data is inherently uncertain, due to error, noise or unreliable sources. When making decisions based on uncertain data, it is important to quantify and present to the analyst both the aggregated uncertainty of the results and the impact of the sources of that uncertainty. In this paper, we present a new framework to support uncertainty in the visual analytics process, through statistic methods such as uncertainty modeling, propagation and aggregation. We show that data transformations, such as regression, principal component analysis and k-means clustering, can be adapted to account for uncertainty. This framework leads to better visualizations that improve the decision-making process and help analysts gain insight on the analytic process itself.",
                "AuthorNamesDeduped": "Carlos D. Correa;Yu-Hsuan Chan;Kwan-Liu Ma",
                "AuthorNames": "Carlos D. Correa;Yu-Hsuan Chan;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California,슠Davis, USA;University of California Davis, USA;University of California,슠Davis, USA",
                "InternalReferences": "0.1109/vast.2008.4677368;10.1109/vast.2007.4389000",
                "AuthorKeywords": "Uncertainty, Data Transformations, Principal Component Analysis, Model fitting",
                "AminerCitationCount": 154,
                "CitationCountCrossRef": 65,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 1281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1855,
                "i": [
                    1855
                ]
            }
        },
        {
            "name": "Jochen Görtler",
            "value": 70,
            "numPapers": 31,
            "cluster": "6",
            "visible": 1,
            "index": 813,
            "x": -276.97169557323156,
            "y": -68.09317037184543,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "SPEULER: Semantics-preserving Euler Diagrams",
                "DOI": "10.1109/tvcg.2021.3114834",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114834",
                "FirstPage": 433,
                "LastPage": 442,
                "PaperType": "J",
                "Abstract": "Creating comprehensible visualizations of highly overlapping set-typed data is a challenging task due to its complexity. To facilitate insights into set connectivity and to leverage semantic relations between intersections, we propose a fast two-step layout technique for Euler diagrams that are both well-matched and well-formed. Our method conforms to established form guidelines for Euler diagrams regarding semantics, aesthetics, and readability. First, we establish an initial ordering of the data, which we then use to incrementally create a planar, connected, and monotone dual graph representation. In the next step, the graph is transformed into a circular layout that maintains the semantics and yields simple Euler diagrams with smooth curves. When the data cannot be represented by simple diagrams, our algorithm always falls back to a solution that is not well-formed but still well-matched, whereas previous methods often fail to produce expected results. We show the usefulness of our method for visualizing set-typed data using examples from text analysis and infographics. Furthermore, we discuss the characteristics of our approach and evaluate our method against state-of-the-art methods.",
                "AuthorNamesDeduped": "Rebecca Kehlbeck;Jochen Görtler;Yunhai Wang;Oliver Deussen",
                "AuthorNames": "Rebecca Kehlbeck;Jochen Görtler;Yunhai Wang;Oliver Deussen",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Shandong University, China;University of Konstanz, Germany",
                "InternalReferences": "0.1109/tvcg.2013.184;10.1109/tvcg.2014.2346660;10.1109/tvcg.2009.122;10.1109/tvcg.2020.3030475;10.1109/tvcg.2014.2346248;10.1109/tvcg.2012.199;10.1109/tvcg.2015.2467992",
                "AuthorKeywords": "Euler diagrams,Venn diagrams,set visualization,layout algorithm",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 402,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 343,
                "i": [
                    343
                ]
            }
        },
        {
            "name": "Heike Jänicke",
            "value": 205,
            "numPapers": 34,
            "cluster": "6",
            "visible": 1,
            "index": 814,
            "x": 250.3803331836891,
            "y": -136.96601313765697,
            "vy": 0,
            "vx": 0,
            "r": 1.2360391479562465,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Brushing of Attribute Clouds for the Visualization of Multivariate Data",
                "DOI": "10.1109/tvcg.2008.116",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.116",
                "FirstPage": 1459,
                "LastPage": 1466,
                "PaperType": "J",
                "Abstract": "The visualization and exploration of multivariate data is still a challenging task. Methods either try to visualize all variables simultaneously at each position using glyph-based approaches or use linked views for the interaction between attribute space and physical domain such as brushing of scatterplots. Most visualizations of the attribute space are either difficult to understand or suffer from visual clutter. We propose a transformation of the high-dimensional data in attribute space to 2D that results in a point cloud, called attribute cloud, such that points with similar multivariate attributes are located close to each other. The transformation is based on ideas from multivariate density estimation and manifold learning. The resulting attribute cloud is an easy to understand visualization of multivariate data in two dimensions. We explain several techniques to incorporate additional information into the attribute cloud, that help the user get a better understanding of multivariate data. Using different examples from fluid dynamics and climate simulation, we show how brushing can be used to explore the attribute cloud and find interesting structures in physical space.",
                "AuthorNamesDeduped": "Heike Jänicke;Michael Böttinger;Gerik Scheuermann",
                "AuthorNames": "Heike Jänicke;Michael Böttinger;Gerik Scheuermann",
                "AuthorAffiliation": "Universität Leipzig;German Climate Computing Center;Universität Leipzig",
                "InternalReferences": "0.1109/infvis.2003.1249024;10.1109/infvis.2002.1173157;10.1109/visual.1995.485139;10.1109/infvis.1999.801858;10.1109/visual.1996.567800;10.1109/visual.2004.113;10.1109/visual.1998.745289",
                "AuthorKeywords": "Multivariate data, brushing, data transformation, manifold learning, linked views",
                "AminerCitationCount": 101,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 995,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2046,
                "i": [
                    2046
                ]
            }
        },
        {
            "name": "Ian M. Thornton",
            "value": 51,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 815,
            "x": -92.15997538366754,
            "y": 270.2897314684408,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Evaluating the impact of task demands and block resolution on the effectiveness of pixel-based visualization",
                "DOI": "10.1109/tvcg.2010.150",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.150",
                "FirstPage": 963,
                "LastPage": 972,
                "PaperType": "J",
                "Abstract": "Pixel-based visualization is a popular method of conveying large amounts of numerical data graphically. Application scenarios include business and finance, bioinformatics and remote sensing. In this work, we examined how the usability of such visual representations varied across different tasks and block resolutions. The main stimuli consisted of temporal pixel-based visualization with a white-red color map, simulating monthly temperature variation over a six-year period. In the first study, we included 5 separate tasks to exert different perceptual loads. We found that performance varied considerably as a function of task, ranging from 75% correct in low-load tasks to below 40% in high-load tasks. There was a small but consistent effect of resolution, with the uniform patch improving performance by around 6% relative to higher block resolution. In the second user study, we focused on a high-load task for evaluating month-to-month changes across different regions of the temperature range. We tested both CIE L*u*v* and RGB color spaces. We found that the nature of the change-evaluation errors related directly to the distance between the compared regions in the mapped color space. We were able to reduce such errors by using multiple color bands for the same data range. In a final study, we examined more fully the influence of block resolution on performance, and found block resolution had a limited impact on the effectiveness of pixel-based visualization.",
                "AuthorNamesDeduped": "Rita Borgo;Karl J. Proctor;Min Chen 0001;Heike Jänicke;Tavi Murray;Ian M. Thornton",
                "AuthorNames": "Rita Borgo;Karl Proctor;Min Chen;Heike Janicke;Tavi Murray;Ian Thornton",
                "AuthorAffiliation": "Computer Science, Swansea University, UK;Department of Psychology, Swansea University, UK;Computer Science, Swansea University, UK;Interdisciplinary Center for Scientific Computing, Heidelberg University, Germany;Department of Geography, Swansea University, UK;Department of Psychology, Swansea University, UK",
                "InternalReferences": "0.1109/visual.1995.480803",
                "AuthorKeywords": "Pixel-based visualization, evaluation, user study, visual search, change detection",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 591,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1715,
                "i": [
                    1715
                ]
            }
        },
        {
            "name": "Kresimir Matkovic",
            "value": 309,
            "numPapers": 59,
            "cluster": "6",
            "visible": 1,
            "index": 816,
            "x": -114.69244831075994,
            "y": -261.7167214766448,
            "vy": 0,
            "vx": 0,
            "r": 1.3557858376511227,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "A Comparison of Radial and Linear Charts for Visualizing Daily Patterns",
                "DOI": "10.1109/tvcg.2019.2934784",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934784",
                "FirstPage": 1033,
                "LastPage": 1042,
                "PaperType": "J",
                "Abstract": "Radial charts are generally considered less effective than linear charts. Perhaps the only exception is in visualizing periodical time-dependent data, which is believed to be naturally supported by the radial layout. It has been demonstrated that the drawbacks of radial charts outweigh the benefits of this natural mapping. Visualization of daily patterns, as a special case, has not been systematically evaluated using radial charts. In contrast to yearly or weekly recurrent trends, the analysis of daily patterns on a radial chart may benefit from our trained skill on reading radial clocks that are ubiquitous in our culture. In a crowd-sourced experiment with 92 non-expert users, we evaluated the accuracy, efficiency, and subjective ratings of radial and linear charts for visualizing daily traffic accident patterns. We systematically compared juxtaposed 12-hours variants and single 24-hours variants for both layouts in four low-level tasks and one high-level interpretation task. Our results show that over all tasks, the most elementary 24-hours linear bar chart is most accurate and efficient and is also preferred by the users. This provides strong evidence for the use of linear layouts – even for visualizing periodical daily patterns.",
                "AuthorNamesDeduped": "Manuela Waldner;Alexandra Diehl;Denis Gracanin;Rainer Splechtna;Claudio Delrieux;Kresimir Matkovic",
                "AuthorNames": "Manuela Waldner;Alexandra Diehl;Denis Gračanin;Rainer Splechtna;Claudio Delrieux;Krešimir Matković",
                "AuthorAffiliation": "TU Wien;University of Zurich;Virginia Tech;VRVis Research Center;Electric and Computer Eng. Dept., Universidad Nacional del SUR and CONICET;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2013.184;10.1109/tvcg.2018.2865142;10.1109/tvcg.2013.234;10.1109/tvcg.2018.2865234;10.1109/infvis.1998.729557;10.1109/tvcg.2010.209;10.1109/tvcg.2014.2346426;10.1109/tvcg.2018.2865077;10.1109/tvcg.2015.2467771;10.1109/tvcg.2010.162;10.1109/tvcg.2018.2865158;10.1109/infvis.2000.885091;10.1109/tvcg.2014.2346320;10.1109/infvis.2001.963273",
                "AuthorKeywords": "Radial charts,time series series data,daily patterns,crowd-sourced experiment",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1140,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 550,
                "i": [
                    550
                ]
            }
        },
        {
            "name": "Denis Gracanin",
            "value": 214,
            "numPapers": 41,
            "cluster": "6",
            "visible": 1,
            "index": 817,
            "x": 261.51774577420423,
            "y": 115.57884168470756,
            "vy": 0,
            "vx": 0,
            "r": 1.2464018422567644,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "A Comparison of Radial and Linear Charts for Visualizing Daily Patterns",
                "DOI": "10.1109/tvcg.2019.2934784",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934784",
                "FirstPage": 1033,
                "LastPage": 1042,
                "PaperType": "J",
                "Abstract": "Radial charts are generally considered less effective than linear charts. Perhaps the only exception is in visualizing periodical time-dependent data, which is believed to be naturally supported by the radial layout. It has been demonstrated that the drawbacks of radial charts outweigh the benefits of this natural mapping. Visualization of daily patterns, as a special case, has not been systematically evaluated using radial charts. In contrast to yearly or weekly recurrent trends, the analysis of daily patterns on a radial chart may benefit from our trained skill on reading radial clocks that are ubiquitous in our culture. In a crowd-sourced experiment with 92 non-expert users, we evaluated the accuracy, efficiency, and subjective ratings of radial and linear charts for visualizing daily traffic accident patterns. We systematically compared juxtaposed 12-hours variants and single 24-hours variants for both layouts in four low-level tasks and one high-level interpretation task. Our results show that over all tasks, the most elementary 24-hours linear bar chart is most accurate and efficient and is also preferred by the users. This provides strong evidence for the use of linear layouts – even for visualizing periodical daily patterns.",
                "AuthorNamesDeduped": "Manuela Waldner;Alexandra Diehl;Denis Gracanin;Rainer Splechtna;Claudio Delrieux;Kresimir Matkovic",
                "AuthorNames": "Manuela Waldner;Alexandra Diehl;Denis Gračanin;Rainer Splechtna;Claudio Delrieux;Krešimir Matković",
                "AuthorAffiliation": "TU Wien;University of Zurich;Virginia Tech;VRVis Research Center;Electric and Computer Eng. Dept., Universidad Nacional del SUR and CONICET;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2013.184;10.1109/tvcg.2018.2865142;10.1109/tvcg.2013.234;10.1109/tvcg.2018.2865234;10.1109/infvis.1998.729557;10.1109/tvcg.2010.209;10.1109/tvcg.2014.2346426;10.1109/tvcg.2018.2865077;10.1109/tvcg.2015.2467771;10.1109/tvcg.2010.162;10.1109/tvcg.2018.2865158;10.1109/infvis.2000.885091;10.1109/tvcg.2014.2346320;10.1109/infvis.2001.963273",
                "AuthorKeywords": "Radial charts,time series series data,daily patterns,crowd-sourced experiment",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1140,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 550,
                "i": [
                    550
                ]
            }
        },
        {
            "name": "Haoyu Li",
            "value": 13,
            "numPapers": 36,
            "cluster": "6",
            "visible": 1,
            "index": 818,
            "x": -271.0730745588617,
            "y": 91.48436068096991,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "NNVA: Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation",
                "DOI": "10.1109/tvcg.2019.2934591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934591",
                "FirstPage": 34,
                "LastPage": 44,
                "PaperType": "J",
                "Abstract": "Complex computational models are often designed to simulate real-world physical phenomena in many scientific disciplines. However, these simulation models tend to be computationally very expensive and involve a large number of simulation input parameters, which need to be analyzed and properly calibrated before the models can be applied for real scientific studies. We propose a visual analysis system to facilitate interactive exploratory analysis of high-dimensional input parameter space for a complex yeast cell polarization simulation. The proposed system can assist the computational biologists, who designed the simulation model, to visually calibrate the input parameters by modifying the parameter values and immediately visualizing the predicted simulation outcome without having the need to run the original expensive simulation for every instance. Our proposed visual analysis system is driven by a trained neural network-based surrogate model as the backend analysis framework. In this work, we demonstrate the advantage of using neural networks as surrogate models for visual analysis by incorporating some of the recent advances in the field of uncertainty quantification, interpretability and explainability of neural network-based models. We utilize the trained network to perform interactive parameter sensitivity analysis of the original simulation as well as recommend optimal parameter configurations using the activation maximization framework of neural networks. We also facilitate detail analysis of the trained network to extract useful insights about the simulation model, learned by the network, during the training process. We performed two case studies, and discovered multiple new parameter configurations, which can trigger high cell polarization results in the original simulation model. We evaluated our results by comparing with the original simulation model outcomes as well as the findings from previous parameter analysis performed by our experts.",
                "AuthorNamesDeduped": "Subhashis Hazarika;Haoyu Li;Ko-Chih Wang;Han-Wei Shen;Ching-Shan Chou",
                "AuthorNames": "Subhashis Hazarika;Haoyu Li;Ko-Chih Wang;Han-Wei Shen;Ching-Shan Chou",
                "AuthorAffiliation": "Department of Computer Science, Ohio State University;Department of Computer Science, Ohio State University;Department of Computer Science, Ohio State University;Department of Computer Science, Ohio State University;Department of Mathematics, Ohio State University",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2016.2598869;10.1109/tvcg.2013.147;10.1109/tvcg.2018.2865029;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2018.2864887;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2865051;10.1109/tvcg.2014.2346321;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2016.2598830;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2865026;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Surrogate modeling,Neural networks,Computational biology,Visual analysis,Parameter analysis",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1001,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 624,
                "i": [
                    624
                ]
            }
        },
        {
            "name": "Jiayi Xu 0001",
            "value": 28,
            "numPapers": 30,
            "cluster": "6",
            "visible": 1,
            "index": 819,
            "x": 138.16837729395877,
            "y": -250.7179680755934,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "IDLat: An Importance-Driven Latent Generation Method for Scientific Data",
                "DOI": "10.1109/tvcg.2022.3209419",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209419",
                "FirstPage": 679,
                "LastPage": 689,
                "PaperType": "J",
                "Abstract": "Deep learning based latent representations have been widely used for numerous scientific visualization applications such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, just to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduced the latent size by a lossless entropy encoding algorithm trained together with the autoencoder, improving the storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method with data from multiple scientific visualization applications.",
                "AuthorNamesDeduped": "Jingyi Shen;Haoyu Li;Jiayi Xu 0001;Ayan Biswas;Han-Wei Shen",
                "AuthorNames": "Jingyi Shen;Haoyu Li;Jiayi Xu;Ayan Biswas;Han-Wei Shen",
                "AuthorAffiliation": "Department of Computer Science and Engineering, The Ohio State University, USA;Department of Computer Science and Engineering, The Ohio State University, USA;Department of Computer Science and Engineering, The Ohio State University, USA;Los Alamos National Laboratory, USA;Department of Computer Science and Engineering, The Ohio State University, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030346;10.1109/visual.2003.1250374;10.1109/tvcg.2006.152;10.1109/visual.2004.48;10.1109/tvcg.2008.140",
                "AuthorKeywords": "Latent space,scientific data representation,deep Learning",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 438,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 230,
                "i": [
                    230
                ]
            }
        },
        {
            "name": "Ayan Biswas",
            "value": 108,
            "numPapers": 55,
            "cluster": "6",
            "visible": 1,
            "index": 820,
            "x": 67.5176494353557,
            "y": 278.3727124104021,
            "vy": 0,
            "vx": 0,
            "r": 1.1243523316062176,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Drag and Track: A Direct Manipulation Interface for Contextualizing Data Instances within a Continuous Parameter Space",
                "DOI": "10.1109/tvcg.2018.2865051",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865051",
                "FirstPage": 256,
                "LastPage": 266,
                "PaperType": "J",
                "Abstract": "We present a direct manipulation technique that allows material scientists to interactively highlight relevant parameterized simulation instances located in dimensionally reduced spaces, enabling a user-defined understanding of a continuous parameter space. Our goals are two-fold: first, to build a user-directed intuition of dimensionally reduced data, and second, to provide a mechanism for creatively exploring parameter relationships in parameterized simulation sets, called ensembles. We start by visualizing ensemble data instances in dimensionally reduced scatter plots. To understand these abstract views, we employ user-defined virtual data instances that, through direct manipulation, search an ensemble for similar instances. Users can create multiple of these direct manipulation queries to visually annotate the spaces with sets of highlighted ensemble data instances. User-defined goals are therefore translated into custom illustrations that are projected onto the dimensionally reduced spaces. Combined forward and inverse searches of the parameter space follow naturally allowing for continuous parameter space prediction and visual query comparison in the context of an ensemble. The potential for this visualization technique is confirmed via expert user feedback for a shock physics application and synthetic model analysis.",
                "AuthorNamesDeduped": "Daniel Orban;Daniel F. Keefe;Ayan Biswas;James P. Ahrens;David H. Rogers 0001",
                "AuthorNames": "Daniel Orban;Daniel F. Keefe;Ayan Biswas;James Ahrens;David Rogers",
                "AuthorAffiliation": "University of Minnesota, Minneapolis, MN, US;University of Minnesota, Minneapolis, MN, US;Los Alamos National Labs;Los Alamos National Labs;Los Alamos National Labs",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2016.2598869;10.1109/vast.2012.6400486;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/vast.2012.6400489;10.1109/tvcg.2015.2467436;10.1109/tvcg.2012.260;10.1109/vast.2011.6102449;10.1109/tvcg.2015.2467204;10.1109/tvcg.2013.141;10.1109/tvcg.2017.2745178;10.1109/tvcg.2014.2346455;10.1109/tvcg.2016.2598589;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598839;10.1109/tvcg.2014.2346321;10.1109/tvcg.2011.248;10.1109/tvcg.2016.2598830;10.1109/tvcg.2010.223",
                "AuthorKeywords": "Visual Parameter Space Analysis,Ensemble Visualization,Semantic Interaction,Direct Manipulation,Shock Physics",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 708,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 760,
                "i": [
                    760
                ]
            }
        },
        {
            "name": "Rephael Wenger",
            "value": 42,
            "numPapers": 23,
            "cluster": "11",
            "visible": 1,
            "index": 821,
            "x": -237.96836065107146,
            "y": -159.753119935235,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Volume tracking using higher dimensional isosurfacing",
                "DOI": "10.1109/visual.2003.1250374",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250374",
                "FirstPage": 209,
                "LastPage": 216,
                "PaperType": "C",
                "Abstract": "Tracking and visualizing local features from a time-varying volumetric data allows the user to focus on selected regions of interest, both in space and time, which can lead to a better understanding of the underlying dynamics. In this paper, we present an efficient algorithm to track time-varying isosurfaces and interval volumes using isosurfacing in higher dimensions. Instead of extracting the data features such as isosurfaces or interval volumes separately from multiple time steps and computing the spatial correspondence between those features, our algorithm extracts the correspondence directly from the higher dimensional geometry and thus can more efficiently follow the user selected local features in time. In addition, by analyzing the resulting higher dimensional geometry, it becomes easier to detect important topological events and the corresponding critical time steps for the selected features. With our algorithm, the user can interact with the underlying time-varying data more easily. The computation cost for performing time-varying volume tracking is also minimized.",
                "AuthorNamesDeduped": "Guangfeng Ji;Han-Wei Shen;Rephael Wenger",
                "AuthorNames": "Guangfeng Ji;Han-Wei Shen;R. Wenger",
                "AuthorAffiliation": "Department of Computer and Information Science, Ohio State Uinversity, USA;Department of Computer and Information Science, Ohio State Uinversity, USA;Department of Computer and Information Science, Ohio State Uinversity, USA",
                "InternalReferences": "0.1109/visual.2000.885704;10.1109/visual.1998.745288;10.1109/visual.1996.568103;10.1109/visual.2002.1183774;10.1109/visual.1996.567807;10.1109/visual.2000.885703;10.1109/visual.1995.480789;10.1109/visual.1997.663886",
                "AuthorKeywords": "tracking, isosurface, interval volume, higher dimensional isosurfacing",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 272,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2698,
                "i": [
                    2698
                ]
            }
        },
        {
            "name": "Hongfeng Yu 0001",
            "value": 67,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 822,
            "x": 283.5545711302028,
            "y": -42.97447138903239,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Importance-Driven Time-Varying Data Visualization",
                "DOI": "10.1109/tvcg.2008.140",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.140",
                "FirstPage": 1547,
                "LastPage": 1554,
                "PaperType": "J",
                "Abstract": "The ability to identify and present the most essential aspects of time-varying data is critically important in many areas of science and engineering. This paper introduces an importance-driven approach to time-varying volume data visualization for enhancing that ability. By conducting a block-wise analysis of the data in the joint feature-temporal space, we derive an importance curve for each data block based on the formulation of conditional entropy from information theory. Each curve characterizes the local temporal behavior of the respective block, and clustering the importance curves of all the volume blocks effectively classifies the underlying data. Based on different temporal trends exhibited by importance curves and their clustering results, we suggest several interesting and effective visualization techniques to reveal the important aspects of time-varying data.",
                "AuthorNamesDeduped": "Chaoli Wang 0001;Hongfeng Yu 0001;Kwan-Liu Ma",
                "AuthorNames": "Chaoli Wang;Hongfeng Yu;Kwan-Liu Ma",
                "AuthorAffiliation": "VIDI research group, Department of Computer Science, University of California, Davis, CA, USA;VIDI research group, Department of Computer Science, University of California, Davis, CA, USA;VIDI research group, Department of Computer Science, University of California, Davis, CA, USA",
                "InternalReferences": "0.1109/visual.1995.480809;10.1109/visual.2003.1250402;10.1109/tvcg.2007.70615;10.1109/tvcg.2006.152;10.1109/visual.2001.964531;10.1109/visual.1994.346321;10.1109/visual.1999.809910;10.1109/visual.2004.48",
                "AuthorKeywords": "Time-varying data, conditional entropy, joint feature-temporal space, clustering, highlighting, transfer function",
                "AminerCitationCount": 224,
                "CitationCountCrossRef": 115,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 2085,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2038,
                "i": [
                    2038
                ]
            }
        },
        {
            "name": "Wenyuan Wang",
            "value": 39,
            "numPapers": 37,
            "cluster": "1",
            "visible": 1,
            "index": 823,
            "x": -180.1649006322134,
            "y": 223.36205716321805,
            "vy": 0,
            "vx": 0,
            "r": 1.0449050086355787,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analysis of High-Dimensional Event Sequence Data via Dynamic Hierarchical Aggregation",
                "DOI": "10.1109/tvcg.2019.2934661",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934661",
                "FirstPage": 440,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "Temporal event data are collected across a broad range of domains, and a variety of visual analytics techniques have been developed to empower analysts working with this form of data. These techniques generally display aggregate statistics computed over sets of event sequences that share common patterns. Such techniques are often hindered, however, by the high-dimensionality of many real-world event sequence datasets which can prevent effective aggregation. A common coping strategy for this challenge is to group event types together prior to visualization, as a pre-process, so that each group can be represented within an analysis as a single event type. However, computing these event groupings as a pre-process also places significant constraints on the analysis. This paper presents a new visual analytics approach for dynamic hierarchical dimension aggregation. The approach leverages a predefined hierarchy of dimensions to computationally quantify the informativeness, with respect to a measure of interest, of alternative levels of grouping within the hierarchy at runtime. This information is then interactively visualized, enabling users to dynamically explore the hierarchy to select the most appropriate level of grouping to use at any individual step within an analysis. Key contributions include an algorithm for interactively determining the most informative set of event groupings for a specific analysis context, and a scented scatter-plus-focus visualization design with an optimization-based layout algorithm that supports interactive hierarchical exploration of alternative event type groupings. We apply these techniques to high-dimensional event sequence data from the medical domain and report findings from domain expert interviews.",
                "AuthorNamesDeduped": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorNames": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorAffiliation": "School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Biostatistics, University of North Carolina, Chapel Hill;School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Computer Science, University of North Carolina, Chapel Hill;RENCI, University of North Carolina, Chapel Hill",
                "InternalReferences": "0.1109/tvcg.2019.2934209;10.1109/tvcg.2017.2745278;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2018.2864886;10.1109/tvcg.2013.200;10.1109/vast.2011.6102443;10.1109/infvis.2005.1532152;10.1109/infvis.2000.885091;10.1109/tvcg.2017.2744686;10.1109/tvcg.2009.108;10.1109/tvcg.2007.70589;10.1109/vast.2014.7042487;10.1109/tvcg.2012.238",
                "AuthorKeywords": "Temporal event sequence visualization,visual analytics,hierarchical aggregation,medical informatics",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1035,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 627,
                "i": [
                    627
                ]
            }
        },
        {
            "name": "Zeyu Li 0003",
            "value": 43,
            "numPapers": 45,
            "cluster": "1",
            "visible": 1,
            "index": 824,
            "x": -18.04185548010874,
            "y": -286.57371032743896,
            "vy": 0,
            "vx": 0,
            "r": 1.04951065054692,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards Visual Explainable Active Learning for Zero-Shot Classification",
                "DOI": "10.1109/tvcg.2021.3114793",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114793",
                "FirstPage": 791,
                "LastPage": 801,
                "PaperType": "J",
                "Abstract": "Zero-shot classification is a promising paradigm to solve an applicable problem when the training classes and test classes are disjoint. Achieving this usually needs experts to externalize their domain knowledge by manually specifying a class-attribute matrix to define which classes have which attributes. Designing a suitable class-attribute matrix is the key to the subsequent procedure, but this design process is tedious and trial-and-error with no guidance. This paper proposes a visual explainable active learning approach with its design and implementation called semantic navigator to solve the above problems. This approach promotes human-AI teaming with four actions (ask, explain, recommend, respond) in each interaction loop. The machine asks contrastive questions to guide humans in the thinking process of attributes. A novel visualization called semantic map explains the current status of the machine. Therefore analysts can better understand why the machine misclassifies objects. Moreover, the machine recommends the labels of classes for each attribute to ease the labeling burden. Finally, humans can steer the model by modifying the labels interactively, and the machine adjusts its recommendations. The visual explainable active learning approach improves humans' efficiency of building zero-shot classification models interactively, compared with the method without guidance. We justify our results with user studies using the standard benchmarks for zero-shot classification.",
                "AuthorNamesDeduped": "Shichao Jia;Zeyu Li 0003;Nuo Chen;Jiawan Zhang",
                "AuthorNames": "Shichao Jia;Zeyu Li;Nuo Chen;Jiawan Zhang",
                "AuthorAffiliation": "College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China and Tianjin cultural heritage conservation and inheritance engineering technology center and Key Research Center for Surface Monitoring and Analysis of Relics, State Administration of Cultural Heritage, China",
                "InternalReferences": "0.1109/tvcg.2017.2744818;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865047;10.1109/tvcg.2012.260;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2017.2744378;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2019.2934267;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2015.2467191;10.1109/vast47406.2019.8986943;10.1109/vast.2012.6400486",
                "AuthorKeywords": "Active Learning,Explainable Artificial Intelligence,Human-AI Teaming,Mixed-Initiative Visual Analytics",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1559,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 284,
                "i": [
                    284
                ]
            }
        },
        {
            "name": "Shichao Jia",
            "value": 43,
            "numPapers": 45,
            "cluster": "1",
            "visible": 1,
            "index": 825,
            "x": 207.0066832517013,
            "y": 199.24415446664878,
            "vy": 0,
            "vx": 0,
            "r": 1.04951065054692,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards Visual Explainable Active Learning for Zero-Shot Classification",
                "DOI": "10.1109/tvcg.2021.3114793",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114793",
                "FirstPage": 791,
                "LastPage": 801,
                "PaperType": "J",
                "Abstract": "Zero-shot classification is a promising paradigm to solve an applicable problem when the training classes and test classes are disjoint. Achieving this usually needs experts to externalize their domain knowledge by manually specifying a class-attribute matrix to define which classes have which attributes. Designing a suitable class-attribute matrix is the key to the subsequent procedure, but this design process is tedious and trial-and-error with no guidance. This paper proposes a visual explainable active learning approach with its design and implementation called semantic navigator to solve the above problems. This approach promotes human-AI teaming with four actions (ask, explain, recommend, respond) in each interaction loop. The machine asks contrastive questions to guide humans in the thinking process of attributes. A novel visualization called semantic map explains the current status of the machine. Therefore analysts can better understand why the machine misclassifies objects. Moreover, the machine recommends the labels of classes for each attribute to ease the labeling burden. Finally, humans can steer the model by modifying the labels interactively, and the machine adjusts its recommendations. The visual explainable active learning approach improves humans' efficiency of building zero-shot classification models interactively, compared with the method without guidance. We justify our results with user studies using the standard benchmarks for zero-shot classification.",
                "AuthorNamesDeduped": "Shichao Jia;Zeyu Li 0003;Nuo Chen;Jiawan Zhang",
                "AuthorNames": "Shichao Jia;Zeyu Li;Nuo Chen;Jiawan Zhang",
                "AuthorAffiliation": "College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China and Tianjin cultural heritage conservation and inheritance engineering technology center and Key Research Center for Surface Monitoring and Analysis of Relics, State Administration of Cultural Heritage, China",
                "InternalReferences": "0.1109/tvcg.2017.2744818;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865047;10.1109/tvcg.2012.260;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2017.2744378;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2019.2934267;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2015.2467191;10.1109/vast47406.2019.8986943;10.1109/vast.2012.6400486",
                "AuthorKeywords": "Active Learning,Explainable Artificial Intelligence,Human-AI Teaming,Mixed-Initiative Visual Analytics",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1559,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 284,
                "i": [
                    284
                ]
            }
        },
        {
            "name": "Christoph Schulz 0001",
            "value": 53,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 826,
            "x": -287.4016976634412,
            "y": -7.089723561037089,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Bubble Treemaps for Uncertainty Visualization",
                "DOI": "10.1109/tvcg.2017.2743959",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743959",
                "FirstPage": 719,
                "LastPage": 728,
                "PaperType": "J",
                "Abstract": "We present a novel type of circular treemap, where we intentionally allocate extra space for additional visual variables. With this extended visual design space, we encode hierarchically structured data along with their uncertainties in a combined diagram. We introduce a hierarchical and force-based circle-packing algorithm to compute Bubble Treemaps, where each node is visualized using nested contour arcs. Bubble Treemaps do not require any color or shading, which offers additional design choices. We explore uncertainty visualization as an application of our treemaps using standard error and Monte Carlo-based statistical models. To this end, we discuss how uncertainty propagates within hierarchies. Furthermore, we show the effectiveness of our visualization using three different examples: the package structure of Flare, the S&amp;P 500 index, and the US consumer expenditure survey.",
                "AuthorNamesDeduped": "Jochen Görtler;Christoph Schulz 0001;Daniel Weiskopf;Oliver Deussen",
                "AuthorNames": "Jochen Görtler;Christoph Schulz;Daniel Weiskopf;Oliver Deussen",
                "AuthorAffiliation": "University of Konstanz;VISUS, University of Stuttgart;VISUS, University of Stuttgart;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2012.220;10.1109/tvcg.2009.122;10.1109/vast.2009.5332611;10.1109/tvcg.2014.2346298;10.1109/tvcg.2015.2467752;10.1109/visual.1991.175815;10.1109/tvcg.2013.180;10.1109/tvcg.2012.279;10.1109/tvcg.2010.210;10.1109/tvcg.2016.2598919;10.1109/tvcg.2015.2467992;10.1109/tvcg.2013.232",
                "AuthorKeywords": "Uncertainty visualization,hierarchy visualization,treemaps,tree layout,circle packing,contours",
                "AminerCitationCount": 70,
                "CitationCountCrossRef": 48,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2475,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 787,
                "i": [
                    787
                ]
            }
        },
        {
            "name": "Nafiul Nipu",
            "value": 22,
            "numPapers": 37,
            "cluster": "0",
            "visible": 1,
            "index": 827,
            "x": 216.84116827989723,
            "y": -189.02356397816993,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Negar Naghashzadeh",
            "value": 0,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 828,
            "x": -32.227801482114295,
            "y": 286.026867289822,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Analysis and Detection of Contrails in Aircraft Engine Simulations",
                "DOI": "10.1109/tvcg.2022.3209356",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209356",
                "FirstPage": 798,
                "LastPage": 808,
                "PaperType": "J",
                "Abstract": "Contrails are condensation trails generated from emitted particles by aircraft engines, which perturb Earth's radiation budget. Simulation modeling is used to interpret the formation and development of contrails. These simulations are computationally intensive and rely on high-performance computing solutions, and the contrail structures are not well defined. We propose a visual computing system to assist in defining contrails and their characteristics, as well as in the analysis of parameters for computer-generated aircraft engine simulations. The back-end of our system leverages a contrail-formation criterion and clustering methods to detect contrails' shape and evolution and identify similar simulation runs. The front-end system helps analyze contrails and their parameters across multiple simulation runs. The evaluation with domain experts shows this approach successfully aids in contrail data investigation.",
                "AuthorNamesDeduped": "Nafiul Nipu;Carla Floricel;Negar Naghashzadeh;Roberto Paoli;G. Elisabeta Marai",
                "AuthorNames": "Nafiul Nipu;Carla Floricel;Negar Naghashzadeh;Roberto Paoli;G. Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598869;10.1109/scivis.2015.7429487;10.1109/tvcg.2011.185;10.1109/tvcg.2010.190;10.1109/tvcg.2014.2346448;10.1109/tvcg.2015.2467204;10.1109/tvcg.2016.2598868;10.1109/tvcg.2021.3114810;10.1109/tvcg.2011.203;10.1109/tvcg.2009.141;10.1109/tvcg.2017.2745178;10.1109/tvcg.2015.2467431;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/vast.2006.261451;10.1109/tvcg.2014.2346455;10.1109/tvcg.2014.2346755;10.1109/tvcg.2010.181;10.1109/vast.2015.7347635;10.1109/tvcg.2016.2598830;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Scalar Field Data,Physical & Environmental Sciences,Mathematics,Feature Detection,Tracking & Transformation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 96,
                "DownloadsXplore": 730,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 241,
                "i": [
                    241
                ]
            }
        },
        {
            "name": "Roberto Paoli",
            "value": 0,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 829,
            "x": -169.5468239563549,
            "y": -232.81725555961864,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Visual Analysis and Detection of Contrails in Aircraft Engine Simulations",
                "DOI": "10.1109/tvcg.2022.3209356",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209356",
                "FirstPage": 798,
                "LastPage": 808,
                "PaperType": "J",
                "Abstract": "Contrails are condensation trails generated from emitted particles by aircraft engines, which perturb Earth's radiation budget. Simulation modeling is used to interpret the formation and development of contrails. These simulations are computationally intensive and rely on high-performance computing solutions, and the contrail structures are not well defined. We propose a visual computing system to assist in defining contrails and their characteristics, as well as in the analysis of parameters for computer-generated aircraft engine simulations. The back-end of our system leverages a contrail-formation criterion and clustering methods to detect contrails' shape and evolution and identify similar simulation runs. The front-end system helps analyze contrails and their parameters across multiple simulation runs. The evaluation with domain experts shows this approach successfully aids in contrail data investigation.",
                "AuthorNamesDeduped": "Nafiul Nipu;Carla Floricel;Negar Naghashzadeh;Roberto Paoli;G. Elisabeta Marai",
                "AuthorNames": "Nafiul Nipu;Carla Floricel;Negar Naghashzadeh;Roberto Paoli;G. Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA;University of Illinois at Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598869;10.1109/scivis.2015.7429487;10.1109/tvcg.2011.185;10.1109/tvcg.2010.190;10.1109/tvcg.2014.2346448;10.1109/tvcg.2015.2467204;10.1109/tvcg.2016.2598868;10.1109/tvcg.2021.3114810;10.1109/tvcg.2011.203;10.1109/tvcg.2009.141;10.1109/tvcg.2017.2745178;10.1109/tvcg.2015.2467431;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/vast.2006.261451;10.1109/tvcg.2014.2346455;10.1109/tvcg.2014.2346755;10.1109/tvcg.2010.181;10.1109/vast.2015.7347635;10.1109/tvcg.2016.2598830;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Scalar Field Data,Physical & Environmental Sciences,Mathematics,Feature Detection,Tracking & Transformation",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 96,
                "DownloadsXplore": 730,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 241,
                "i": [
                    241
                ]
            }
        },
        {
            "name": "Çagatay Demiralp",
            "value": 137,
            "numPapers": 28,
            "cluster": "0",
            "visible": 1,
            "index": 830,
            "x": 282.4544498925059,
            "y": 57.17939957643705,
            "vy": 0,
            "vx": 0,
            "r": 1.1577432354634427,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Clustrophile 2: Guided Visual Clustering Analysis",
                "DOI": "10.1109/tvcg.2018.2864477",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864477",
                "FirstPage": 267,
                "LastPage": 276,
                "PaperType": "J",
                "Abstract": "Data clustering is a common unsupervised learning method frequently used in exploratory data analysis. However, identifying relevant structures in unlabeled, high-dimensional data is nontrivial, requiring iterative experimentation with clustering parameters as well as data features and instances. The number of possible clusterings for a typical dataset is vast, and navigating in this vast space is also challenging. The absence of ground-truth labels makes it impossible to define an optimal solution, thus requiring user judgment to establish what can be considered a satisfiable clustering result. Data scientists need adequate interactive tools to effectively explore and navigate the large clustering space so as to improve the effectiveness of exploratory clustering analysis. We introduce Clustrophile 2, a new interactive tool for guided clustering analysis. Clustrophile 2 guides users in clustering-based exploratory analysis, adapts user feedback to improve user guidance, facilitates the interpretation of clusters, and helps quickly reason about differences between clusterings. To this end, Clustrophile 2 contributes a novel feature, the Clustering Tour, to help users choose clustering parameters and assess the quality of different clustering results in relation to current analysis goals and user expectations. We evaluate Clustrophile 2 through a user study with 12 data scientists, who used our tool to explore and interpret sub-cohorts in a dataset of Parkinson's disease patients. Results suggest that Clustrophile 2 improves the speed and effectiveness of exploratory clustering analysis for both experts and non-experts.",
                "AuthorNamesDeduped": "Marco Cavallo;Çagatay Demiralp",
                "AuthorNames": "Marco Cavallo;Çağatay Demiralp",
                "AuthorAffiliation": "IBM Research;MIT CSAIL & Fitnescity Labs",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2013.119;10.1109/tvcg.2012.219;10.1109/tvcg.2017.2745085;10.1109/tvcg.2010.138;10.1109/vast.2007.4388999;10.1109/tvcg.2012.207;10.1109/tvcg.2017.2744805;10.1109/vast.2008.4677350;10.1109/infvis.2004.3;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Clustering tour,Guided data analysis,Unsupervised learning,Exploratory data analysis,Interactive clustering analysis,Interpretability,Explainability,Visual data exploration recommendation,Dimensionality reduction,What-if analysis,Clustrophile",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 1631,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 737,
                "i": [
                    737
                ]
            }
        },
        {
            "name": "Joachim Giesen",
            "value": 72,
            "numPapers": 29,
            "cluster": "6",
            "visible": 1,
            "index": 831,
            "x": -247.0458492915889,
            "y": 148.72238684138156,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "GRay: Ray Casting for Visualization and Interactive Data Exploration of Gaussian Mixture Models",
                "DOI": "10.1109/tvcg.2022.3209374",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209374",
                "FirstPage": 526,
                "LastPage": 536,
                "PaperType": "J",
                "Abstract": "The Gaussian mixture model (GMM) describes the distribution of random variables from several different populations. GMMs have widespread applications in probability theory, statistics, machine learning for unsupervised cluster analysis and topic modeling, as well as in deep learning pipelines. So far, few efforts have been made to explore the underlying point distribution in combination with the GMMs, in particular when the data becomes high-dimensional and when the GMMs are composed of many Gaussians. We present an analysis tool comprising various GPU-based visualization techniques to explore such complex GMMs. To facilitate the exploration of high-dimensional data, we provide a novel navigation system to analyze the underlying data. Instead of projecting the data to 2D, we utilize interactive 3D views to better support users in understanding the spatial arrangements of the Gaussian distributions. The interactive system is composed of two parts: (1) raycasting-based views that visualize cluster memberships, spatial arrangements, and support the discovery of new modes. (2) overview visualizations that enable the comparison of Gaussians with each other, as well as small multiples of different choices of basis vectors. Users are supported in their exploration with customization tools and smooth camera navigations. Our tool was developed and assessed by five domain experts, and its usefulness was evaluated with 23 participants. To demonstrate the effectiveness, we identify interesting features in several data sets.",
                "AuthorNamesDeduped": "Kai Lawonn;Monique Meuschke;Pepe Eulzer;Matthias Mitterreiter;Joachim Giesen;Tobias Günther",
                "AuthorNames": "Kai Lawonn;Monique Meuschke;Pepe Eulzer;Matthias Mitterreiter;Joachim Giesen;Tobias Günther",
                "AuthorAffiliation": "Friedrich Schiller University of Jena, Germany;Otto von Guericke University of Magdeburg, Germany;Friedrich Schiller University of Jena, Germany;Friedrich Schiller University of Jena, Germany;Friedrich Schiller University of Jena, Germany;Friedrich-Alexander-Universitä t Erlangen-Nürnberg, Germany",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.229;10.1109/tvcg.2011.201;10.1109/tvcg.2008.153;10.1109/infvis.2005.1532141;10.1109/vast.2010.5652484;10.1109/tvcg.2013.160;10.1109/vast.2010.5652398;10.1109/tvcg.2020.3030379;10.1109/visual.2000.885740;10.1109/infvis.2004.3;10.1109/vast.2009.5332628;10.1109/tvcg.2007.70589;10.1109/tvcg.2009.179",
                "AuthorKeywords": "Scientific visualization,Gaussian mixture models,ray casting,volume visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 493,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 244,
                "i": [
                    244
                ]
            }
        },
        {
            "name": "Jaemin Jo",
            "value": 112,
            "numPapers": 53,
            "cluster": "5",
            "visible": 1,
            "index": 832,
            "x": 81.75250723401838,
            "y": -276.7065730353216,
            "vy": 0,
            "vx": 0,
            "r": 1.128957973517559,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Measuring and Explaining the Inter-Cluster Reliability of Multidimensional Projections",
                "DOI": "10.1109/tvcg.2021.3114833",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114833",
                "FirstPage": 551,
                "LastPage": 561,
                "PaperType": "J",
                "Abstract": "We propose Steadiness and Cohesiveness, two novel metrics to measure the inter-cluster reliability of multidimensional projection (MDP), specifically how well the inter-cluster structures are preserved between the original high-dimensional space and the low-dimensional projection space. Measuring inter-cluster reliability is crucial as it directly affects how well inter-cluster tasks (e.g., identifying cluster relationships in the original space from a projected view) can be conducted; however, despite the importance of inter-cluster tasks, we found that previous metrics, such as Trustworthiness and Continuity, fail to measure inter-cluster reliability. Our metrics consider two aspects of the inter-cluster reliability: Steadiness measures the extent to which clusters in the projected space form clusters in the original space, and Cohesiveness measures the opposite. They extract random clusters with arbitrary shapes and positions in one space and evaluate how much the clusters are stretched or dispersed in the other space. Furthermore, our metrics can quantify pointwise distortions, allowing for the visualization of inter-cluster reliability in a projection, which we call a reliability map. Through quantitative experiments, we verify that our metrics precisely capture the distortions that harm inter-cluster reliability while previous metrics have difficulty capturing the distortions. A case study also demonstrates that our metrics and the reliability map 1) support users in selecting the proper projection techniques or hyperparameters and 2) prevent misinterpretation while performing inter-cluster tasks, thus allow an adequate identification of inter-cluster structure.",
                "AuthorNamesDeduped": "Hyeon Jeon;Hyung-Kwon Ko;Jaemin Jo;Youngtaek Kim;Jinwook Seo",
                "AuthorNames": "Hyeon Jeon;Hyung-Kwon Ko;Jaemin Jo;Youngtaek Kim;Jinwook Seo",
                "AuthorAffiliation": "Seoul National University, South Korea;Seoul National University, South Korea;Sungkyunkwan University, South Korea;Seoul National University, South Korea;Seoul National University, South Korea",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/vast.2010.5652443;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/vast.2007.4388999;10.1109/tvcg.2010.207;10.1109/tvcg.2016.2598495;10.1109/tvcg.2013.153;10.1109/tvcg.2017.2744098",
                "AuthorKeywords": "Multidimensional projections,MDP distortions,Inter-cluster tasks,Inter-cluster reliability,Distortion metrics",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 921,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 315,
                "i": [
                    315
                ]
            }
        },
        {
            "name": "Alfred Inselberg",
            "value": 357,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 833,
            "x": 126.7068816434944,
            "y": 259.41350416696025,
            "vy": 0,
            "vx": 0,
            "r": 1.4110535405872193,
            "node": {
                "Conference": "InfoVis",
                "Year": 1997,
                "Title": "Multidimensional detective",
                "DOI": "10.1109/infvis.1997.636793",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1997.636793",
                "FirstPage": 100,
                "LastPage": 107,
                "PaperType": "C",
                "Abstract": "The display of multivariate datasets in parallel coordinates, transforms the search for relations among the variables into a 2-D pattern recognition problem. This is the basis for the application to visual data mining. The knowledge discovery process together with some general guidelines are illustrated on a dataset from the production of a VLSI chip. The special strength of parallel coordinates is in modeling relations. As an example, a simplified economic model is constructed with data from various economic sectors of a real country. The visual model shows the interelationship and dependencies between the sectors, circumstances where there is competition for the same resource, and feasible economic policies. Interactively, the model can be used to do trade-off analyses, discover sensitivities, do approximate optimization, monitor (as in a process) and provide decision support.",
                "AuthorNamesDeduped": "Alfred Inselberg",
                "AuthorNames": "A. Inselberg",
                "AuthorAffiliation": "Computer Science Department, San Diego SuperComputing Center, Tel-Aviv University, Israel",
                "InternalReferences": "0.1109/visual.1990.146402;10.1109/visual.1994.346302",
                "AuthorKeywords": null,
                "AminerCitationCount": null,
                "CitationCountCrossRef": 79,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 1043,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3218,
                "i": [
                    3218
                ]
            }
        },
        {
            "name": "Bernard Dimsdale",
            "value": 285,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 834,
            "x": -268.82213149313117,
            "y": -105.75756057837998,
            "vy": 0,
            "vx": 0,
            "r": 1.3281519861830744,
            "node": {
                "Conference": "Vis",
                "Year": 1990,
                "Title": "Parallel coordinates: a tool for visualizing multi-dimensional geometry",
                "DOI": "10.1109/visual.1990.146402",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1990.146402",
                "FirstPage": 361,
                "LastPage": 378,
                "PaperType": "C",
                "Abstract": "A methodology for visualizing analytic and synthetic geometry in R/sup N/ is presented. It is based on a system of parallel coordinates which induces a nonprojective mapping between N-dimensional and two-dimensional sets. Hypersurfaces are represented by their planar images which have some geometrical properties analogous to the properties of the hypersurface that they represent. A point from to line duality when N=2 generalizes to lines and hyperplanes enabling the representation of polyhedra in R/sup N/. The representation of a class of convex and non-convex hypersurfaces is discussed, together with an algorithm for constructing and displaying any interior point. The display shows some local properties of the hypersurface and provides information on the point's proximity to the boundary. Applications are discussed.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Alfred Inselberg;Bernard Dimsdale",
                "AuthorNames": "A. Inselberg;B. Dimsdale",
                "AuthorAffiliation": "IBM Scientific Center, Los Angeles, CA, USA and Department of Computer Sciences, University of Southern California, Los Angeles, CA, USA;IBM Scientific Center, Los Angeles, CA, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 1746,
                "CitationCountCrossRef": 407,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1213,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3700,
                "i": [
                    3700
                ]
            }
        },
        {
            "name": "Anjul Kumar Tyagi",
            "value": 10,
            "numPapers": 35,
            "cluster": "11",
            "visible": 1,
            "index": 835,
            "x": 269.8208002135281,
            "y": -103.66646406688787,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis",
                "DOI": "10.1109/tvcg.2022.3209361",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209361",
                "FirstPage": 299,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "The success of DL can be attributed to hours of parameter and architecture tuning by human experts. Neural Architecture Search (NAS) techniques aim to solve this problem by automating the search procedure for DNN architectures making it possible for non-experts to work with DNNs. Specifically, One-shot NAS techniques have recently gained popularity as they are known to reduce the search time for NAS techniques. One-Shot NAS works by training a large template network through parameter sharing which includes all the candidate NNs. This is followed by applying a procedure to rank its components through evaluating the possible candidate architectures chosen randomly. However, as these search models become increasingly powerful and diverse, they become harder to understand. Consequently, even though the search results work well, it is hard to identify search biases and control the search progression, hence a need for explainability and human-in-the-loop (HIL) One-Shot NAS. To alleviate these problems, we present NAS-Navigator, a visual analytics (VA) system aiming to solve three problems with One-Shot NAS; explainability, HIL design, and performance improvements compared to existing state-of-the-art (SOTA) techniques. NAS-Navigator gives full control of NAS back in the hands of the users while still keeping the perks of automated search, thus assisting non-expert users. Analysts can use their domain knowledge aided by cues from the interface to guide the search. Evaluation results confirm the performance of our improved One-Shot NAS algorithm is comparable to other SOTA techniques. While adding Visual Analytics (VA) using NAS-Navigator shows further improvements in search time and performance. We designed our interface in collaboration with several deep learning researchers and evaluated NAS-Navigator through a control experiment and expert interviews.",
                "AuthorNamesDeduped": "Anjul Kumar Tyagi;Cong Xie;Klaus Mueller 0001",
                "AuthorNames": "Anjul Tyagi;Cong Xie;Klaus Mueller",
                "AuthorAffiliation": "Computer Science Department, Visual Analytics and Imaging Lab, Stony Brook University, New York, USA;Computer Science Department, Visual Analytics and Imaging Lab, Stony Brook University, New York, USA;Computer Science Department, Visual Analytics and Imaging Lab, Stony Brook University, New York, USA",
                "InternalReferences": "0.1109/vast.2012.6400490;10.1109/tvcg.2019.2934261;10.1109/tvcg.2018.2864477;10.1109/tvcg.2017.2745085;10.1109/tvcg.2017.2745158;10.1109/tvcg.2013.125;10.1109/vast.2007.4388999;10.1109/tvcg.2017.2744805;10.1109/vast47406.2019.8986923;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "Deep Learning,Neural Network Architecture Search,Visual Analytics,Explainability",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 391,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 248,
                "i": [
                    248
                ]
            }
        },
        {
            "name": "Dirk J. Lehmann",
            "value": 140,
            "numPapers": 42,
            "cluster": "11",
            "visible": 1,
            "index": 836,
            "x": -129.00890579462924,
            "y": 258.8565282655094,
            "vy": 0,
            "vx": 0,
            "r": 1.1611974668969487,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Optimal Sets of Projections of High-Dimensional Data",
                "DOI": "10.1109/tvcg.2015.2467132",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467132",
                "FirstPage": 609,
                "LastPage": 618,
                "PaperType": "J",
                "Abstract": "Finding good projections of n-dimensional datasets into a 2D visualization domain is one of the most important problems in Information Visualization. Users are interested in getting maximal insight into the data by exploring a minimal number of projections. However, if the number is too small or improper projections are used, then important data patterns might be overlooked. We propose a data-driven approach to find minimal sets of projections that uniquely show certain data patterns. For this we introduce a dissimilarity measure of data projections that discards affine transformations of projections and prevents repetitions of the same data patterns. Based on this, we provide complete data tours of at most n/2 projections. Furthermore, we propose optimal paths of projection matrices for an interactive data exploration. We illustrate our technique with a set of state-of-the-art real high-dimensional benchmark datasets.",
                "AuthorNamesDeduped": "Dirk J. Lehmann;Holger Theisel",
                "AuthorNames": "Dirk J. Lehmann;Holger Theisel",
                "AuthorAffiliation": "University of Magdeburg;University of Magdeburg",
                "InternalReferences": "0.1109/vast.2010.5652433;10.1109/vast.2011.6102437;10.1109/tvcg.2011.229;10.1109/visual.1997.663916;10.1109/tvcg.2011.220;10.1109/tvcg.2013.182;10.1109/tvcg.2010.207;10.1109/vast.2006.261423;10.1109/infvis.2005.1532142",
                "AuthorKeywords": "Multivariate Projections, Star Coordinates, Radial Visualization, High-dimensional Data",
                "AminerCitationCount": 50,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 1304,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1025,
                "i": [
                    1025
                ]
            }
        },
        {
            "name": "Tyler Estro",
            "value": 8,
            "numPapers": 26,
            "cluster": "11",
            "visible": 1,
            "index": 837,
            "x": -79.77556174830022,
            "y": -278.18314066050647,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "PC-Expo: A Metrics-Based Interactive Axes Reordering Method for Parallel Coordinate Displays",
                "DOI": "10.1109/tvcg.2022.3209392",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209392",
                "FirstPage": 712,
                "LastPage": 722,
                "PaperType": "J",
                "Abstract": "Parallel coordinate plots (PCPs) have been widely used for high-dimensional (HD) data storytelling because they allow for presenting a large number of dimensions without distortions. The axes ordering in PCP presents a particular story from the data based on the user perception of PCP polylines. Existing works focus on directly optimizing for PCP axes ordering based on some common analysis tasks like clustering, neighborhood, and correlation. However, direct optimization for PCP axes based on these common properties is restrictive because it does not account for multiple properties occurring between the axes, and for local properties that occur in small regions in the data. Also, many of these techniques do not support the human-in-the-loop (HIL) paradigm, which is crucial (i) for explainability and (ii) in cases where no single reordering scheme fits the users' goals. To alleviate these problems, we present PC-Expo, a real-time visual analytics framework for all-in-one PCP line pattern detection and axes reordering. We studied the connection of line patterns in PCPs with different data analysis tasks and datasets. PC-Expo expands prior work on PCP axes reordering by developing real-time, local detection schemes for the 12 most common analysis tasks (properties). Users can choose the story they want to present with PCPs by optimizing directly over their choice of properties. These properties can be ranked, or combined using individual weights, creating a custom optimization scheme for axes reordering. Users can control the granularity at which they want to work with their detection scheme in the data, allowing exploration of local regions. PC-Expo also supports HIL axes reordering via local-property visualization, which shows the regions of granular activity for every axis pair. Local-property visualization is helpful for PCP axes reordering based on multiple properties, when no single reordering scheme fits the user goals. A comprehensive evaluation was done with real users and diverse datasets confirm the efficacy of PC-Expo in data storytelling with PCPs.",
                "AuthorNamesDeduped": "Anjul Kumar Tyagi;Tyler Estro;Geoff Kuenning;Erez Zadok;Klaus Mueller 0001",
                "AuthorNames": "Anjul Tyagi;Tyler Estro;Geoff Kuenning;Erez Zadok;Klaus Mueller",
                "AuthorAffiliation": "Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Harvey Mudd College, Claremont, California, USA;Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Stony Brook University, New York, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2010.184;10.1109/tvcg.2006.138;10.1109/tvcg.2007.70535;10.1109/visual.1997.663916;10.1109/visual.1990.146402;10.1109/tvcg.2015.2466992;10.1109/infvis.2005.1532138;10.1109/tvcg.2015.2467132;10.1109/tvcg.2009.111;10.1109/infvis.2004.15;10.1109/vast47406.2019.8986923;10.1109/infvis.2005.1532142",
                "AuthorKeywords": "High dimensional data visualization,Parallel Coordinates Chart,Data Storytelling,Data Analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 247,
                "i": [
                    247
                ]
            }
        },
        {
            "name": "Erez Zadok",
            "value": 8,
            "numPapers": 26,
            "cluster": "11",
            "visible": 1,
            "index": 838,
            "x": 246.88128796000933,
            "y": 151.3262358456291,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "PC-Expo: A Metrics-Based Interactive Axes Reordering Method for Parallel Coordinate Displays",
                "DOI": "10.1109/tvcg.2022.3209392",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209392",
                "FirstPage": 712,
                "LastPage": 722,
                "PaperType": "J",
                "Abstract": "Parallel coordinate plots (PCPs) have been widely used for high-dimensional (HD) data storytelling because they allow for presenting a large number of dimensions without distortions. The axes ordering in PCP presents a particular story from the data based on the user perception of PCP polylines. Existing works focus on directly optimizing for PCP axes ordering based on some common analysis tasks like clustering, neighborhood, and correlation. However, direct optimization for PCP axes based on these common properties is restrictive because it does not account for multiple properties occurring between the axes, and for local properties that occur in small regions in the data. Also, many of these techniques do not support the human-in-the-loop (HIL) paradigm, which is crucial (i) for explainability and (ii) in cases where no single reordering scheme fits the users' goals. To alleviate these problems, we present PC-Expo, a real-time visual analytics framework for all-in-one PCP line pattern detection and axes reordering. We studied the connection of line patterns in PCPs with different data analysis tasks and datasets. PC-Expo expands prior work on PCP axes reordering by developing real-time, local detection schemes for the 12 most common analysis tasks (properties). Users can choose the story they want to present with PCPs by optimizing directly over their choice of properties. These properties can be ranked, or combined using individual weights, creating a custom optimization scheme for axes reordering. Users can control the granularity at which they want to work with their detection scheme in the data, allowing exploration of local regions. PC-Expo also supports HIL axes reordering via local-property visualization, which shows the regions of granular activity for every axis pair. Local-property visualization is helpful for PCP axes reordering based on multiple properties, when no single reordering scheme fits the user goals. A comprehensive evaluation was done with real users and diverse datasets confirm the efficacy of PC-Expo in data storytelling with PCPs.",
                "AuthorNamesDeduped": "Anjul Kumar Tyagi;Tyler Estro;Geoff Kuenning;Erez Zadok;Klaus Mueller 0001",
                "AuthorNames": "Anjul Tyagi;Tyler Estro;Geoff Kuenning;Erez Zadok;Klaus Mueller",
                "AuthorAffiliation": "Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Harvey Mudd College, Claremont, California, USA;Computer Science Department, Stony Brook University, New York, USA;Computer Science Department, Stony Brook University, New York, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2010.184;10.1109/tvcg.2006.138;10.1109/tvcg.2007.70535;10.1109/visual.1997.663916;10.1109/visual.1990.146402;10.1109/tvcg.2015.2466992;10.1109/infvis.2005.1532138;10.1109/tvcg.2015.2467132;10.1109/tvcg.2009.111;10.1109/infvis.2004.15;10.1109/vast47406.2019.8986923;10.1109/infvis.2005.1532142",
                "AuthorKeywords": "High dimensional data visualization,Parallel Coordinates Chart,Data Storytelling,Data Analysis",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 247,
                "i": [
                    247
                ]
            }
        },
        {
            "name": "Alexander Bock 0002",
            "value": 69,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 839,
            "x": -284.43143764442226,
            "y": 55.215552877130115,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Visual Verification of Space Weather Ensemble Simulations",
                "DOI": "10.1109/scivis.2015.7429487",
                "Link": "http://dx.doi.org/10.1109/SciVis.2015.7429487",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.",
                "AuthorNamesDeduped": "Alexander Bock 0002;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorNames": "Alexander Bock;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;Linköping University;Ulm University",
                "InternalReferences": "0.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Visual Verification, Space Weather, Coronal Mass Ejections, Ensemble",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 446,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1057,
                "i": [
                    1057
                ]
            }
        },
        {
            "name": "Asher Pembroke",
            "value": 37,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 840,
            "x": 172.5359894517189,
            "y": -232.98354522136611,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Visual Verification of Space Weather Ensemble Simulations",
                "DOI": "10.1109/scivis.2015.7429487",
                "Link": "http://dx.doi.org/10.1109/SciVis.2015.7429487",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.",
                "AuthorNamesDeduped": "Alexander Bock 0002;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorNames": "Alexander Bock;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;Linköping University;Ulm University",
                "InternalReferences": "0.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Visual Verification, Space Weather, Coronal Mass Ejections, Ensemble",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 446,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1057,
                "i": [
                    1057
                ]
            }
        },
        {
            "name": "M. Leila Mays",
            "value": 37,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 841,
            "x": 30.173388214239313,
            "y": 288.51268021297227,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Visual Verification of Space Weather Ensemble Simulations",
                "DOI": "10.1109/scivis.2015.7429487",
                "Link": "http://dx.doi.org/10.1109/SciVis.2015.7429487",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.",
                "AuthorNamesDeduped": "Alexander Bock 0002;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorNames": "Alexander Bock;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;Linköping University;Ulm University",
                "InternalReferences": "0.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Visual Verification, Space Weather, Coronal Mass Ejections, Ensemble",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 446,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1057,
                "i": [
                    1057
                ]
            }
        },
        {
            "name": "Lutz Rastaetter",
            "value": 37,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 842,
            "x": -217.26541185195563,
            "y": -192.47270147426124,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Visual Verification of Space Weather Ensemble Simulations",
                "DOI": "10.1109/scivis.2015.7429487",
                "Link": "http://dx.doi.org/10.1109/SciVis.2015.7429487",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.",
                "AuthorNamesDeduped": "Alexander Bock 0002;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorNames": "Alexander Bock;Asher Pembroke;M. Leila Mays;Lutz Rastaetter;Timo Ropinski;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;NASA Goddard Space Flight Center;Linköping University;Ulm University",
                "InternalReferences": "0.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Visual Verification, Space Weather, Coronal Mass Ejections, Ensemble",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 446,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1057,
                "i": [
                    1057
                ]
            }
        },
        {
            "name": "Xiaotong Liu",
            "value": 126,
            "numPapers": 45,
            "cluster": "6",
            "visible": 1,
            "index": 843,
            "x": 290.3903799776217,
            "y": -4.840166986013646,
            "vy": 0,
            "vx": 0,
            "r": 1.145077720207254,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots",
                "DOI": "10.1109/tvcg.2016.2598830",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598830",
                "FirstPage": 81,
                "LastPage": 90,
                "PaperType": "J",
                "Abstract": "Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.",
                "AuthorNamesDeduped": "Junpeng Wang;Xiaotong Liu;Han-Wei Shen;Guang Lin",
                "AuthorNames": "Junpeng Wang;Xiaotong Liu;Han-Wei Shen;Guang Lin",
                "AuthorAffiliation": "The Ohio State University;The Ohio State University;The Ohio State University;Purdue University",
                "InternalReferences": "0.1109/tvcg.2010.181;10.1109/tvcg.2008.153;10.1109/infvis.1998.729559;10.1109/tvcg.2012.237;10.1109/infvis.2004.68;10.1109/tvcg.2014.2346755;10.1109/scivis.2015.7429487;10.1109/visual.1999.809866;10.1109/tvcg.2013.122;10.1109/infvis.2004.15;10.1109/tvcg.2015.2467431;10.1109/tvcg.2015.2468093;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346321",
                "AuthorKeywords": "Parallel coordinates plots;parameter analysis;multi-resolution climate ensembles",
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 964,
                "i": [
                    964
                ]
            }
        },
        {
            "name": "Guang Lin",
            "value": 90,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 844,
            "x": -210.9802941924371,
            "y": 199.84322721191398,
            "vy": 0,
            "vx": 0,
            "r": 1.1036269430051813,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots",
                "DOI": "10.1109/tvcg.2016.2598830",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598830",
                "FirstPage": 81,
                "LastPage": 90,
                "PaperType": "J",
                "Abstract": "Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.",
                "AuthorNamesDeduped": "Junpeng Wang;Xiaotong Liu;Han-Wei Shen;Guang Lin",
                "AuthorNames": "Junpeng Wang;Xiaotong Liu;Han-Wei Shen;Guang Lin",
                "AuthorAffiliation": "The Ohio State University;The Ohio State University;The Ohio State University;Purdue University",
                "InternalReferences": "0.1109/tvcg.2010.181;10.1109/tvcg.2008.153;10.1109/infvis.1998.729559;10.1109/tvcg.2012.237;10.1109/infvis.2004.68;10.1109/tvcg.2014.2346755;10.1109/scivis.2015.7429487;10.1109/visual.1999.809866;10.1109/tvcg.2013.122;10.1109/infvis.2004.15;10.1109/tvcg.2015.2467431;10.1109/tvcg.2015.2468093;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346321",
                "AuthorKeywords": "Parallel coordinates plots;parameter analysis;multi-resolution climate ensembles",
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 964,
                "i": [
                    964
                ]
            }
        },
        {
            "name": "Eric D. Ragan",
            "value": 122,
            "numPapers": 46,
            "cluster": "5",
            "visible": 1,
            "index": 845,
            "x": 20.59032249004849,
            "y": -290.0448906975536,
            "vy": 0,
            "vx": 0,
            "r": 1.1404720782959126,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Characterizing Provenance in Visualization and Data Analysis: An Organizational Framework of Provenance Types and Purposes",
                "DOI": "10.1109/tvcg.2015.2467551",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467551",
                "FirstPage": 31,
                "LastPage": 40,
                "PaperType": "J",
                "Abstract": "While the primary goal of visual analytics research is to improve the quality of insights and findings, a substantial amount of research in provenance has focused on the history of changes and advances throughout the analysis process. The term, provenance, has been used in a variety of ways to describe different types of records and histories related to visualization. The existing body of provenance research has grown to a point where the consolidation of design knowledge requires cross-referencing a variety of projects and studies spanning multiple domain areas. We present an organizational framework of the different types of provenance information and purposes for why they are desired in the field of visual analytics. Our organization is intended to serve as a framework to help researchers specify types of provenance and coordinate design knowledge across projects. We also discuss the relationships between these factors and the methods used to capture provenance information. In addition, our organization can be used to guide the selection of evaluation methodology and the comparison of study outcomes in provenance research.",
                "AuthorNamesDeduped": "Eric D. Ragan;Alex Endert;Jibonananda Sanyal;Jian Chen 0006",
                "AuthorNames": "Eric D. Ragan;Alex Endert;Jibonananda Sanyal;Jian Chen",
                "AuthorAffiliation": "Texas A&M University;Georgia Tech;Oak Ridge National Laboratory;University of Maryland, Baltimore County",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/visual.2005.1532788;10.1109/tvcg.2013.155;10.1109/visual.1993.398857;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2010.5652932;10.1109/vast.2008.4677365;10.1109/tvcg.2008.137;10.1109/tvcg.2013.126;10.1109/vast.2009.5333020;10.1109/vast.2010.5653598;10.1109/tvcg.2012.271;10.1109/tvcg.2014.2346573;10.1109/vast.2008.4677366;10.1109/tvcg.2013.130;10.1109/tvcg.2010.181;10.1109/tvcg.2010.179;10.1109/visual.1990.146375",
                "AuthorKeywords": "Provenance, Analytic provenance, Visual analytics, Framework, Visualization, Conceptual model",
                "AminerCitationCount": 221,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 97,
                "DownloadsXplore": 2752,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1099,
                "i": [
                    1099
                ]
            }
        },
        {
            "name": "Songheng Zhang",
            "value": 25,
            "numPapers": 10,
            "cluster": "1",
            "visible": 1,
            "index": 846,
            "x": 180.8466973972129,
            "y": 227.9132993936969,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "KG4Vis: A Knowledge Graph-Based Approach for Visualization Recommendation",
                "DOI": "10.1109/tvcg.2021.3114863",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114863",
                "FirstPage": 195,
                "LastPage": 205,
                "PaperType": "J",
                "Abstract": "Visualization recommendation or automatic visualization generation can significantly lower the barriers for general users to rapidly create effective data visualizations, especially for those users without a background in data visualizations. However, existing rule-based approaches require tedious manual specifications of visualization rules by visualization experts. Other machine learning-based approaches often work like black-box and are difficult to understand why a specific visualization is recommended, limiting the wider adoption of these approaches. This paper fills the gap by presenting KG4Vis, a knowledge graph (KG)-based approach for visualization recommendation. It does not require manual specifications of visualization rules and can also guarantee good explainability. Specifically, we propose a framework for building knowledge graphs, consisting of three types of entities (i.e., data features, data columns and visualization design choices) and the relations between them, to model the mapping rules between data and effective visualizations. A TransE-based embedding technique is employed to learn the embeddings of both entities and relations of the knowledge graph from existing dataset-visualization pairs. Such embeddings intrinsically model the desirable visualization rules. Then, given a new dataset, effective visualizations can be inferred from the knowledge graph with semantically meaningful rules. We conducted extensive evaluations to assess the proposed approach, including quantitative comparisons, case studies and expert interviews. The results demonstrate the effectiveness of our approach.",
                "AuthorNamesDeduped": "Haotian Li 0001;Yong Wang 0021;Songheng Zhang;Yangqiu Song;Huamin Qu",
                "AuthorNames": "Haotian Li;Yong Wang;Songheng Zhang;Yangqiu Song;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology and Singapore Management University, Hong Kong;Singapore Management University, Singapore;Singapore Management University, Singapore;Hong Kong University of Science and Technology, Hong Kong;Hong Kong University of Science and Technology, Hong Kong",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2020.3030338;10.1109/tvcg.2019.2934810;10.1109/tvcg.2020.3030469;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2864812;10.1109/tvcg.2018.2865240;10.1109/tvcg.2015.2467091;10.1109/tvcg.2019.2934798;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423",
                "AuthorKeywords": "Data visualization,Visualization recommendation,Knowledge graph",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 48,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2773,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 252,
                "i": [
                    252
                ]
            }
        },
        {
            "name": "Hongyuan Zha",
            "value": 176,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 847,
            "x": -287.4735990402165,
            "y": -45.92308629507432,
            "vy": 0,
            "vx": 0,
            "r": 1.2026482440990214,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Progression Analysis of Event Sequence Data",
                "DOI": "10.1109/tvcg.2018.2864885",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864885",
                "FirstPage": 417,
                "LastPage": 426,
                "PaperType": "J",
                "Abstract": "Event sequence data is common to a broad range of application domains, from security to health care to scholarly communication. This form of data captures information about the progression of events for an individual entity (e.g., a computer network device; a patient; an author) in the form of a series of time-stamped observations. Moreover, each event is associated with an event type (e.g., a computer login attempt, or a hospital discharge). Analyses of event sequence data have been shown to help reveal important temporal patterns, such as clinical paths resulting in improved outcomes, or an understanding of common career trajectories for scholars. Moreover, recent research has demonstrated a variety of techniques designed to overcome methodological challenges such as large volumes of data and high dimensionality. However, the effective identification and analysis of latent stages of progression, which can allow for variation within different but similarly evolving event sequences, remain a significant challenge with important real-world motivations. In this paper, we propose an unsupervised stage analysis algorithm to identify semantically meaningful progression stages as well as the critical events which help define those stages. The algorithm follows three key steps: (1) event representation estimation, (2) event sequence warping and alignment, and (3) sequence segmentation. We also present a novel visualization system, ET<sup>2</sup>, which interactively illustrates the results of the stage analysis algorithm to help reveal evolution patterns across stages. Finally, we report three forms of evaluation for ET<sup>2</sup>: (1) case studies with two real-world datasets, (2) interviews with domain expert users, and (3) a performance evaluation on the progression analysis algorithm and the visualization design.",
                "AuthorNamesDeduped": "Shunan Guo;Zhuochen Jin;David Gotz;Fan Du;Hongyuan Zha;Nan Cao 0001",
                "AuthorNames": "Shunan Guo;Zhuochen Jin;David Gotz;Fan Du;Hongyuan Zha;Nan Cao",
                "AuthorAffiliation": "East China Normal University;iDVX lab, Tongji University;University of North Carolina, Chapel Hill;University of Maryland;East China Normal University;iDVX lab, Tongji University",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2014.2346574;10.1109/tvcg.2009.187;10.1109/tvcg.2014.2346913",
                "AuthorKeywords": "Progression Analysis,Visual Analysis,Event Sequence Data",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 1816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 740,
                "i": [
                    740
                ]
            }
        },
        {
            "name": "Pierre Y. Andrews",
            "value": 162,
            "numPapers": 9,
            "cluster": "1",
            "visible": 1,
            "index": 848,
            "x": 243.13800170795193,
            "y": -160.41792956357457,
            "vy": 0,
            "vx": 0,
            "r": 1.1865284974093264,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models",
                "DOI": "10.1109/tvcg.2017.2744718",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744718",
                "FirstPage": 88,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance-and subset-level. ActiVis has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ActiVis may work with different models.",
                "AuthorNamesDeduped": "Minsuk Kahng;Pierre Y. Andrews;Aditya Kalro;Duen Horng (Polo) Chau",
                "AuthorNames": "Minsuk Kahng;Pierre Y. Andrews;Aditya Kalro;Duen Horng (Polo) Chau",
                "AuthorAffiliation": "Georgia Institute of Technology;Facebook;Facebook;Georgia Institute of Technology",
                "InternalReferences": "0.1109/vast.2015.7347637;10.1109/vast.2010.5652443;10.1109/tvcg.2013.157;10.1109/tvcg.2014.2346482;10.1109/tvcg.2015.2467622;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/visual.2005.1532820;10.1109/vast.2011.6102453",
                "AuthorKeywords": "Visual analytics,deep learning,machine learning,information visualization",
                "AminerCitationCount": 341,
                "CitationCountCrossRef": 214,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 3325,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 837,
                "i": [
                    837
                ]
            }
        },
        {
            "name": "Aditya Kalro",
            "value": 162,
            "numPapers": 9,
            "cluster": "1",
            "visible": 1,
            "index": 849,
            "x": -70.96342114916048,
            "y": 282.69098474979154,
            "vy": 0,
            "vx": 0,
            "r": 1.1865284974093264,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models",
                "DOI": "10.1109/tvcg.2017.2744718",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744718",
                "FirstPage": 88,
                "LastPage": 97,
                "PaperType": "J",
                "Abstract": "While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance-and subset-level. ActiVis has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ActiVis may work with different models.",
                "AuthorNamesDeduped": "Minsuk Kahng;Pierre Y. Andrews;Aditya Kalro;Duen Horng (Polo) Chau",
                "AuthorNames": "Minsuk Kahng;Pierre Y. Andrews;Aditya Kalro;Duen Horng (Polo) Chau",
                "AuthorAffiliation": "Georgia Institute of Technology;Facebook;Facebook;Georgia Institute of Technology",
                "InternalReferences": "0.1109/vast.2015.7347637;10.1109/vast.2010.5652443;10.1109/tvcg.2013.157;10.1109/tvcg.2014.2346482;10.1109/tvcg.2015.2467622;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/visual.2005.1532820;10.1109/vast.2011.6102453",
                "AuthorKeywords": "Visual analytics,deep learning,machine learning,information visualization",
                "AminerCitationCount": 341,
                "CitationCountCrossRef": 214,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 3325,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 837,
                "i": [
                    837
                ]
            }
        },
        {
            "name": "Harry Stavropoulos",
            "value": 233,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 850,
            "x": -138.71036858659977,
            "y": -256.5334942002109,
            "vy": 0,
            "vx": 0,
            "r": 1.2682786413356362,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "DecisionFlow: Visual Analytics for High-Dimensional Temporal Event Sequence Data",
                "DOI": "10.1109/tvcg.2014.2346682",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346682",
                "FirstPage": 1783,
                "LastPage": 1792,
                "PaperType": "J",
                "Abstract": "Temporal event sequence data is increasingly commonplace, with applications ranging from electronic medical records to financial transactions to social media activity. Previously developed techniques have focused on low-dimensional datasets (e.g., with less than 20 distinct event types). Real-world datasets are often far more complex. This paper describes DecisionFlow, a visual analysis technique designed to support the analysis of high-dimensional temporal event sequence data (e.g., thousands of event types). DecisionFlow combines a scalable and dynamic temporal event data structure with interactive multi-view visualizations and ad hoc statistical analytics. We provide a detailed review of our methods, and present the results from a 12-person user study. The study results demonstrate that DecisionFlow enables the quick and accurate completion of a range of sequence analysis tasks for datasets containing thousands of event types and millions of individual events.",
                "AuthorNamesDeduped": "David Gotz;Harry Stavropoulos",
                "AuthorNames": "David Gotz;Harry Stavropoulos",
                "AuthorAffiliation": "University of North Carolina at Chapel Hill;IBM T.J. Watson Research Center",
                "InternalReferences": "0.1109/tvcg.2013.206;10.1109/tvcg.2012.225;10.1109/tvcg.2011.179;10.1109/infvis.2000.885097;10.1109/vast.2009.5332595;10.1109/vast.2010.5652890;10.1109/tvcg.2009.117;10.1109/vast.2006.261421;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Information Visualization, Temporal Event Sequences, Visual Analytics, Flow Diagrams, Medical Informatics",
                "AminerCitationCount": 189,
                "CitationCountCrossRef": 126,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 2601,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1246,
                "i": [
                    1246
                ]
            }
        },
        {
            "name": "Chengbo Zheng",
            "value": 69,
            "numPapers": 24,
            "cluster": "3",
            "visible": 1,
            "index": 851,
            "x": 275.7285494568702,
            "y": 95.51841191315033,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "NL2Color: Refining Color Palettes for Charts with Natural Language",
                "DOI": "10.1109/tvcg.2023.3326522",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326522",
                "FirstPage": 814,
                "LastPage": 824,
                "PaperType": "J",
                "Abstract": "Choice of color is critical to creating effective charts with an engaging, enjoyable, and informative reading experience. However, designing a good color palette for a chart is a challenging task for novice users who lack related design expertise. For example, they often find it difficult to articulate their abstract intentions and translate these intentions into effective editing actions to achieve a desired outcome. In this work, we present NL2Color, a tool that allows novice users to refine chart color palettes using natural language expressions of their desired outcomes. We first collected and categorized a dataset of 131 triplets, each consisting of an original color palette of a chart, an editing intent, and a new color palette designed by human experts according to the intent. Our tool employs a large language model (LLM) to substitute the colors in original palettes and produce new color palettes by selecting some of the triplets as few-shot prompts. To evaluate our tool, we conducted a comprehensive two-stage evaluation, including a crowd-sourcing study ($\\mathrm{N}=71$) and a within-subjects user study ($\\mathrm{N}=12$). The results indicate that the quality of the color palettes revised by NL2Color has no significantly large difference from those designed by human experts. The participants who used NL2Color obtained revised color palettes to their satisfaction in a shorter period and with less effort.",
                "AuthorNamesDeduped": "Chuhan Shi;Weiwei Cui;Chengzhong Liu;Chengbo Zheng;Haidong Zhang;Qiong Luo 0001;Xiaojuan Ma",
                "AuthorNames": "Chuhan Shi;Weiwei Cui;Chengzhong Liu;Chengbo Zheng;Haidong Zhang;Qiong Luo;Xiaojuan Ma",
                "AuthorAffiliation": "Southeast University, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, China;Hong Kong University of Science and Technology, China",
                "InternalReferences": "10.1109/tvcg.2013.234;10.1109/tvcg.2019.2934785;10.1109/tvcg.2021.3114848;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2865147;10.1109/tvcg.2015.2467471;10.1109/tvcg.2019.2934284;10.1109/tvcg.2022.3209357;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "chart,color palette,natural language,large language model",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 486,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 27,
                "i": [
                    27
                ]
            }
        },
        {
            "name": "Mingze Ma",
            "value": 69,
            "numPapers": 15,
            "cluster": "3",
            "visible": 1,
            "index": 852,
            "x": -267.99263789158647,
            "y": 115.88764401742317,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Towards Better Bus Networks: A Visual Analytics Approach",
                "DOI": "10.1109/tvcg.2020.3030458",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030458",
                "FirstPage": 817,
                "LastPage": 827,
                "PaperType": "J",
                "Abstract": "Bus routes are typically updated every 3–5 years to meet constantly changing travel demands. However, identifying deficient bus routes and finding their optimal replacements remain challenging due to the difficulties in analyzing a complex bus network and the large solution space comprising alternative routes. Most of the automated approaches cannot produce satisfactory results in real-world settings without laborious inspection and evaluation of the candidates. The limitations observed in these approaches motivate us to collaborate with domain experts and propose a visual analytics solution for the performance analysis and incremental planning of bus routes based on an existing bus network. Developing such a solution involves three major challenges, namely, a) the in-depth analysis of complex bus route networks, b) the interactive generation of improved route candidates, and c) the effective evaluation of alternative bus routes. For challenge a, we employ an overview-to-detail approach by dividing the analysis of a complex bus network into three levels to facilitate the efficient identification of deficient routes. For challenge b, we improve a route generation model and interpret the performance of the generation with tailored visualizations. For challenge c, we incorporate a conflict resolution strategy in the progressive decision-making process to assist users in evaluating the alternative routes and finding the most optimal one. The proposed system is evaluated with two usage scenarios based on real-world data and received positive feedback from the experts. Index Terms-Bus route planning, spatial decision-making, urban data visual analytics",
                "AuthorNamesDeduped": "Di Weng;Chengbo Zheng;Zikun Deng;Mingze Ma;Jie Bao 0003;Yu Zheng 0004;Mingliang Xu;Yingcai Wu",
                "AuthorNames": "Di Weng;Chengbo Zheng;Zikun Deng;Mingze Ma;Jie Bao;Yu Zheng;Mingliang Xu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China and Zhejiang Lab, Hangzhou, China;Zhejiang Lab, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, China and Zhejiang Lab, Hangzhou, China;Zhejiang Lab, Hangzhou, China;JD Intelligent Cities Research, JD Intelligent Cities Business Unit, JD Digits, Beijing, China;JD Intelligent Cities Research, JD Intelligent Cities Business Unit, JD Digits, Beijing, China;School of Information Engineering, Zhengzhou University, Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou, China;State Key Lab of CAD&CG, Zhejiang University, China and Zhejiang Lab, Hangzhou, China",
                "InternalReferences": "0.1109/vast.2007.4388995;10.1109/vast.2011.6102454;10.1109/vast.2010.5652478;10.1109/vast.2009.5332584;10.1109/tvcg.2013.193;10.1109/tvcg.2015.2467196;10.1109/tvcg.2019.2934670;10.1109/tvcg.2013.145;10.1109/tvcg.2013.173;10.1109/tvcg.2016.2598432;10.1109/tvcg.2015.2467554;10.1109/vast.2014.7042490;10.1109/tvcg.2020.3030359;10.1109/tvcg.2014.2346893;10.1109/tvcg.2018.2864503;10.1109/vast50239.2020.00011",
                "AuthorKeywords": "Bus route planning,spatial decision-making,urban data visual analytics",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 1698,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 463,
                "i": [
                    463
                ]
            }
        },
        {
            "name": "Yehuda Koren",
            "value": 275,
            "numPapers": 3,
            "cluster": "2",
            "visible": 1,
            "index": 853,
            "x": 119.39841882650418,
            "y": -266.63461437279796,
            "vy": 0,
            "vx": 0,
            "r": 1.3166378814047208,
            "node": {
                "Conference": "InfoVis",
                "Year": 2003,
                "Title": "Visualization of Labeled Data Using Linear Transformation",
                "DOI": "10.1109/infvis.2003.1249017",
                "Link": "http://doi.ieeecomputersociety.org/10.1109/INFVIS.2003.1249017",
                "FirstPage": 121,
                "LastPage": 128,
                "PaperType": "C",
                "Abstract": "We present a novel family of data-driven linear transformations, aimed at visualizing multivariate data in a low-dimensional space in a way that optimally preserves the structure of the data. The well-studied PCA and Fisher's LDA (linear discriminant analysis) are shown to be special members in this family of transformations, and we demonstrate how to generalize these two methods such as to enhance their performance. Furthermore, our technique is the only one, to the best of our knowledge, that reflects in the resulting embedding both the data coordinates and pairwise similarities and/or dissimilarities between the data elements. Even more so, when information on the clustering (labeling) decomposition of the data is known, this information can be integrated in the linear transformation, resulting in embeddings that clearly show the separation between the clusters, as well as their infrastructure. All this makes our technique very flexible and powerful, and lets us cope with kinds of data that other techniques fail to describe properly.",
                "AuthorNamesDeduped": "Yehuda Koren;Liran Carmel",
                "AuthorNames": "Y. Koren;L. Carmel",
                "AuthorAffiliation": "Dept. of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot, Israel;Dept. of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot, Israel",
                "InternalReferences": "0.1109/infvis.2002.1173159;10.1109/infvis.2001.963275;10.1109/infvis.2002.1173161",
                "AuthorKeywords": "visualization, dimensionality-reduction, projection, principal component analysis, Fisher's linear discriminant analysis, eigenprojection, classification",
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 9,
                "DownloadsXplore": 224,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2631,
                "i": [
                    2631
                ]
            }
        },
        {
            "name": "Hao Zheng 0006",
            "value": 12,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 854,
            "x": 92.12233600254976,
            "y": 277.4229175995259,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes",
                "DOI": "10.1109/tvcg.2021.3114815",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114815",
                "FirstPage": 270,
                "LastPage": 280,
                "PaperType": "J",
                "Abstract": "We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The input to the generator is two low-resolution volumes at both ends, and the output is the intermediate and the two-ending spatiotemporal super-resolution volumes. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both spatial and temporal dimensions). We compare STNet with the baseline bicubic+linear interpolation, two deep learning solutions (<inline-formula><tex-math notation=\"LaTeX\">$\\mathsf{SSR}+\\mathsf{TSF}$</tex-math><alternatives><graphic orientation=\"portrait\" position=\"float\" xlink:href=\"28tvcg01-han-3114815-eqinline-1-small.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.",
                "AuthorNamesDeduped": "Jun Han 0010;Hao Zheng 0006;Danny Z. Chen;Chaoli Wang 0001",
                "AuthorNames": "Jun Han;Hao Zheng;Danny Z. Chen;Chaoli Wang",
                "AuthorAffiliation": "Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA",
                "InternalReferences": "0.1109/tvcg.2019.2934332;10.1109/tvcg.2020.3030344;10.1109/tvcg.2019.2934255;10.1109/tvcg.2020.3030346;10.1109/tvcg.2019.2934312;10.1109/tvcg.2006.143;10.1109/tvcg.2019.2934375",
                "AuthorKeywords": "Time-varying data,generative adversarial network,spatiotemporal super-resolution",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 968,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 263,
                "i": [
                    263
                ]
            }
        },
        {
            "name": "Danny Z. Chen",
            "value": 12,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 855,
            "x": -255.47398812272888,
            "y": -142.41854300851344,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes",
                "DOI": "10.1109/tvcg.2021.3114815",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114815",
                "FirstPage": 270,
                "LastPage": 280,
                "PaperType": "J",
                "Abstract": "We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The input to the generator is two low-resolution volumes at both ends, and the output is the intermediate and the two-ending spatiotemporal super-resolution volumes. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both spatial and temporal dimensions). We compare STNet with the baseline bicubic+linear interpolation, two deep learning solutions (<inline-formula><tex-math notation=\"LaTeX\">$\\mathsf{SSR}+\\mathsf{TSF}$</tex-math><alternatives><graphic orientation=\"portrait\" position=\"float\" xlink:href=\"28tvcg01-han-3114815-eqinline-1-small.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.",
                "AuthorNamesDeduped": "Jun Han 0010;Hao Zheng 0006;Danny Z. Chen;Chaoli Wang 0001",
                "AuthorNames": "Jun Han;Hao Zheng;Danny Z. Chen;Chaoli Wang",
                "AuthorAffiliation": "Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA;Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA",
                "InternalReferences": "0.1109/tvcg.2019.2934332;10.1109/tvcg.2020.3030344;10.1109/tvcg.2019.2934255;10.1109/tvcg.2020.3030346;10.1109/tvcg.2019.2934312;10.1109/tvcg.2006.143;10.1109/tvcg.2019.2934375",
                "AuthorKeywords": "Time-varying data,generative adversarial network,spatiotemporal super-resolution",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 968,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 263,
                "i": [
                    263
                ]
            }
        },
        {
            "name": "Kalyan Veeramachaneni",
            "value": 46,
            "numPapers": 26,
            "cluster": "1",
            "visible": 1,
            "index": 856,
            "x": 284.74718726933986,
            "y": -67.59466948065858,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models",
                "DOI": "10.1109/tvcg.2021.3114836",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114836",
                "FirstPage": 378,
                "LastPage": 388,
                "PaperType": "J",
                "Abstract": "Machine learning (ML) is increasingly applied to Electronic Health Records (EHRs) to solve clinical prediction tasks. Although many ML models perform promisingly, issues with model transparency and interpretability limit their adoption in clinical practice. Directly using existing explainable ML techniques in clinical settings can be challenging. Through literature surveys and collaborations with six clinicians with an average of 17 years of clinical experience, we identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence. Following an iterative design process, we further designed and developed VBridge, a visual analytics tool that seamlessly incorporates ML explanations into clinicians' decision-making workflow. The system includes a novel hierarchical display of contribution-based feature explanations and enriched interactions that <i>connect the dots</i> between ML features, explanations, and data. We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians, showing that visually associating model explanations with patients' situational records can help clinicians better interpret and use model predictions when making clinician decisions. We further derived a list of design implications for developing future explainable ML tools to support clinical decision-making.",
                "AuthorNamesDeduped": "Furui Cheng;Dongyu Liu;Fan Du;Yanna Lin;Alexandra Zytek;Haomin Li;Huamin Qu;Kalyan Veeramachaneni",
                "AuthorNames": "Furui Cheng;Dongyu Liu;Fan Du;Yanna Lin;Alexandra Zytek;Haomin Li;Huamin Qu;Kalyan Veeramachaneni",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, China;Massachusetts Institute of Technology, USA;Adobe Research, USA;Hong Kong University of Science and Technology, China;Massachusetts Institute of Technology, USA;Children's Hospital of Zhejiang University School of Medicine, China;Hong Kong University of Science and Technology, China;Massachusetts Institute of Technology, USA",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2017.2744419;10.1109/tvcg.2020.3030342;10.1109/tvcg.2014.2346682;10.1109/tvcg.2014.2346482;10.1109/tvcg.2011.179;10.1109/tvcg.2018.2865027;10.1109/tvcg.2017.2745085;10.1109/tvcg.2018.2864812;10.1109/tvcg.2012.213;10.1109/tvcg.2020.3030458;10.1109/tvcg.2019.2934619;10.1109/vast.2009.5332595",
                "AuthorKeywords": "Explainable Artificial Intelligence,Healthcare,Visual Analytics,Decision Making",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 69,
                "DownloadsXplore": 1432,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 265,
                "i": [
                    265
                ]
            }
        },
        {
            "name": "Peter Kerpedjiev",
            "value": 61,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 857,
            "x": -164.40005889125504,
            "y": 242.3275069746558,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Pattern-Driven Navigation in 2D Multiscale Visualizations with Scalable Insets",
                "DOI": "10.1109/tvcg.2019.2934555",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934555",
                "FirstPage": 611,
                "LastPage": 621,
                "PaperType": "J",
                "Abstract": "We present Scalable Insets, a technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visualizations such as gigapixel images, matrices, or maps. Exploration of many but sparsely-distributed patterns in multiscale visualizations is challenging as visual representations change across zoom levels, context and navigational cues get lost upon zooming, and navigation is time consuming. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the annotated patterns. Insets support users in searching, comparing, and contextualizing patterns while reducing the amount of navigation needed. They are dynamically placed either within the viewport or along the boundary of the viewport to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type. They are visually represented as an aggregated inset to provide scalable exploration within a single viewport. In a controlled user study with 18 participants, we found that Scalable Insets can speed up visual search and improve the accuracy of pattern comparison at the cost of slower frequency estimation compared to a baseline technique. A second study with 6 experts in the field of genomics showed that Scalable Insets is easy to learn and provides first insights into how Scalable Insets can be applied in an open-ended data exploration scenario.",
                "AuthorNamesDeduped": "Fritz Lekschas;Michael Behrisch 0001;Benjamin Bach;Peter Kerpedjiev;Nils Gehlenborg;Hanspeter Pfister",
                "AuthorNames": "Fritz Lekschas;Michael Behrisch;Benjamin Bach;Peter Kerpedjiev;Nils Gehlenborg;Hanspeter Pfister",
                "AuthorAffiliation": "School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA;University of Edinburgh, Edinburgh, UK;Harvard Medical School, Boston, USA;Harvard Medical School, Boston, USA;School of Engineering and Applied Sciences, Harvard University, Cambridge, USA",
                "InternalReferences": "0.1109/tvcg.2006.136;10.1109/tvcg.2011.185;10.1109/vast.2009.5333443;10.1109/tvcg.2011.231;10.1109/tvcg.2017.2745978;10.1109/tvcg.2013.154;10.1109/tvcg.2013.213;10.1109/tvcg.2014.2346441;10.1109/tvcg.2014.2346352;10.1109/tvcg.2007.70589",
                "AuthorKeywords": "Guided Navigation,Pattern Exploration,Multiscale Visualizations,Gigapixel Images,Geospatial Maps,Genomics",
                "AminerCitationCount": 20,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 715,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 556,
                "i": [
                    556
                ]
            }
        },
        {
            "name": "Shahid Latif",
            "value": 69,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 858,
            "x": -42.49114654296967,
            "y": -289.90429880473295,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kori: Interactive Synthesis of Text and Charts in Data Documents",
                "DOI": "10.1109/tvcg.2021.3114802",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114802",
                "FirstPage": 184,
                "LastPage": 194,
                "PaperType": "J",
                "Abstract": "Charts go hand in hand with text to communicate complex data and are widely adopted in news articles, online blogs, and academic papers. They provide graphical summaries of the data, while text explains the message and context. However, synthesizing information across text and charts is difficult; it requires readers to frequently shift their attention. We investigated ways to support the tight coupling of text and charts in data documents. To understand their interplay, we analyzed the design space of chart-text references through news articles and scientific papers. Informed by the analysis, we developed a mixed-initiative interface enabling users to construct interactive references between text and charts. It leverages natural language processing to automatically suggest references as well as allows users to manually construct other references effortlessly. A user study complemented with algorithmic evaluation of the system suggests that the interface provides an effective way to compose interactive data documents.",
                "AuthorNamesDeduped": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck 0001;Nam Wook Kim",
                "AuthorNames": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck;Nam Wook Kim",
                "AuthorAffiliation": "University of Duisburg-Essen, Germany;Boston College, USA;Harvard University, USA;University of Duisburg-Essen, Germany;Boston College, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2015.2467732;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2018.2865022;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2011.183;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Data-driven storytelling,interaction design,authoring,visualization-text linking,mixed-initiative interface,interactive documents",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 992,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 268,
                "i": [
                    268
                ]
            }
        },
        {
            "name": "Yoon Kim",
            "value": 32,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 859,
            "x": 227.2914507097764,
            "y": 185.17180248149361,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Kori: Interactive Synthesis of Text and Charts in Data Documents",
                "DOI": "10.1109/tvcg.2021.3114802",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114802",
                "FirstPage": 184,
                "LastPage": 194,
                "PaperType": "J",
                "Abstract": "Charts go hand in hand with text to communicate complex data and are widely adopted in news articles, online blogs, and academic papers. They provide graphical summaries of the data, while text explains the message and context. However, synthesizing information across text and charts is difficult; it requires readers to frequently shift their attention. We investigated ways to support the tight coupling of text and charts in data documents. To understand their interplay, we analyzed the design space of chart-text references through news articles and scientific papers. Informed by the analysis, we developed a mixed-initiative interface enabling users to construct interactive references between text and charts. It leverages natural language processing to automatically suggest references as well as allows users to manually construct other references effortlessly. A user study complemented with algorithmic evaluation of the system suggests that the interface provides an effective way to compose interactive data documents.",
                "AuthorNamesDeduped": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck 0001;Nam Wook Kim",
                "AuthorNames": "Shahid Latif;Zheng Zhou;Yoon Kim;Fabian Beck;Nam Wook Kim",
                "AuthorAffiliation": "University of Duisburg-Essen, Germany;Boston College, USA;Harvard University, USA;University of Duisburg-Essen, Germany;Boston College, USA",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2015.2467732;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2018.2865022;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2011.183;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Data-driven storytelling,interaction design,authoring,visualization-text linking,mixed-initiative interface,interactive documents",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 992,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 268,
                "i": [
                    268
                ]
            }
        },
        {
            "name": "Cong Guo 0004",
            "value": 114,
            "numPapers": 27,
            "cluster": "6",
            "visible": 1,
            "index": 860,
            "x": -292.8496092887419,
            "y": 17.00312734267522,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data",
                "DOI": "10.1109/tvcg.2015.2467619",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467619",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.",
                "AuthorNamesDeduped": "Siming Chen 0001;Xiaoru Yuan;Zhenhuang Wang;Cong Guo 0004;Jie Liang 0004;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorNames": "Siming Chen;Xiaoru Yuan;Zhenhuang Wang;Cong Guo;Jie Liang;Zuchao Wang;Xiaolong Luke Zhang;Jiawan Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;Key Laboratory of Machine Perception (Ministry of Education), School of EECS, Peking University;College of Information Sciences and Technology, Pennsylvania State University;School of Computer Science and Technology, and School of Computer Software, Tianjin University",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2008.4677356;10.1109/tvcg.2009.182;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/tvcg.2009.143;10.1109/infvis.2004.27;10.1109/infvis.2005.1532150;10.1109/tvcg.2012.265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spatial temporal visual analytics, Geo-tagged social media, Sparsely sampling, Uncertainty, Movement",
                "AminerCitationCount": 141,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2690,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1103,
                "i": [
                    1103
                ]
            }
        },
        {
            "name": "Roeland Scheepens",
            "value": 120,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 861,
            "x": 204.57150466364322,
            "y": -210.47683834487106,
            "vy": 0,
            "vx": 0,
            "r": 1.1381692573402418,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Composite Density Maps for Multivariate Trajectories",
                "DOI": "10.1109/tvcg.2011.181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.181",
                "FirstPage": 2518,
                "LastPage": 2527,
                "PaperType": "J",
                "Abstract": "We consider moving objects as multivariate time-series. By visually analyzing the attributes, patterns may appear that explain why certain movements have occurred. Density maps as proposed by Scheepens et al. [25] are a way to reveal these patterns by means of aggregations of filtered subsets of trajectories. Since filtering is often not sufficient for analysts to express their domain knowledge, we propose to use expressions instead. We present a flexible architecture for density maps to enable custom, versatile exploration using multiple density fields. The flexibility comes from a script, depicted in this paper as a block diagram, which defines an advanced computation of a density field. We define six different types of blocks to create, compose, and enhance trajectories or density fields. Blocks are customized by means of expressions that allow the analyst to model domain knowledge. The versatility of our architecture is demonstrated with several maritime use cases developed with domain experts. Our approach is expected to be useful for the analysis of objects in other domains.",
                "AuthorNamesDeduped": "Roeland Scheepens;Niels Willems;Huub van de Wetering;Gennady L. Andrienko;Natalia V. Andrienko;Jarke J. van Wijk",
                "AuthorNames": "Roeland Scheepens;Niels Willems;Huub van de Wetering;Gennady Andrienko;Natalia Andrienko;Jarke J. van Wijk",
                "AuthorAffiliation": "Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Fraunhofer Institute IAIS, Germany;Fraunhofer Institute IAIS, Germany;Eindhoven University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2006.178;10.1109/vast.2008.4677356;10.1109/tvcg.2007.70570;10.1109/vast.2010.5652478;10.1109/vast.2007.4388992;10.1109/vast.2010.5652467;10.1109/vast.2009.5332593",
                "AuthorKeywords": "Trajectories, Kernel Density Estimation, Multivariate Data, Geographical Information Systems, Raster Maps",
                "AminerCitationCount": 183,
                "CitationCountCrossRef": 103,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 1355,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1545,
                "i": [
                    1545
                ]
            }
        },
        {
            "name": "Feiran Wu",
            "value": 71,
            "numPapers": 23,
            "cluster": "1",
            "visible": 1,
            "index": 862,
            "x": -8.674629641079633,
            "y": 293.5553624115732,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "SRVis: Towards Better Spatial Integration in Ranking Visualization",
                "DOI": "10.1109/tvcg.2018.2865126",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865126",
                "FirstPage": 459,
                "LastPage": 469,
                "PaperType": "J",
                "Abstract": "Interactive ranking techniques have substantially promoted analysts' ability in making judicious and informed decisions effectively based on multiple criteria. However, the existing techniques cannot satisfactorily support the analysis tasks involved in ranking large-scale spatial alternatives, such as selecting optimal locations for chain stores, where the complex spatial contexts involved are essential to the decision-making process. Limitations observed in the prior attempts of integrating rankings with spatial contexts motivate us to develop a context-integrated visual ranking technique. Based on a set of generic design requirements we summarized by collaborating with domain experts, we propose SRVis, a novel spatial ranking visualization technique that supports efficient spatial multi-criteria decision-making processes by addressing three major challenges in the aforementioned context integration, namely, a) the presentation of spatial rankings and contexts, b) the scalability of rankings' visual representations, and c) the analysis of context-integrated spatial rankings. Specifically, we encode massive rankings and their cause with scalable matrix-based visualizations and stacked bar charts based on a novel two-phase optimization framework that minimizes the information loss, and the flexible spatial filtering and intuitive comparative analysis are adopted to enable the in-depth evaluation of the rankings and assist users in selecting the best spatial alternative. The effectiveness of the proposed technique has been evaluated and demonstrated with an empirical study of optimization methods, two case studies, and expert interviews.",
                "AuthorNamesDeduped": "Di Weng;Ran Chen;Zikun Deng;Feiran Wu;Jingmin Chen;Yingcai Wu",
                "AuthorNames": "Di Weng;Ran Chen;Zikun Deng;Feiran Wu;Jingmin Chen;Yingcai Wu",
                "AuthorAffiliation": "Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Alibaba Group, Hangzhou, China;Alibaba Group, Hangzhou, China;Zhejiang University, Hangzhou, Zhejiang, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2013.193;10.1109/tvcg.2011.185;10.1109/tvcg.2008.166;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467771;10.1109/tvcg.2008.181;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865018;10.1109/vast.2011.6102455;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598585;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467112;10.1109/tvcg.2012.253;10.1109/tvcg.2015.2467717;10.1109/tvcg.2017.2745078;10.1109/tvcg.2014.2346913",
                "AuthorKeywords": "Spatial ranking,visualization",
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1388,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 676,
                "i": [
                    676
                ]
            }
        },
        {
            "name": "Zui Chen",
            "value": 33,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 863,
            "x": -192.00860885241246,
            "y": -222.44705915467006,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VizLinter: A Linter and Fixer Framework for Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114804",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114804",
                "FirstPage": 206,
                "LastPage": 216,
                "PaperType": "J",
                "Abstract": "Despite the rising popularity of automated visualization tools, existing systems tend to provide direct results which do not always fit the input data or meet visualization requirements. Therefore, additional specification adjustments are still required in real-world use cases. However, manual adjustments are difficult since most users do not necessarily possess adequate skills or visualization knowledge. Even experienced users might create imperfect visualizations that involve chart construction errors. We present a framework, VizLinter, to help users detect flaws and rectify already-built but defective visualizations. The framework consists of two components, (1) a visualization linter, which applies well-recognized principles to inspect the legitimacy of rendered visualizations, and (2) a visualization fixer, which automatically corrects the detected violations according to the linter. We implement the framework into an online editor prototype based on Vega-Lite specifications. To further evaluate the system, we conduct an in-lab user study. The results prove its effectiveness and efficiency in identifying and fixing errors for data visualizations.",
                "AuthorNamesDeduped": "Qing Chen 0001;Fuling Sun;Xinyue Xu;Zui Chen;Jiazhe Wang;Nan Cao 0001",
                "AuthorNames": "Qing Chen;Fuling Sun;Xinyue Xu;Zui Chen;Jiazhe Wang;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Ant Group, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2006.138;10.1109/tvcg.2006.163;10.1109/tvcg.2013.126;10.1109/tvcg.2012.219;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745140;10.1109/infvis.2000.885086;10.1109/tvcg.2020.3030467;10.1109/vast.2009.5332628;10.1109/infvis.2003.1249018;10.1109/tvcg.2018.2864912;10.1109/tvcg.2017.2745919;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Visualization Linting,Automated Visualization Design,Visualization Optimization",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 64,
                "DownloadsXplore": 1653,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 271,
                "i": [
                    271
                ]
            }
        },
        {
            "name": "Jiazhe Wang",
            "value": 33,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 864,
            "x": 292.0109408812388,
            "y": 34.34545684153374,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VizLinter: A Linter and Fixer Framework for Data Visualization",
                "DOI": "10.1109/tvcg.2021.3114804",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114804",
                "FirstPage": 206,
                "LastPage": 216,
                "PaperType": "J",
                "Abstract": "Despite the rising popularity of automated visualization tools, existing systems tend to provide direct results which do not always fit the input data or meet visualization requirements. Therefore, additional specification adjustments are still required in real-world use cases. However, manual adjustments are difficult since most users do not necessarily possess adequate skills or visualization knowledge. Even experienced users might create imperfect visualizations that involve chart construction errors. We present a framework, VizLinter, to help users detect flaws and rectify already-built but defective visualizations. The framework consists of two components, (1) a visualization linter, which applies well-recognized principles to inspect the legitimacy of rendered visualizations, and (2) a visualization fixer, which automatically corrects the detected violations according to the linter. We implement the framework into an online editor prototype based on Vega-Lite specifications. To further evaluate the system, we conduct an in-lab user study. The results prove its effectiveness and efficiency in identifying and fixing errors for data visualizations.",
                "AuthorNamesDeduped": "Qing Chen 0001;Fuling Sun;Xinyue Xu;Zui Chen;Jiazhe Wang;Nan Cao 0001",
                "AuthorNames": "Qing Chen;Fuling Sun;Xinyue Xu;Zui Chen;Jiazhe Wang;Nan Cao",
                "AuthorAffiliation": "Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Intelligent Big Data Visualization Lab at Tongji University, China;Ant Group, China;Intelligent Big Data Visualization Lab at Tongji University, China",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2006.138;10.1109/tvcg.2006.163;10.1109/tvcg.2013.126;10.1109/tvcg.2012.219;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744198;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2745140;10.1109/infvis.2000.885086;10.1109/tvcg.2020.3030467;10.1109/vast.2009.5332628;10.1109/infvis.2003.1249018;10.1109/tvcg.2018.2864912;10.1109/tvcg.2017.2745919;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Visualization Linting,Automated Visualization Design,Visualization Optimization",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 64,
                "DownloadsXplore": 1653,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 271,
                "i": [
                    271
                ]
            }
        },
        {
            "name": "Fatih Korkmaz",
            "value": 57,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 865,
            "x": -238.65771520155096,
            "y": 172.02469292161334,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Feedback-Driven Interactive Exploration of Large Multidimensional Data Supported by Visual Classifier",
                "DOI": "10.1109/vast.2014.7042480",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042480",
                "FirstPage": 43,
                "LastPage": 52,
                "PaperType": "C",
                "Abstract": "The extraction of relevant and meaningful information from multivariate or high-dimensional data is a challenging problem. One reason for this is that the number of possible representations, which might contain relevant information, grows exponentially with the amount of data dimensions. Also, not all views from a possibly large view space, are potentially relevant to a given analysis task or user. Focus+Context or Semantic Zoom Interfaces can help to some extent to efficiently search for interesting views or data segments, yet they show scalability problems for very large data sets. Accordingly, users are confronted with the problem of identifying interesting views, yet the manual exploration of the entire view space becomes ineffective or even infeasible. While certain quality metrics have been proposed recently to identify potentially interesting views, these often are defined in a heuristic way and do not take into account the application or user context. We introduce a framework for a feedback-driven view exploration, inspired by relevance feedback approaches used in Information Retrieval. Our basic idea is that users iteratively express their notion of interestingness when presented with candidate views. From that expression, a model representing the user's preferences, is trained and used to recommend further interesting view candidates. A decision support system monitors the exploration process and assesses the relevance-driven search process for convergence and stability. We present an instantiation of our framework for exploration of Scatter Plot Spaces based on visual features. We demonstrate the effectiveness of this implementation by a case study on two real-world datasets. We also discuss our framework in light of design alternatives and point out its usefulness for development of user- and context-dependent visual exploration systems.",
                "AuthorNamesDeduped": "Michael Behrisch 0001;Fatih Korkmaz;Lin Shao 0001;Tobias Schreck",
                "AuthorNames": "Michael Behrisch;Fatih Korkmaz;Lin Shao;Tobias Schreck",
                "AuthorAffiliation": "Universität Konstanz, Germany;Universität Konstanz, Germany;Universität Konstanz, Germany;Universität Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/tvcg.2012.277;10.1109/tvcg.2010.184;10.1109/vast.2012.6400486;10.1109/vast.2007.4389001;10.1109/tvcg.2013.160;10.1109/vast.2012.6400488",
                "AuthorKeywords": "View Space Exploration Framework, Interesting View Problem, Relevance Feedback, User Preference Model",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 984,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1259,
                "i": [
                    1259
                ]
            }
        },
        {
            "name": "Lin Shao 0001",
            "value": 57,
            "numPapers": 6,
            "cluster": "3",
            "visible": 1,
            "index": 866,
            "x": 59.81228494930729,
            "y": -288.2229875793096,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Feedback-Driven Interactive Exploration of Large Multidimensional Data Supported by Visual Classifier",
                "DOI": "10.1109/vast.2014.7042480",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042480",
                "FirstPage": 43,
                "LastPage": 52,
                "PaperType": "C",
                "Abstract": "The extraction of relevant and meaningful information from multivariate or high-dimensional data is a challenging problem. One reason for this is that the number of possible representations, which might contain relevant information, grows exponentially with the amount of data dimensions. Also, not all views from a possibly large view space, are potentially relevant to a given analysis task or user. Focus+Context or Semantic Zoom Interfaces can help to some extent to efficiently search for interesting views or data segments, yet they show scalability problems for very large data sets. Accordingly, users are confronted with the problem of identifying interesting views, yet the manual exploration of the entire view space becomes ineffective or even infeasible. While certain quality metrics have been proposed recently to identify potentially interesting views, these often are defined in a heuristic way and do not take into account the application or user context. We introduce a framework for a feedback-driven view exploration, inspired by relevance feedback approaches used in Information Retrieval. Our basic idea is that users iteratively express their notion of interestingness when presented with candidate views. From that expression, a model representing the user's preferences, is trained and used to recommend further interesting view candidates. A decision support system monitors the exploration process and assesses the relevance-driven search process for convergence and stability. We present an instantiation of our framework for exploration of Scatter Plot Spaces based on visual features. We demonstrate the effectiveness of this implementation by a case study on two real-world datasets. We also discuss our framework in light of design alternatives and point out its usefulness for development of user- and context-dependent visual exploration systems.",
                "AuthorNamesDeduped": "Michael Behrisch 0001;Fatih Korkmaz;Lin Shao 0001;Tobias Schreck",
                "AuthorNames": "Michael Behrisch;Fatih Korkmaz;Lin Shao;Tobias Schreck",
                "AuthorAffiliation": "Universität Konstanz, Germany;Universität Konstanz, Germany;Universität Konstanz, Germany;Universität Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/tvcg.2012.277;10.1109/tvcg.2010.184;10.1109/vast.2012.6400486;10.1109/vast.2007.4389001;10.1109/tvcg.2013.160;10.1109/vast.2012.6400488",
                "AuthorKeywords": "View Space Exploration Framework, Interesting View Problem, Relevance Feedback, User Preference Model",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 984,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1259,
                "i": [
                    1259
                ]
            }
        },
        {
            "name": "Jürgen Bernard",
            "value": 183,
            "numPapers": 43,
            "cluster": "3",
            "visible": 1,
            "index": 867,
            "x": 150.67498265654996,
            "y": 253.0751856690979,
            "vy": 0,
            "vx": 0,
            "r": 1.2107081174438687,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Comparing Visual-Interactive Labeling with Active Learning: An Experimental Study",
                "DOI": "10.1109/tvcg.2017.2744818",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744818",
                "FirstPage": 298,
                "LastPage": 308,
                "PaperType": "J",
                "Abstract": "Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling.",
                "AuthorNamesDeduped": "Jürgen Bernard;Marco Hutter 0002;Matthias Zeppelzauer;Dieter W. Fellner;Michael Sedlmair",
                "AuthorNames": "Jürgen Bernard;Marco Hutter;Matthias Zeppelzauer;Dieter Fellner;Michael Sedlmair",
                "AuthorAffiliation": "Technische Universität Darmstadt, Darmstadt, Germany;Technische Universität Darmstadt, Darmstadt, Germany;St. Pölten University of Applied Sciences, St. Pölten, Austria;Fraunhofer IGD, Darmstadt, Germany;University of Vienna, Vienna, Austria",
                "InternalReferences": "0.1109/vast.2014.7042480;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/vast.2010.5652392;10.1109/tvcg.2014.2346482;10.1109/tvcg.2016.2598589;10.1109/tvcg.2016.2598495;10.1109/tvcg.2013.153;10.1109/tvcg.2015.2467717;10.1109/vast.2010.5652484",
                "AuthorKeywords": "Labeling,Visual-Interactive Labeling,Information Visualization,Visual Analytics,Active Learning,Machine Learning,Classification,Evaluation,Experiment,Dimensionality Reduction",
                "AminerCitationCount": 128,
                "CitationCountCrossRef": 83,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 2851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 844,
                "i": [
                    844
                ]
            }
        },
        {
            "name": "Joscha Eirich",
            "value": 10,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 868,
            "x": -282.21539422998893,
            "y": -84.87915680313935,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "IRVINE: A Design Study on Analyzing Correlation Patterns of Electrical Engines",
                "DOI": "10.1109/tvcg.2021.3114797",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114797",
                "FirstPage": 11,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "In this design study, we present IRVINE, a Visual Analytics (VA) system, which facilitates the analysis of acoustic data to detect and understand previously unknown errors in the manufacturing of electrical engines. In serial manufacturing processes, signatures from acoustic data provide valuable information on how the relationship between multiple produced engines serves to detect and understand previously unknown errors. To analyze such signatures, IRVINE leverages interactive clustering and data labeling techniques, allowing users to analyze clusters of engines with similar signatures, drill down to groups of engines, and select an engine of interest. Furthermore, IRVINE allows to assign labels to engines and clusters and annotate the cause of an error in the acoustic raw measurement of an engine. Since labels and annotations represent valuable knowledge, they are conserved in a knowledge database to be available for other stakeholders. We contribute a design study, where we developed IRVINE in four main iterations with engineers from a company in the automotive sector. To validate IRVINE, we conducted a field study with six domain experts. Our results suggest a high usability and usefulness of IRVINE as part of the improvement of a real-world manufacturing process. Specifically, with IRVINE domain experts were able to label and annotate produced electrical engines more than 30% faster.",
                "AuthorNamesDeduped": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorNames": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorAffiliation": "University of Bamberg, Germany and BMW Group, Germany;Fraunhofer IWU, Germany and BMW Group, Germany;BMW Group, Germany;University of Stuttgart, Germany;University of Bamberg, Germany;University of Bamberg, Germany;Graz University of Technology, Austria;University of Zurich, Switzerland",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.178;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2020.3030347;10.1109/tvcg.2017.2744805;10.1109/tvcg.2016.2598495;10.1109/tvcg.2012.213;10.1109/tvcg.2009.111;10.1109/tvcg.2011.185;10.1109/tvcg.2012.255",
                "AuthorKeywords": "Design study,interactive labeling,interactive clustering,H.5.2 [Information Interfaces and Presentation],User Interfaces—Graphical user interfaces (GUI),User-centered design",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1424,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 272,
                "i": [
                    272
                ]
            }
        },
        {
            "name": "Jakob Bonart",
            "value": 10,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 869,
            "x": 265.58466179647087,
            "y": -128.12020690919206,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "IRVINE: A Design Study on Analyzing Correlation Patterns of Electrical Engines",
                "DOI": "10.1109/tvcg.2021.3114797",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114797",
                "FirstPage": 11,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "In this design study, we present IRVINE, a Visual Analytics (VA) system, which facilitates the analysis of acoustic data to detect and understand previously unknown errors in the manufacturing of electrical engines. In serial manufacturing processes, signatures from acoustic data provide valuable information on how the relationship between multiple produced engines serves to detect and understand previously unknown errors. To analyze such signatures, IRVINE leverages interactive clustering and data labeling techniques, allowing users to analyze clusters of engines with similar signatures, drill down to groups of engines, and select an engine of interest. Furthermore, IRVINE allows to assign labels to engines and clusters and annotate the cause of an error in the acoustic raw measurement of an engine. Since labels and annotations represent valuable knowledge, they are conserved in a knowledge database to be available for other stakeholders. We contribute a design study, where we developed IRVINE in four main iterations with engineers from a company in the automotive sector. To validate IRVINE, we conducted a field study with six domain experts. Our results suggest a high usability and usefulness of IRVINE as part of the improvement of a real-world manufacturing process. Specifically, with IRVINE domain experts were able to label and annotate produced electrical engines more than 30% faster.",
                "AuthorNamesDeduped": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorNames": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorAffiliation": "University of Bamberg, Germany and BMW Group, Germany;Fraunhofer IWU, Germany and BMW Group, Germany;BMW Group, Germany;University of Stuttgart, Germany;University of Bamberg, Germany;University of Bamberg, Germany;Graz University of Technology, Austria;University of Zurich, Switzerland",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.178;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2020.3030347;10.1109/tvcg.2017.2744805;10.1109/tvcg.2016.2598495;10.1109/tvcg.2012.213;10.1109/tvcg.2009.111;10.1109/tvcg.2011.185;10.1109/tvcg.2012.255",
                "AuthorKeywords": "Design study,interactive labeling,interactive clustering,H.5.2 [Information Interfaces and Presentation],User Interfaces—Graphical user interfaces (GUI),User-centered design",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1424,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 272,
                "i": [
                    272
                ]
            }
        },
        {
            "name": "Ute Schmid",
            "value": 10,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 870,
            "x": -109.35273618890946,
            "y": 274.02915736833324,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "IRVINE: A Design Study on Analyzing Correlation Patterns of Electrical Engines",
                "DOI": "10.1109/tvcg.2021.3114797",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114797",
                "FirstPage": 11,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "In this design study, we present IRVINE, a Visual Analytics (VA) system, which facilitates the analysis of acoustic data to detect and understand previously unknown errors in the manufacturing of electrical engines. In serial manufacturing processes, signatures from acoustic data provide valuable information on how the relationship between multiple produced engines serves to detect and understand previously unknown errors. To analyze such signatures, IRVINE leverages interactive clustering and data labeling techniques, allowing users to analyze clusters of engines with similar signatures, drill down to groups of engines, and select an engine of interest. Furthermore, IRVINE allows to assign labels to engines and clusters and annotate the cause of an error in the acoustic raw measurement of an engine. Since labels and annotations represent valuable knowledge, they are conserved in a knowledge database to be available for other stakeholders. We contribute a design study, where we developed IRVINE in four main iterations with engineers from a company in the automotive sector. To validate IRVINE, we conducted a field study with six domain experts. Our results suggest a high usability and usefulness of IRVINE as part of the improvement of a real-world manufacturing process. Specifically, with IRVINE domain experts were able to label and annotate produced electrical engines more than 30% faster.",
                "AuthorNamesDeduped": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorNames": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorAffiliation": "University of Bamberg, Germany and BMW Group, Germany;Fraunhofer IWU, Germany and BMW Group, Germany;BMW Group, Germany;University of Stuttgart, Germany;University of Bamberg, Germany;University of Bamberg, Germany;Graz University of Technology, Austria;University of Zurich, Switzerland",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.178;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2020.3030347;10.1109/tvcg.2017.2744805;10.1109/tvcg.2016.2598495;10.1109/tvcg.2012.213;10.1109/tvcg.2009.111;10.1109/tvcg.2011.185;10.1109/tvcg.2012.255",
                "AuthorKeywords": "Design study,interactive labeling,interactive clustering,H.5.2 [Information Interfaces and Presentation],User Interfaces—Graphical user interfaces (GUI),User-centered design",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1424,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 272,
                "i": [
                    272
                ]
            }
        },
        {
            "name": "Kai Fischbach",
            "value": 10,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 871,
            "x": -104.53072068933552,
            "y": -276.08572659985185,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "IRVINE: A Design Study on Analyzing Correlation Patterns of Electrical Engines",
                "DOI": "10.1109/tvcg.2021.3114797",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114797",
                "FirstPage": 11,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "In this design study, we present IRVINE, a Visual Analytics (VA) system, which facilitates the analysis of acoustic data to detect and understand previously unknown errors in the manufacturing of electrical engines. In serial manufacturing processes, signatures from acoustic data provide valuable information on how the relationship between multiple produced engines serves to detect and understand previously unknown errors. To analyze such signatures, IRVINE leverages interactive clustering and data labeling techniques, allowing users to analyze clusters of engines with similar signatures, drill down to groups of engines, and select an engine of interest. Furthermore, IRVINE allows to assign labels to engines and clusters and annotate the cause of an error in the acoustic raw measurement of an engine. Since labels and annotations represent valuable knowledge, they are conserved in a knowledge database to be available for other stakeholders. We contribute a design study, where we developed IRVINE in four main iterations with engineers from a company in the automotive sector. To validate IRVINE, we conducted a field study with six domain experts. Our results suggest a high usability and usefulness of IRVINE as part of the improvement of a real-world manufacturing process. Specifically, with IRVINE domain experts were able to label and annotate produced electrical engines more than 30% faster.",
                "AuthorNamesDeduped": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorNames": "Joscha Eirich;Jakob Bonart;Dominik Jäckle;Michael Sedlmair;Ute Schmid;Kai Fischbach;Tobias Schreck;Jürgen Bernard",
                "AuthorAffiliation": "University of Bamberg, Germany and BMW Group, Germany;Fraunhofer IWU, Germany and BMW Group, Germany;BMW Group, Germany;University of Stuttgart, Germany;University of Bamberg, Germany;University of Bamberg, Germany;Graz University of Technology, Austria;University of Zurich, Switzerland",
                "InternalReferences": "0.1109/vast.2009.5332584;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/tvcg.2013.178;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2020.3030347;10.1109/tvcg.2017.2744805;10.1109/tvcg.2016.2598495;10.1109/tvcg.2012.213;10.1109/tvcg.2009.111;10.1109/tvcg.2011.185;10.1109/tvcg.2012.255",
                "AuthorKeywords": "Design study,interactive labeling,interactive clustering,H.5.2 [Information Interfaces and Presentation],User Interfaces—Graphical user interfaces (GUI),User-centered design",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1424,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 272,
                "i": [
                    272
                ]
            }
        },
        {
            "name": "Jörn Kohlhammer",
            "value": 121,
            "numPapers": 9,
            "cluster": "3",
            "visible": 1,
            "index": 872,
            "x": 263.72210241500426,
            "y": 133.04379992246913,
            "vy": 0,
            "vx": 0,
            "r": 1.1393206678180772,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Guiding feature subset selection with an interactive visualization",
                "DOI": "10.1109/vast.2011.6102448",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102448",
                "FirstPage": 111,
                "LastPage": 120,
                "PaperType": "C",
                "Abstract": "We propose a method for the semi-automated refinement of the results of feature subset selection algorithms. Feature subset selection is a preliminary step in data analysis which identifies the most useful subset of features (columns) in a data table. So-called filter techniques use statistical ranking measures for the correlation of features. Usually a measure is applied to all entities (rows) of a data table. However, the differing contributions of subsets of data entities are masked by statistical aggregation. Feature and entity subset selection are, thus, highly interdependent. Due to the difficulty in visualizing a high-dimensional data table, most feature subset selection algorithms are applied as a black box at the outset of an analysis. Our visualization technique, SmartStripes, allows users to step into the feature subset selection process. It enables the investigation of dependencies and interdependencies between different feature and entity subsets. A user may even choose to control the iterations manually, taking into account the ranking measures, the contributions of different entity subsets, as well as the semantics of the features.",
                "AuthorNamesDeduped": "Thorsten May;Andreas Bannach;James Davey;Tobias Ruppert;Jörn Kohlhammer",
                "AuthorNames": "Thorsten May;Andreas Bannach;James Davey;Tobias Ruppert;Jörn Kohlhammer",
                "AuthorAffiliation": "Fraunhofer Institute of Computer Graphics Research (IGD), Darmstadt, Germany;Fraunhofer Institute of Computer Graphics Research (IGD), Darmstadt, Germany;Fraunhofer Institute of Computer Graphics Research (IGD), Darmstadt, Germany;Fraunhofer Institute of Computer Graphics Research (IGD), Darmstadt, Germany;Fraunhofer Institute of Computer Graphics Research (IGD), Darmstadt, Germany",
                "InternalReferences": "0.1109/vast.2010.5652392;10.1109/infvis.2003.1249006;10.1109/tvcg.2009.153;10.1109/tvcg.2008.153",
                "AuthorKeywords": null,
                "AminerCitationCount": 57,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 1427,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1588,
                "i": [
                    1588
                ]
            }
        },
        {
            "name": "Florian Heimerl",
            "value": 226,
            "numPapers": 52,
            "cluster": "1",
            "visible": 1,
            "index": 873,
            "x": -284.4931596612814,
            "y": 80.08521777419735,
            "vy": 0,
            "vx": 0,
            "r": 1.2602187679907888,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "CAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs",
                "DOI": "10.1109/tvcg.2020.3030443",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030443",
                "FirstPage": 1731,
                "LastPage": 1741,
                "PaperType": "J",
                "Abstract": "Most visual analytics systems assume that all foraging for data happens before the analytics process; once analysis begins, the set of data attributes considered is fixed. Such separation of data construction from analysis precludes iteration that can enable foraging informed by the needs that arise in-situ during the analysis. The separation of the foraging loop from the data analysis tasks can limit the pace and scope of analysis. In this paper, we present CAVA, a system that integrates data curation and data augmentation with the traditional data exploration and analysis tasks, enabling information foraging in-situ during analysis. Identifying attributes to add to the dataset is difficult because it requires human knowledge to determine which available attributes will be helpful for the ensuing analytical tasks. CAVA crawls knowledge graphs to provide users with a a broad set of attributes drawn from external data to choose from. Users can then specify complex operations on knowledge graphs to construct additional attributes. CAVA shows how visual analytics can help users forage for attributes by letting users visually explore the set of available data, and by serving as an interface for query construction. It also provides visualizations of the knowledge graph itself to help users understand complex joins such as multi-hop aggregations. We assess the ability of our system to enable users to perform complex data combinations without programming in a user study over two datasets. We then demonstrate the generalizability of CAVA through two additional usage scenarios. The results of the evaluation confirm that CAVA is effective in helping the user perform data foraging that leads to improved analysis outcomes, and offer evidence in support of integrating data augmentation as a part of the visual analytics pipeline.",
                "AuthorNamesDeduped": "Dylan Cashman;Shenyu Xu;Subhajit Das 0002;Florian Heimerl;Cong Liu;Shah Rukh Humayoun;Michael Gleicher;Alex Endert;Remco Chang",
                "AuthorNames": "Dylan Cashman;Shenyu Xu;Subhajit Das;Florian Heimerl;Cong Liu;Shah Rukh Humayoun;Michael Gleicher;Alex Endert;Remco Chang",
                "AuthorAffiliation": "Tufts University;Georgia Tech.;Georgia Tech.;University of Wisconsin, Madison;Tufts University;San Francisco State University;University of Wisconsin, Madison;Georgia Tech.;Tufts University",
                "InternalReferences": "0.1109/tvcg.2012.254;10.1109/tvcg.2008.178;10.1109/tvcg.2015.2467971;10.1109/tvcg.2018.2864838;10.1109/tvcg.2006.142",
                "AuthorKeywords": "Visual Analytics,Information Foraging,Data Augmentation",
                "AminerCitationCount": 10,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 870,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 486,
                "i": [
                    486
                ]
            }
        },
        {
            "name": "Harald Bosch",
            "value": 293,
            "numPapers": 25,
            "cluster": "1",
            "visible": 1,
            "index": 874,
            "x": 155.76870169440068,
            "y": -251.3684776825464,
            "vy": 0,
            "vx": 0,
            "r": 1.3373632700057572,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Visual Classifier Training for Text Document Retrieval",
                "DOI": "10.1109/tvcg.2012.277",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.277",
                "FirstPage": 2839,
                "LastPage": 2848,
                "PaperType": "J",
                "Abstract": "Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.",
                "AuthorNamesDeduped": "Florian Heimerl;Steffen Koch 0001;Harald Bosch;Thomas Ertl",
                "AuthorNames": "Florian Heimerl;Steffen Koch;Harald Bosch;Thomas Ertl",
                "AuthorAffiliation": "Institute for Visualization and Interactive Systems, Universität Stuttgart, Germany;Institute for Visualization and Interactive Systems, Universität Stuttgart, Germany;Institute for Visualization and Interactive Systems, Universität Stuttgart, Germany;Institute for Visualization and Interactive Systems, Universität Stuttgart, Germany",
                "InternalReferences": "0.1109/vast.2011.6102449;10.1109/vast.2011.6102453;10.1109/vast.2007.4389006;10.1109/vast.2012.6400492;10.1109/infvis.2004.37",
                "AuthorKeywords": "Visual analytics, human computer interaction, information retrieval, active learning, classification, user evaluation",
                "AminerCitationCount": 177,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2502,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1488,
                "i": [
                    1488
                ]
            }
        },
        {
            "name": "Stephen C. North",
            "value": 234,
            "numPapers": 33,
            "cluster": "4",
            "visible": 1,
            "index": 875,
            "x": 54.96937625241556,
            "y": 290.73762686384504,
            "vy": 0,
            "vx": 0,
            "r": 1.2694300518134716,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis",
                "DOI": "10.1109/tvcg.2016.2598495",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598495",
                "FirstPage": 241,
                "LastPage": 250,
                "PaperType": "J",
                "Abstract": "Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.",
                "AuthorNamesDeduped": "Dominik Sacha;Leishi Zhang;Michael Sedlmair;John Aldo Lee;Jaakko Peltonen;Daniel Weiskopf;Stephen C. North;Daniel A. Keim",
                "AuthorNames": "Dominik Sacha;Leishi Zhang;Michael Sedlmair;John A. Lee;Jaakko Peltonen;Daniel Weiskopf;Stephen C. North;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Middlesex University, UK;University of Vienna, Austria;SSS, Belgian F.R.S.-FNRS.;Helsinki Institute for Information Technology HIIT, University of Tampere, Finland;University of Konstanz, Germany;Infovisible LLC, Oldwick, U.S.A.;VISUS, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/tvcg.2012.195;10.1109/tvcg.2009.153;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346481;10.1109/vast.2011.6102449;10.1109/tvcg.2007.70515;10.1109/vast.2008.4677350;10.1109/vast.2009.5332629;10.1109/vast.2010.5652443;10.1109/vast.2014.7042492;10.1109/tvcg.2015.2467132;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346321;10.1109/tvcg.2013.153;10.1109/vast.2010.5652484;10.1109/tvcg.2006.156;10.1109/tvcg.2015.2467717;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/vast.2010.5652392;10.1109/tvcg.2013.126",
                "AuthorKeywords": "Interactive visualization;machine learning;visual analytics;dimensionality reduction",
                "AminerCitationCount": 248,
                "CitationCountCrossRef": 155,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 4185,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 957,
                "i": [
                    957
                ]
            }
        },
        {
            "name": "Ed Huai-hsin Chi",
            "value": 192,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 876,
            "x": -237.05842114399763,
            "y": -177.3507963464361,
            "vy": 0,
            "vx": 0,
            "r": 1.221070811744387,
            "node": {
                "Conference": "InfoVis",
                "Year": 2000,
                "Title": "A taxonomy of visualization techniques using the data state reference model",
                "DOI": "10.1109/infvis.2000.885092",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2000.885092",
                "FirstPage": 69,
                "LastPage": 75,
                "PaperType": "C",
                "Abstract": "In previous work, researchers have attempted to construct taxonomies of information visualization techniques by examining the data domains that are compatible with these techniques. This is useful because implementers can quickly identify various techniques that can be applied to their domain of interest. However, these taxonomies do not help the implementers understand how to apply and implement these techniques. The author extends and proposes a new way to taxonomize information visualization techniques by using the Data State Model (E.H. Chi and J.T. Reidl, 1998). In fact, as the taxonomic analysis in the paper will show, many of the techniques share similar operating steps that can easily be reused. The paper shows that the Data State Model not only helps researchers understand the space of design, but also helps implementers understand how information visualization techniques can be applied more broadly.",
                "AuthorNamesDeduped": "Ed Huai-hsin Chi",
                "AuthorNames": "E.H. Chi",
                "AuthorAffiliation": "Xerox Palo Alto Research Center, Palo Alto, CA, USA",
                "InternalReferences": "0.1109/infvis.1997.636761;10.1109/infvis.1997.636792;10.1109/infvis.1998.729560",
                "AuthorKeywords": "Information Visualization, Data State Model,Reference Model, Taxonomy, Techniques, Operators",
                "AminerCitationCount": 848,
                "CitationCountCrossRef": 187,
                "PubsCitedCrossRef": 8,
                "DownloadsXplore": 3703,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2939,
                "i": [
                    2939
                ]
            }
        },
        {
            "name": "Xinhai Wei",
            "value": 7,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 877,
            "x": 294.76624950502367,
            "y": -29.374447275517,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Interactive Dimensionality Reduction for Comparative Analysis",
                "DOI": "10.1109/tvcg.2021.3114807",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114807",
                "FirstPage": 758,
                "LastPage": 768,
                "PaperType": "J",
                "Abstract": "Finding the similarities and differences between groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis as each method is designed only for a narrow analysis target, such as identifying factors that most differentiate groups. This paper presents an interactive DR framework where we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, the interactive visualization interface facilitates interpretation and refinement of the ULCA results. We evaluate ULCA and the optimization algorithm to show their efficiency as well as present multiple case studies using real-world datasets to demonstrate the usefulness of this framework.",
                "AuthorNamesDeduped": "Takanori Fujiwara;Xinhai Wei;Jian Zhao 0010;Kwan-Liu Ma",
                "AuthorNames": "Takanori Fujiwara;Xinhai Wei;Jian Zhao;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California, Davis, United States;University of Waterloo, Canada;University of Waterloo, Canada;University of California, Davis, United States",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/vast.2012.6400486;10.1109/tvcg.2018.2865047;10.1109/vast.2011.6102449;10.1109/tvcg.2019.2934433;10.1109/tvcg.2019.2934251;10.1109/tvcg.2013.157;10.1109/tvcg.2017.2744199;10.1109/tvcg.2009.153;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467132;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598839;10.1109/tvcg.2017.2745258",
                "AuthorKeywords": "Dimensionality reduction,discriminant analysis,contrastive learning,comparative analysis,interpretability,visual analytics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 99,
                "DownloadsXplore": 1249,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 274,
                "i": [
                    274
                ]
            }
        },
        {
            "name": "Jibonananda Sanyal",
            "value": 311,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 878,
            "x": -197.6218135286188,
            "y": 220.8973037808742,
            "vy": 0,
            "vx": 0,
            "r": 1.3580886586067933,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Characterizing Provenance in Visualization and Data Analysis: An Organizational Framework of Provenance Types and Purposes",
                "DOI": "10.1109/tvcg.2015.2467551",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467551",
                "FirstPage": 31,
                "LastPage": 40,
                "PaperType": "J",
                "Abstract": "While the primary goal of visual analytics research is to improve the quality of insights and findings, a substantial amount of research in provenance has focused on the history of changes and advances throughout the analysis process. The term, provenance, has been used in a variety of ways to describe different types of records and histories related to visualization. The existing body of provenance research has grown to a point where the consolidation of design knowledge requires cross-referencing a variety of projects and studies spanning multiple domain areas. We present an organizational framework of the different types of provenance information and purposes for why they are desired in the field of visual analytics. Our organization is intended to serve as a framework to help researchers specify types of provenance and coordinate design knowledge across projects. We also discuss the relationships between these factors and the methods used to capture provenance information. In addition, our organization can be used to guide the selection of evaluation methodology and the comparison of study outcomes in provenance research.",
                "AuthorNamesDeduped": "Eric D. Ragan;Alex Endert;Jibonananda Sanyal;Jian Chen 0006",
                "AuthorNames": "Eric D. Ragan;Alex Endert;Jibonananda Sanyal;Jian Chen",
                "AuthorAffiliation": "Texas A&M University;Georgia Tech;Oak Ridge National Laboratory;University of Maryland, Baltimore County",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/visual.2005.1532788;10.1109/tvcg.2013.155;10.1109/visual.1993.398857;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2010.5652932;10.1109/vast.2008.4677365;10.1109/tvcg.2008.137;10.1109/tvcg.2013.126;10.1109/vast.2009.5333020;10.1109/vast.2010.5653598;10.1109/tvcg.2012.271;10.1109/tvcg.2014.2346573;10.1109/vast.2008.4677366;10.1109/tvcg.2013.130;10.1109/tvcg.2010.181;10.1109/tvcg.2010.179;10.1109/visual.1990.146375",
                "AuthorKeywords": "Provenance, Analytic provenance, Visual analytics, Framework, Visualization, Conceptual model",
                "AminerCitationCount": 221,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 97,
                "DownloadsXplore": 2752,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1099,
                "i": [
                    1099
                ]
            }
        },
        {
            "name": "Leishi Zhang",
            "value": 117,
            "numPapers": 22,
            "cluster": "4",
            "visible": 1,
            "index": 879,
            "x": -3.4957977502708504,
            "y": -296.54304813650447,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Visual analytics for the big data era---A comparative review of state-of-the-art commercial systems",
                "DOI": "10.1109/vast.2012.6400554",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400554",
                "FirstPage": 173,
                "LastPage": 182,
                "PaperType": "C",
                "Abstract": "Visual analytics (VA) system development started in academic research institutions where novel visualization techniques and open source toolkits were developed. Simultaneously, small software companies, sometimes spin-offs from academic research institutions, built solutions for specific application domains. In recent years we observed the following trend: some small VA companies grew exponentially; at the same time some big software vendors such as IBM and SAP started to acquire successful VA companies and integrated the acquired VA components into their existing frameworks. Generally the application domains of VA systems have broadened substantially. This phenomenon is driven by the generation of more and more data of high volume and complexity, which leads to an increasing demand for VA solutions from many application domains. In this paper we survey a selection of state-of-the-art commercial VA frameworks, complementary to an existing survey on open source VA tools. From the survey results we identify several improvement opportunities as future research directions.",
                "AuthorNamesDeduped": "Leishi Zhang;Andreas Stoffel;Michael Behrisch 0001;Sebastian Mittelstädt;Tobias Schreck;René Pompl;Stefan Weber 0004;Holger Last;Daniel A. Keim",
                "AuthorNames": "Leishi Zhang;Andreas Stoffel;Michael Behrisch;Sebastian Mittelstadt;Tobias Schreck;René Pompl;Stefan Weber;Holger Last;Daniel Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;Siemens AG;Siemens AG;Siemens AG;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.12;10.1109/infvis.2004.64;10.1109/infvis.2000.885098",
                "AuthorKeywords": null,
                "AminerCitationCount": 229,
                "CitationCountCrossRef": 97,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 3861,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1490,
                "i": [
                    1490
                ]
            }
        },
        {
            "name": "John Aldo Lee",
            "value": 105,
            "numPapers": 20,
            "cluster": "4",
            "visible": 1,
            "index": 880,
            "x": 203.0049542865754,
            "y": 216.42317005141896,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis",
                "DOI": "10.1109/tvcg.2016.2598495",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598495",
                "FirstPage": 241,
                "LastPage": 250,
                "PaperType": "J",
                "Abstract": "Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.",
                "AuthorNamesDeduped": "Dominik Sacha;Leishi Zhang;Michael Sedlmair;John Aldo Lee;Jaakko Peltonen;Daniel Weiskopf;Stephen C. North;Daniel A. Keim",
                "AuthorNames": "Dominik Sacha;Leishi Zhang;Michael Sedlmair;John A. Lee;Jaakko Peltonen;Daniel Weiskopf;Stephen C. North;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Middlesex University, UK;University of Vienna, Austria;SSS, Belgian F.R.S.-FNRS.;Helsinki Institute for Information Technology HIIT, University of Tampere, Finland;University of Konstanz, Germany;Infovisible LLC, Oldwick, U.S.A.;VISUS, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/tvcg.2012.195;10.1109/tvcg.2009.153;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346481;10.1109/vast.2011.6102449;10.1109/tvcg.2007.70515;10.1109/vast.2008.4677350;10.1109/vast.2009.5332629;10.1109/vast.2010.5652443;10.1109/vast.2014.7042492;10.1109/tvcg.2015.2467132;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346321;10.1109/tvcg.2013.153;10.1109/vast.2010.5652484;10.1109/tvcg.2006.156;10.1109/tvcg.2015.2467717;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/vast.2010.5652392;10.1109/tvcg.2013.126",
                "AuthorKeywords": "Interactive visualization;machine learning;visual analytics;dimensionality reduction",
                "AminerCitationCount": 248,
                "CitationCountCrossRef": 155,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 4185,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 957,
                "i": [
                    957
                ]
            }
        },
        {
            "name": "Jaakko Peltonen",
            "value": 112,
            "numPapers": 30,
            "cluster": "4",
            "visible": 1,
            "index": 881,
            "x": -296.04925743309053,
            "y": -22.46858191599263,
            "vy": 0,
            "vx": 0,
            "r": 1.128957973517559,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Edge-Path Bundling: A Less Ambiguous Edge Bundling Approach",
                "DOI": "10.1109/tvcg.2021.3114795",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114795",
                "FirstPage": 313,
                "LastPage": 323,
                "PaperType": "J",
                "Abstract": "Edge bundling techniques cluster edges with similar attributes (i.e. similarity in direction and proximity) together to reduce the visual clutter. All edge bundling techniques to date implicitly or explicitly cluster groups of individual edges, or parts of them, together based on these attributes. These clusters can result in ambiguous connections that do not exist in the data. Confluent drawings of networks do not have these ambiguities, but require the layout to be computed as part of the bundling process. We devise a new bundling method, Edge-Path bundling, to simplify edge clutter while greatly reducing ambiguities compared to previous bundling techniques. Edge-Path bundling takes a layout as input and clusters each edge along a weighted, shortest path to limit its deviation from a straight line. Edge-Path bundling does not incur independent edge ambiguities typically seen in all edge bundling methods, and the level of bundling can be tuned through shortest path distances, Euclidean distances, and combinations of the two. Also, directed edge bundling naturally emerges from the model. Through metric evaluations, we demonstrate the advantages of Edge-Path bundling over other techniques.",
                "AuthorNamesDeduped": "Markus Wallinger;Daniel Archambault;David Auber;Martin Nöllenburg;Jaakko Peltonen",
                "AuthorNames": "Markus Wallinger;Daniel Archambault;David Auber;Martin Nöllenburg;Jaakko Peltonen",
                "AuthorAffiliation": "TU Wien, Austria;Swansea University, United Kingdom;University of Bordeaux, France;TU Wien, Austria;Tampere University, Finland",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2006.147;10.1109/visual.1991.175815;10.1109/infvis.2005.1532150;10.1109/tvcg.2011.190;10.1109/infvis.2004.43;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008",
                "AuthorKeywords": "Graph/network and tree data,algorithms,edge bundling",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 818,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 334,
                "i": [
                    334
                ]
            }
        },
        {
            "name": "Jian Zhang 0070",
            "value": 95,
            "numPapers": 82,
            "cluster": "5",
            "visible": 1,
            "index": 882,
            "x": 233.60721069529308,
            "y": -183.51477082557952,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "KD-Box: Line-segment-based KD-tree for Interactive Exploration of Large-scale Time-Series Data",
                "DOI": "10.1109/tvcg.2021.3114865",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114865",
                "FirstPage": 890,
                "LastPage": 900,
                "PaperType": "J",
                "Abstract": "Time-series data-usually presented in the form of lines-plays an important role in many domains such as finance, meteorology, health, and urban informatics. Yet, little has been done to support interactive exploration of large-scale time-series data, which requires a clutter-free visual representation with low-latency interactions. In this paper, we contribute a novel line-segment-based KD-tree method to enable interactive analysis of many time series. Our method enables not only fast queries over time series in selected regions of interest but also a line splatting method for efficient computation of the density field and selection of representative lines. Further, we develop KD-Box, an interactive system that provides rich interactions, e.g., timebox, attribute filtering, and coordinated multiple views. We demonstrate the effectiveness of KD-Box in supporting efficient line query and density field computation through a quantitative comparison and show its usefulness for interactive visual analysis on several real-world datasets.",
                "AuthorNamesDeduped": "Yue Zhao;Yunhai Wang;Jian Zhang 0070;Chi-Wing Fu;Mingliang Xu;Dominik Moritz",
                "AuthorNames": "Yue Zhao;Yunhai Wang;Jian Zhang;Chi-Wing Fu;Mingliang Xu;Dominik Moritz",
                "AuthorAffiliation": "Shandong University, Qingdao, China;Shandong University, Qingdao, China;CNIC, CAS., United States;Chinese University of Hong Kong, China;Zhengzhou University, China;Carnegie Mellon University, United States",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.226;10.1109/vast.2008.4677357;10.1109/tvcg.2010.176;10.1109/tvcg.2010.162;10.1109/tvcg.2013.179;10.1109/tvcg.2014.2346452;10.1109/visual.2005.1532779;10.1109/tvcg.2006.170;10.1109/tvcg.2011.181;10.1109/infvis.1999.801851;10.1109/infvis.2001.963273;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Many time series,density-based visualization,interactive visualization for large-scale data",
                "AminerCitationCount": 4,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1015,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 275,
                "i": [
                    275
                ]
            }
        },
        {
            "name": "Arie E. Kaufman",
            "value": 542,
            "numPapers": 134,
            "cluster": "6",
            "visible": 1,
            "index": 883,
            "x": -48.31959361114397,
            "y": 293.28350937830425,
            "vy": 0,
            "vx": 0,
            "r": 1.6240644789867589,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Generation of Transfer Functions with Stochastic Search Technique",
                "DOI": "10.1109/visual.1996.568113",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568113",
                "FirstPage": 227,
                "LastPage": 234,
                "PaperType": "C",
                "Abstract": "This paper presents a novel approach to assist the user in exploring appropriate transfer functions for the visualization of volumetric datasets. The search for a transfer function is treated as a parameter optimization problem and addressed with stochastic search techniques. Starting from an initial population of (random or pre-defined) transfer functions, the evolution of the stochastic algorithms is controlled by either direct user selection of intermediate images or automatic fitness evaluation using user-specified objective functions. This approach essentially shields the user from the complex and tedious \"trial and error\" approach, and demonstrates effective and convenient generation of transfer functions.",
                "AuthorNamesDeduped": "Taosong He;Lichan Hong;Arie E. Kaufman;Hanspeter Pfister",
                "AuthorNames": "Taosong He;Lichan Hong;A. Kaufman;H. Pfister",
                "AuthorAffiliation": "Department of Computer Science\nState University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 342,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 189,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3371,
                "i": [
                    3371
                ]
            }
        },
        {
            "name": "Joseph Marino",
            "value": 40,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 884,
            "x": -162.57272643124128,
            "y": -249.03836776832762,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Planar Visualization of Treelike Structures",
                "DOI": "10.1109/tvcg.2015.2467413",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467413",
                "FirstPage": 906,
                "LastPage": 915,
                "PaperType": "J",
                "Abstract": "We present a novel method to create planar visualizations of treelike structures (e.g., blood vessels and airway trees) where the shape of the object is well preserved, allowing for easy recognition by users familiar with the structures. Based on the extracted skeleton within the treelike object, a radial planar embedding is first obtained such that there are no self-intersections of the skeleton which would have resulted in occlusions in the final view. An optimization procedure which adjusts the angular positions of the skeleton nodes is then used to reconstruct the shape as closely as possible to the original, according to a specified view plane, which thus preserves the global geometric context of the object. Using this shape recovered embedded skeleton, the object surface is then flattened to the plane without occlusions using harmonic mapping. The boundary of the mesh is adjusted during the flattening step to account for regions where the mesh is stretched over concavities. This parameterized surface can then be used either as a map for guidance during endoluminal navigation or directly for interrogation and decision making. Depth cues are provided with a grayscale border to aid in shape understanding. Examples are presented using bronchial trees, cranial and lower limb blood vessels, and upper aorta datasets, and the results are evaluated quantitatively and with a user study.",
                "AuthorNamesDeduped": "Joseph Marino;Arie E. Kaufman",
                "AuthorNames": "Joseph Marino;Arie Kaufman",
                "AuthorAffiliation": "Computer Science Department, Stony Brook University;Computer Science Department, Stony Brook University",
                "InternalReferences": "0.1109/tvcg.2011.235;10.1109/visual.2001.964540;10.1109/tvcg.2011.192;10.1109/tvcg.2014.2346406;10.1109/visual.2001.964538;10.1109/visual.2004.75;10.1109/visual.2002.1183754;10.1109/visual.2003.1250353;10.1109/tvcg.2011.182;10.1109/tvcg.2006.172",
                "AuthorKeywords": "Geometry-based techniques, view-dependent visualization, medical visualization, planar embedding",
                "AminerCitationCount": 25,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 727,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1063,
                "i": [
                    1063
                ]
            }
        },
        {
            "name": "Robert A. Amar",
            "value": 237,
            "numPapers": 9,
            "cluster": "5",
            "visible": 1,
            "index": 885,
            "x": 288.26188306229585,
            "y": 73.85855924250933,
            "vy": 0,
            "vx": 0,
            "r": 1.2728842832469776,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Low-level components of analytic activity in information visualization",
                "DOI": "10.1109/infvis.2005.1532136",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532136",
                "FirstPage": 111,
                "LastPage": 117,
                "PaperType": "C",
                "Abstract": "Existing system level taxonomies of visualization tasks are geared more towards the design of particular representations than the facilitation of user analytic activity. We present a set of ten low level analysis tasks that largely capture people's activities while employing information visualization tools for understanding data. To help develop these tasks, we collected nearly 200 sample questions from students about how they would analyze five particular data sets from different domains. The questions, while not being totally comprehensive, illustrated the sheer variety of analytic questions typically posed by users when employing information visualization systems. We hope that the presented set of tasks is useful for information visualization system designers as a kind of common substrate to discuss the relative analytic capabilities of the systems. Further, the tasks may provide a form of checklist for system designers.",
                "AuthorNamesDeduped": "Robert A. Amar;James Eagan;John T. Stasko",
                "AuthorNames": "R. Amar;J. Eagan;J. Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology,College of Computing, GVU Center;Georgia Institute of Technology,College of Computing, GVU Center;Georgia Institute of Technology,College of Computing, GVU Center",
                "InternalReferences": "0.1109/visual.1990.146375;10.1109/infvis.1998.729560;10.1109/infvis.2000.885092;10.1109/infvis.2004.5;10.1109/infvis.2001.963289",
                "AuthorKeywords": "Analytic activity, taxonomy, knowledge discovery, design, evaluation",
                "AminerCitationCount": 844,
                "CitationCountCrossRef": 178,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 3816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2330,
                "i": [
                    2330
                ]
            }
        },
        {
            "name": "Stephen Wehrend",
            "value": 84,
            "numPapers": 0,
            "cluster": "5",
            "visible": 1,
            "index": 886,
            "x": -262.59423030674145,
            "y": 140.33627545866406,
            "vy": 0,
            "vx": 0,
            "r": 1.0967184801381693,
            "node": {
                "Conference": "Vis",
                "Year": 1990,
                "Title": "A problem-oriented classification of visualization techniques",
                "DOI": "10.1109/visual.1990.146375",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1990.146375",
                "FirstPage": 139,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "Progress in scientific visualization could be accelerated if workers could more readily find visualization techniques relevant to a given problem. The authors describe an approach to this problem, based on a classification of visualization techniques, that is independent of particular application domains. A user breaks up a problem into subproblems, describes these subproblems in terms of the objects to be represented and the operations to be supported by a representation, locates applicable visualization techniques in a catalog, and combines these representations into a composite representation for the original problem. The catalog and its underlying classification provide a way for workers in different application disciplines to share methods.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Stephen Wehrend;Clayton Lewis",
                "AuthorNames": "S. Wehrend;C. Lewis",
                "AuthorAffiliation": "Center for Advanced Decision Support in Water and Environmental, Systems and Department of Computer Science, University of Colorado, Boulder, CO, USA;Center for Advanced Decision Support in Water and Environmental, Systems and Department of Computer Science, University of Colorado, Boulder, CO, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 483,
                "CitationCountCrossRef": 111,
                "PubsCitedCrossRef": 6,
                "DownloadsXplore": 1146,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3701,
                "i": [
                    3701
                ]
            }
        },
        {
            "name": "Clayton Lewis",
            "value": 84,
            "numPapers": 0,
            "cluster": "5",
            "visible": 1,
            "index": 887,
            "x": 98.88874865941955,
            "y": -281.0178204110446,
            "vy": 0,
            "vx": 0,
            "r": 1.0967184801381693,
            "node": {
                "Conference": "Vis",
                "Year": 1990,
                "Title": "A problem-oriented classification of visualization techniques",
                "DOI": "10.1109/visual.1990.146375",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1990.146375",
                "FirstPage": 139,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "Progress in scientific visualization could be accelerated if workers could more readily find visualization techniques relevant to a given problem. The authors describe an approach to this problem, based on a classification of visualization techniques, that is independent of particular application domains. A user breaks up a problem into subproblems, describes these subproblems in terms of the objects to be represented and the operations to be supported by a representation, locates applicable visualization techniques in a catalog, and combines these representations into a composite representation for the original problem. The catalog and its underlying classification provide a way for workers in different application disciplines to share methods.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Stephen Wehrend;Clayton Lewis",
                "AuthorNames": "S. Wehrend;C. Lewis",
                "AuthorAffiliation": "Center for Advanced Decision Support in Water and Environmental, Systems and Department of Computer Science, University of Colorado, Boulder, CO, USA;Center for Advanced Decision Support in Water and Environmental, Systems and Department of Computer Science, University of Colorado, Boulder, CO, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 483,
                "CitationCountCrossRef": 111,
                "PubsCitedCrossRef": 6,
                "DownloadsXplore": 1146,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3701,
                "i": [
                    3701
                ]
            }
        },
        {
            "name": "Mengyu Zhou",
            "value": 36,
            "numPapers": 24,
            "cluster": "1",
            "visible": 1,
            "index": 888,
            "x": 116.97316928746673,
            "y": 274.1665144886327,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "MultiVision: Designing Analytical Dashboards with Deep Learning Based Recommendation",
                "DOI": "10.1109/tvcg.2021.3114826",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114826",
                "FirstPage": 162,
                "LastPage": 172,
                "PaperType": "J",
                "Abstract": "We contribute a deep-learning-based method that assists in designing analytical dashboards for analyzing a data table. Given a data table, data workers usually need to experience a tedious and time-consuming process to select meaningful combinations of data columns for creating charts. This process is further complicated by the needs of creating dashboards composed of multiple views that unveil different perspectives of data. Existing automated approaches for recommending multiple-view visualizations mainly build on manually crafted design rules, producing sub-optimal or irrelevant suggestions. To address this gap, we present a deep learning approach for selecting data columns and recommending multiple charts. More importantly, we integrate the deep learning models into a mixed-initiative system. Our model could make recommendations given optional user-input selections of data columns. The model, in turn, learns from provenance data of authoring logs in an offline manner. We compare our deep learning model with existing methods for visualization recommendation and conduct a user study to evaluate the usefulness of the system.",
                "AuthorNamesDeduped": "Aoyu Wu;Yun Wang 0012;Mengyu Zhou;Xinyi He;Haidong Zhang;Huamin Qu;Dongmei Zhang 0001",
                "AuthorNames": "Aoyu Wu;Yun Wang;Mengyu Zhou;Xinyi He;Haidong Zhang;Huamin Qu;Dongmei Zhang",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong and Microsoft Research Area, United States;Microsoft Research Area, United States;Microsoft Research Area, United States;Microsoft Research Area, United States;Microsoft Research Area, United States;Hong Kong University of Science and Technology, Hong Kong;Microsoft Research Area, United States",
                "InternalReferences": "0.1109/tvcg.2020.3030338;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934332;10.1109/tvcg.2018.2865138;10.1109/tvcg.2013.119;10.1109/tvcg.2016.2598620;10.1109/tvcg.2017.2744019;10.1109/tvcg.2018.2865235;10.1109/tvcg.2007.70594;10.1109/tvcg.2020.3030430;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030387;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2020.3030396;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744843;10.1109/tvcg.2019.2934798;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423",
                "AuthorKeywords": "Visualization Recommendation,Deep Learning,Multiple-View,Dashboard,Mixed-Initiative,Visualization Provenance",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 1510,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 279,
                "i": [
                    279
                ]
            }
        },
        {
            "name": "Xinyi He",
            "value": 36,
            "numPapers": 24,
            "cluster": "1",
            "visible": 1,
            "index": 889,
            "x": -271.6019081967054,
            "y": -123.2168960163678,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "MultiVision: Designing Analytical Dashboards with Deep Learning Based Recommendation",
                "DOI": "10.1109/tvcg.2021.3114826",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114826",
                "FirstPage": 162,
                "LastPage": 172,
                "PaperType": "J",
                "Abstract": "We contribute a deep-learning-based method that assists in designing analytical dashboards for analyzing a data table. Given a data table, data workers usually need to experience a tedious and time-consuming process to select meaningful combinations of data columns for creating charts. This process is further complicated by the needs of creating dashboards composed of multiple views that unveil different perspectives of data. Existing automated approaches for recommending multiple-view visualizations mainly build on manually crafted design rules, producing sub-optimal or irrelevant suggestions. To address this gap, we present a deep learning approach for selecting data columns and recommending multiple charts. More importantly, we integrate the deep learning models into a mixed-initiative system. Our model could make recommendations given optional user-input selections of data columns. The model, in turn, learns from provenance data of authoring logs in an offline manner. We compare our deep learning model with existing methods for visualization recommendation and conduct a user study to evaluate the usefulness of the system.",
                "AuthorNamesDeduped": "Aoyu Wu;Yun Wang 0012;Mengyu Zhou;Xinyi He;Haidong Zhang;Huamin Qu;Dongmei Zhang 0001",
                "AuthorNames": "Aoyu Wu;Yun Wang;Mengyu Zhou;Xinyi He;Haidong Zhang;Huamin Qu;Dongmei Zhang",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong and Microsoft Research Area, United States;Microsoft Research Area, United States;Microsoft Research Area, United States;Microsoft Research Area, United States;Microsoft Research Area, United States;Hong Kong University of Science and Technology, Hong Kong;Microsoft Research Area, United States",
                "InternalReferences": "0.1109/tvcg.2020.3030338;10.1109/tvcg.2019.2934810;10.1109/tvcg.2019.2934332;10.1109/tvcg.2018.2865138;10.1109/tvcg.2013.119;10.1109/tvcg.2016.2598620;10.1109/tvcg.2017.2744019;10.1109/tvcg.2018.2865235;10.1109/tvcg.2007.70594;10.1109/tvcg.2020.3030430;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030387;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2020.3030396;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744843;10.1109/tvcg.2019.2934798;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191;10.1109/tvcg.2020.3030423",
                "AuthorKeywords": "Visualization Recommendation,Deep Learning,Multiple-View,Dashboard,Mixed-Initiative,Visualization Provenance",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 1510,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 279,
                "i": [
                    279
                ]
            }
        },
        {
            "name": "Phoebe Moh",
            "value": 19,
            "numPapers": 8,
            "cluster": "5",
            "visible": 1,
            "index": 890,
            "x": 283.66192757340673,
            "y": -92.66019018618167,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "An Evaluation-Focused Framework for Visualization Recommendation Algorithms",
                "DOI": "10.1109/tvcg.2021.3114814",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114814",
                "FirstPage": 346,
                "LastPage": 356,
                "PaperType": "J",
                "Abstract": "Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an <i>evaluation</i> perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios.",
                "AuthorNamesDeduped": "Zehua Zeng;Phoebe Moh;Fan Du;Jane Hoffswell;Tak Yeon Lee;Sana Malik;Eunyee Koh;Leilani Battle",
                "AuthorNames": "Zehua Zeng;Phoebe Moh;Fan Du;Jane Hoffswell;Tak Yeon Lee;Sana Malik;Eunyee Koh;Leilani Battle",
                "AuthorAffiliation": "University of Maryland, United States;University of Maryland, United States;Adobe Research, United States;Adobe Research, United States;Adobe Research, United States and KAIST, South Korea;Adobe Research, United States;Adobe Research, United States;University of Maryland, United States and University of Washington, United States",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2008.137;10.1109/tvcg.2012.219;10.1109/visual.1999.809871;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization Tools,Visualization Recommendation Algorithms",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 939,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 281,
                "i": [
                    281
                ]
            }
        },
        {
            "name": "Tak Yeon Lee",
            "value": 19,
            "numPapers": 8,
            "cluster": "5",
            "visible": 1,
            "index": 891,
            "x": -146.6546928821344,
            "y": 260.08152770938347,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "An Evaluation-Focused Framework for Visualization Recommendation Algorithms",
                "DOI": "10.1109/tvcg.2021.3114814",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114814",
                "FirstPage": 346,
                "LastPage": 356,
                "PaperType": "J",
                "Abstract": "Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an <i>evaluation</i> perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios.",
                "AuthorNamesDeduped": "Zehua Zeng;Phoebe Moh;Fan Du;Jane Hoffswell;Tak Yeon Lee;Sana Malik;Eunyee Koh;Leilani Battle",
                "AuthorNames": "Zehua Zeng;Phoebe Moh;Fan Du;Jane Hoffswell;Tak Yeon Lee;Sana Malik;Eunyee Koh;Leilani Battle",
                "AuthorAffiliation": "University of Maryland, United States;University of Maryland, United States;Adobe Research, United States;Adobe Research, United States;Adobe Research, United States and KAIST, South Korea;Adobe Research, United States;Adobe Research, United States;University of Maryland, United States and University of Washington, United States",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2008.137;10.1109/tvcg.2012.219;10.1109/visual.1999.809871;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization Tools,Visualization Recommendation Algorithms",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 939,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 281,
                "i": [
                    281
                ]
            }
        },
        {
            "name": "Sana Malik",
            "value": 41,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 892,
            "x": -67.58181289984157,
            "y": -291.00291848222207,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "An Evaluation-Focused Framework for Visualization Recommendation Algorithms",
                "DOI": "10.1109/tvcg.2021.3114814",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114814",
                "FirstPage": 346,
                "LastPage": 356,
                "PaperType": "J",
                "Abstract": "Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an <i>evaluation</i> perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios.",
                "AuthorNamesDeduped": "Zehua Zeng;Phoebe Moh;Fan Du;Jane Hoffswell;Tak Yeon Lee;Sana Malik;Eunyee Koh;Leilani Battle",
                "AuthorNames": "Zehua Zeng;Phoebe Moh;Fan Du;Jane Hoffswell;Tak Yeon Lee;Sana Malik;Eunyee Koh;Leilani Battle",
                "AuthorAffiliation": "University of Maryland, United States;University of Maryland, United States;Adobe Research, United States;Adobe Research, United States;Adobe Research, United States and KAIST, South Korea;Adobe Research, United States;Adobe Research, United States;University of Maryland, United States and University of Washington, United States",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2008.137;10.1109/tvcg.2012.219;10.1109/visual.1999.809871;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865240;10.1109/tvcg.2007.70577;10.1109/tvcg.2019.2934398;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization Tools,Visualization Recommendation Algorithms",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 939,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 281,
                "i": [
                    281
                ]
            }
        },
        {
            "name": "P. Samuel Quinan",
            "value": 76,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 893,
            "x": 246.54037449267082,
            "y": 169.02024655352278,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX",
                "DOI": "10.1109/tvcg.2021.3114803",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114803",
                "FirstPage": 411,
                "LastPage": 421,
                "PaperType": "J",
                "Abstract": "As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.",
                "AuthorNamesDeduped": "Spencer C. Castro;P. Samuel Quinan;Helia Hosseinpour;Lace M. K. Padilla",
                "AuthorNames": "Spencer C. Castro;P. Samuel Quinan;Helia Hosseinpour;Lace Padilla",
                "AuthorAffiliation": "University of California Merced in Management of Complex Systems, United States;University of Utah School of Computing, United States;University of California Merced in Cognitive and Information Sciences, United States;University of California Merced in Cognitive and Information Sciences, United States",
                "InternalReferences": "0.1109/tvcg.2014.2346298;10.1109/tvcg.2017.2743898;10.1109/tvcg.2018.2864889;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865193;10.1109/tvcg.2012.279;10.1109/tvcg.2019.2934286",
                "AuthorKeywords": "Uncertainty Visualization,Working Memory,Individual Differences,Online OSPAN,Effort,Workload,NASA-TLX",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 713,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 282,
                "i": [
                    282
                ]
            }
        },
        {
            "name": "Helia Hosseinpour",
            "value": 21,
            "numPapers": 7,
            "cluster": "5",
            "visible": 1,
            "index": 894,
            "x": -296.12830899249303,
            "y": 41.92880409988545,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX",
                "DOI": "10.1109/tvcg.2021.3114803",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114803",
                "FirstPage": 411,
                "LastPage": 421,
                "PaperType": "J",
                "Abstract": "As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.",
                "AuthorNamesDeduped": "Spencer C. Castro;P. Samuel Quinan;Helia Hosseinpour;Lace M. K. Padilla",
                "AuthorNames": "Spencer C. Castro;P. Samuel Quinan;Helia Hosseinpour;Lace Padilla",
                "AuthorAffiliation": "University of California Merced in Management of Complex Systems, United States;University of Utah School of Computing, United States;University of California Merced in Cognitive and Information Sciences, United States;University of California Merced in Cognitive and Information Sciences, United States",
                "InternalReferences": "0.1109/tvcg.2014.2346298;10.1109/tvcg.2017.2743898;10.1109/tvcg.2018.2864889;10.1109/tvcg.2020.3030335;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2865193;10.1109/tvcg.2012.279;10.1109/tvcg.2019.2934286",
                "AuthorKeywords": "Uncertainty Visualization,Working Memory,Individual Differences,Online OSPAN,Effort,Workload,NASA-TLX",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 93,
                "DownloadsXplore": 713,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 282,
                "i": [
                    282
                ]
            }
        },
        {
            "name": "Jian Chen 0006",
            "value": 340,
            "numPapers": 63,
            "cluster": "5",
            "visible": 1,
            "index": 895,
            "x": 190.1394922748914,
            "y": -231.07785155104443,
            "vy": 0,
            "vx": 0,
            "r": 1.3914795624640184,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "A Systematic Review on the Practice of Evaluating Visualization",
                "DOI": "10.1109/tvcg.2013.126",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.126",
                "FirstPage": 2818,
                "LastPage": 2827,
                "PaperType": "J",
                "Abstract": "We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers using and extending a coding scheme previously established by Lam et al. [2012]. The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align to those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (with consistently 80-90% of all papers since 1997). However, especially over the last six years there is a steady increase in evaluation methods that include participants, either by evaluating their performances and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, also papers in IEEE Information Visualization show such an increase of evaluations of work practices and analysis as well as reasoning using visual tools. Further, we found that generally the studies reporting requirements analyses and domain-specific work practices are too informally reported which hinders cross-comparison and lowers external validity.",
                "AuthorNamesDeduped": "Tobias Isenberg 0001;Petra Isenberg;Jian Chen 0006;Michael Sedlmair;Torsten Möller",
                "AuthorNames": "Tobias Isenberg;Petra Isenberg;Jian Chen;Michael Sedlmair;Torsten Möller",
                "AuthorAffiliation": "INRIA, France;INRIA, France;University of Maryland, Baltimore, USA;University of Vienna, Austria;University of Vienna, Austria",
                "InternalReferences": "0.1109/tvcg.2009.121;10.1109/visual.2005.1532781;10.1109/tvcg.2006.143;10.1109/tvcg.2011.224;10.1109/tvcg.2010.199;10.1109/tvcg.2010.223;10.1109/tvcg.2012.213;10.1109/tvcg.2010.134;10.1109/tvcg.2009.194;10.1109/tvcg.2011.174;10.1109/tvcg.2009.111;10.1109/tvcg.2011.206;10.1109/tvcg.2012.234;10.1109/tvcg.2012.292;10.1109/tvcg.2008.128;10.1109/tvcg.2009.167;10.1109/tvcg.2012.223;10.1109/visual.1994.346285",
                "AuthorKeywords": "Evaluation, validation, systematic review, visualization, scientific visualization, information visualization",
                "AminerCitationCount": 353,
                "CitationCountCrossRef": 217,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 6302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1334,
                "i": [
                    1334
                ]
            }
        },
        {
            "name": "Nuo Chen",
            "value": 21,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 896,
            "x": 15.896770336229308,
            "y": 298.99380042548904,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards Visual Explainable Active Learning for Zero-Shot Classification",
                "DOI": "10.1109/tvcg.2021.3114793",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114793",
                "FirstPage": 791,
                "LastPage": 801,
                "PaperType": "J",
                "Abstract": "Zero-shot classification is a promising paradigm to solve an applicable problem when the training classes and test classes are disjoint. Achieving this usually needs experts to externalize their domain knowledge by manually specifying a class-attribute matrix to define which classes have which attributes. Designing a suitable class-attribute matrix is the key to the subsequent procedure, but this design process is tedious and trial-and-error with no guidance. This paper proposes a visual explainable active learning approach with its design and implementation called semantic navigator to solve the above problems. This approach promotes human-AI teaming with four actions (ask, explain, recommend, respond) in each interaction loop. The machine asks contrastive questions to guide humans in the thinking process of attributes. A novel visualization called semantic map explains the current status of the machine. Therefore analysts can better understand why the machine misclassifies objects. Moreover, the machine recommends the labels of classes for each attribute to ease the labeling burden. Finally, humans can steer the model by modifying the labels interactively, and the machine adjusts its recommendations. The visual explainable active learning approach improves humans' efficiency of building zero-shot classification models interactively, compared with the method without guidance. We justify our results with user studies using the standard benchmarks for zero-shot classification.",
                "AuthorNamesDeduped": "Shichao Jia;Zeyu Li 0003;Nuo Chen;Jiawan Zhang",
                "AuthorNames": "Shichao Jia;Zeyu Li;Nuo Chen;Jiawan Zhang",
                "AuthorAffiliation": "College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China;College of Intelligence and Computing, Tianjin University, China and Tianjin cultural heritage conservation and inheritance engineering technology center and Key Research Center for Surface Monitoring and Analysis of Relics, State Administration of Cultural Heritage, China",
                "InternalReferences": "0.1109/tvcg.2017.2744818;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865047;10.1109/tvcg.2012.260;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2017.2744378;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2019.2934267;10.1109/tvcg.2017.2744805;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2015.2467191;10.1109/vast47406.2019.8986943;10.1109/vast.2012.6400486",
                "AuthorKeywords": "Active Learning,Explainable Artificial Intelligence,Human-AI Teaming,Mixed-Initiative Visual Analytics",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1559,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 284,
                "i": [
                    284
                ]
            }
        },
        {
            "name": "Huihua Lu",
            "value": 35,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 897,
            "x": -213.8083404189332,
            "y": -209.84754839478487,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Seek for Success: A Visualization Approach for Understanding the Dynamics of Academic Careers",
                "DOI": "10.1109/tvcg.2021.3114790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114790",
                "FirstPage": 475,
                "LastPage": 485,
                "PaperType": "J",
                "Abstract": "How to achieve academic career success has been a long-standing research question in social science research. With the growing availability of large-scale well-documented academic profiles and career trajectories, scholarly interest in career success has been reinvigorated, which has emerged to be an active research domain called the Science of Science (i.e., SciSci). In this study, we adopt an innovative dynamic perspective to examine how individual and social factors will influence career success over time. We propose <i>ACSeeker</i>, an interactive visual analytics approach to explore the potential factors of success and how the influence of multiple factors changes at different stages of academic careers. We first applied a Multi-factor Impact Analysis framework to estimate the effect of different factors on academic career success over time. We then developed a visual analytics system to understand the dynamic effects interactively. A novel timeline is designed to reveal and compare the factor impacts based on the whole population. A customized career line showing the individual career development is provided to allow a detailed inspection. To validate the effectiveness and usability of <i>ACSeeker</i>, we report two case studies and interviews with a social scientist and general researchers.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Tai-Quan Peng;Huihua Lu;Haoren Wang;Xiao Xie;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Yifang Wang;Tai-Quan Peng;Huihua Lu;Haoren Wang;Xiao Xie;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University and the Hong Kong University of Science and Technology, China;Michigan State University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2020.3030442;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2015.2467620;10.1109/tvcg.2019.2934267;10.1109/tvcg.2009.111;10.1109/vast47406.2019.8986934;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/vast50239.2020.00009;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2015.2468151;10.1109/tvcg.2014.2346913;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934656;10.1109/tvcg.2019.2934630",
                "AuthorKeywords": "Career Analysis,Academic Profiles,Science of Science,Publication Data,Citation Data,Sequence Analysis",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 1149,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 285,
                "i": [
                    285
                ]
            }
        },
        {
            "name": "Haoren Wang",
            "value": 35,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 898,
            "x": 299.57235165350124,
            "y": 10.315334448818758,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Seek for Success: A Visualization Approach for Understanding the Dynamics of Academic Careers",
                "DOI": "10.1109/tvcg.2021.3114790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114790",
                "FirstPage": 475,
                "LastPage": 485,
                "PaperType": "J",
                "Abstract": "How to achieve academic career success has been a long-standing research question in social science research. With the growing availability of large-scale well-documented academic profiles and career trajectories, scholarly interest in career success has been reinvigorated, which has emerged to be an active research domain called the Science of Science (i.e., SciSci). In this study, we adopt an innovative dynamic perspective to examine how individual and social factors will influence career success over time. We propose <i>ACSeeker</i>, an interactive visual analytics approach to explore the potential factors of success and how the influence of multiple factors changes at different stages of academic careers. We first applied a Multi-factor Impact Analysis framework to estimate the effect of different factors on academic career success over time. We then developed a visual analytics system to understand the dynamic effects interactively. A novel timeline is designed to reveal and compare the factor impacts based on the whole population. A customized career line showing the individual career development is provided to allow a detailed inspection. To validate the effectiveness and usability of <i>ACSeeker</i>, we report two case studies and interviews with a social scientist and general researchers.",
                "AuthorNamesDeduped": "Yifang Wang 0001;Tai-Quan Peng;Huihua Lu;Haoren Wang;Xiao Xie;Huamin Qu;Yingcai Wu",
                "AuthorNames": "Yifang Wang;Tai-Quan Peng;Huihua Lu;Haoren Wang;Xiao Xie;Huamin Qu;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University and the Hong Kong University of Science and Technology, China;Michigan State University, China;State Key Lab of CAD&CG, Zhejiang University, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sport Science, Zhejiang University, China;Hong Kong University of Science and Technology, Hong Kong;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2020.3030442;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2015.2467620;10.1109/tvcg.2019.2934267;10.1109/tvcg.2009.111;10.1109/vast47406.2019.8986934;10.1109/tvcg.2020.3030467;10.1109/tvcg.2018.2864899;10.1109/vast50239.2020.00009;10.1109/tvcg.2021.3114832;10.1109/tvcg.2017.2744218;10.1109/tvcg.2015.2468151;10.1109/tvcg.2014.2346913;10.1109/tvcg.2020.3028957;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934656;10.1109/tvcg.2019.2934630",
                "AuthorKeywords": "Career Analysis,Academic Profiles,Science of Science,Publication Data,Citation Data,Sequence Analysis",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 1149,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 285,
                "i": [
                    285
                ]
            }
        },
        {
            "name": "Sixiao Yang",
            "value": 132,
            "numPapers": 8,
            "cluster": "1",
            "visible": 1,
            "index": 899,
            "x": -227.99000390630707,
            "y": 194.86035543127315,
            "vy": 0,
            "vx": 0,
            "r": 1.151986183074266,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "egoSlider: Visual Analysis of Egocentric Network Evolution",
                "DOI": "10.1109/tvcg.2015.2468151",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2468151",
                "FirstPage": 260,
                "LastPage": 269,
                "PaperType": "J",
                "Abstract": "Ego-network, which represents relationships between a specific individual, i.e., the ego, and people connected to it, i.e., alters, is a critical target to study in social network analysis. Evolutionary patterns of ego-networks along time provide huge insights to many domains such as sociology, anthropology, and psychology. However, the analysis of dynamic ego-networks remains challenging due to its complicated time-varying graph structures, for example: alters come and leave, ties grow stronger and fade away, and alter communities merge and split. Most of the existing dynamic graph visualization techniques mainly focus on topological changes of the entire network, which is not adequate for egocentric analytical tasks. In this paper, we present egoSlider, a visual analysis system for exploring and comparing dynamic ego-networks. egoSlider provides a holistic picture of the data through multiple interactively coordinated views, revealing ego-network evolutionary patterns at three different layers: a macroscopic level for summarizing the entire ego-network data, a mesoscopic level for overviewing specific individuals' ego-network evolutions, and a microscopic level for displaying detailed temporal information of egos and their alters. We demonstrate the effectiveness of egoSlider with a usage scenario with the DBLP publication records. Also, a controlled user study indicates that in general egoSlider outperforms a baseline visualization of dynamic networks for completing egocentric analytical tasks.",
                "AuthorNamesDeduped": "Yanhong Wu;Naveen Pitipornvivat;Jian Zhao 0010;Sixiao Yang;Guowei Huang 0002;Huamin Qu",
                "AuthorNames": "Yanhong Wu;Naveen Pitipornvivat;Jian Zhao;Sixiao Yang;Guowei Huang;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Hong Kong University of Science and Technology;Autodesk Research;Huawei Technologies Co. Ltd.;Huawei Technologies Co. Ltd.;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2011.169;10.1109/tvcg.2011.226;10.1109/tvcg.2006.147;10.1109/tvcg.2013.149",
                "AuthorKeywords": "Egocentric network, dynamic graph, network visualization, glyph-based design, visual analytics",
                "AminerCitationCount": 97,
                "CitationCountCrossRef": 70,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 2440,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1106,
                "i": [
                    1106
                ]
            }
        },
        {
            "name": "Ryan Wesslen",
            "value": 61,
            "numPapers": 43,
            "cluster": "5",
            "visible": 1,
            "index": 900,
            "x": 36.5067304495805,
            "y": -297.8544252346801,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VITALITY: Promoting Serendipitous Discovery of Academic Literature with Transformers &amp; Visual Analytics",
                "DOI": "10.1109/tvcg.2021.3114820",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114820",
                "FirstPage": 486,
                "LastPage": 496,
                "PaperType": "J",
                "Abstract": "There are a few prominent practices for conducting reviews of academic literature, including searching for specific keywords on Google Scholar or checking citations from some initial seed paper(s). These approaches serve a critical purpose for academic literature reviews, yet there remain challenges in identifying relevant literature when similar work may utilize different terminology (e.g., mixed-initiative visual analytics papers may not use the same terminology as papers on model-steering, yet the two topics are relevant to one another). In this paper, we introduce a system, VITALITY, intended to complement existing practices. In particular, VITALITY promotes serendipitous discovery of relevant literature using transformer language models, allowing users to find semantically similar papers in a word embedding space given (1) a list of input paper(s) or (2) a working abstract. VITALITY visualizes this document-level embedding space in an interactive 2-D scatterplot using dimension reduction. VITALITY also summarizes meta information about the document corpus or search query, including keywords and co-authors, and allows users to save and export papers for use in a literature review. We present qualitative findings from an evaluation of VITALITY, suggesting it can be a promising complementary technique for conducting academic literature reviews. Furthermore, we contribute data from 38 popular data visualization publication venues in VITALITY, and we provide scrapers for the open-source community to continue to grow the list of supported venues.",
                "AuthorNamesDeduped": "Arpit Narechania;Alireza Karduni;Ryan Wesslen;Emily Wall",
                "AuthorNames": "Arpit Narechania;Alireza Karduni;Ryan Wesslen;Emily Wall",
                "AuthorAffiliation": "Georgia Tech., United States;UNC-Charlotte, United States;UNC-Charlotte, United States;Emory University, United States and Northwestern University, United States",
                "InternalReferences": "0.1109/vast.2014.7042493;10.1109/tvcg.2015.2467757;10.1109/tvcg.2018.2865233;10.1109/tvcg.2016.2598594;10.1109/tvcg.2013.162;10.1109/tvcg.2017.2745080;10.1109/vast.2011.6102449;10.1109/tvcg.2017.2746018;10.1109/tvcg.2015.2467621;10.1109/tvcg.2015.2467452;10.1109/tvcg.2019.2934287;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598827;10.1109/tvcg.2021.3114827;10.1109/tvcg.2017.2744478;10.1109/tvcg.2017.2744138;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862",
                "AuthorKeywords": "transformers,word embeddings,literature review,web scraper,dataset,visual analytics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 902,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 290,
                "i": [
                    290
                ]
            }
        },
        {
            "name": "Alireza Karduni",
            "value": 50,
            "numPapers": 41,
            "cluster": "5",
            "visible": 1,
            "index": 901,
            "x": 174.37558744631093,
            "y": 244.4241283154222,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VITALITY: Promoting Serendipitous Discovery of Academic Literature with Transformers &amp; Visual Analytics",
                "DOI": "10.1109/tvcg.2021.3114820",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114820",
                "FirstPage": 486,
                "LastPage": 496,
                "PaperType": "J",
                "Abstract": "There are a few prominent practices for conducting reviews of academic literature, including searching for specific keywords on Google Scholar or checking citations from some initial seed paper(s). These approaches serve a critical purpose for academic literature reviews, yet there remain challenges in identifying relevant literature when similar work may utilize different terminology (e.g., mixed-initiative visual analytics papers may not use the same terminology as papers on model-steering, yet the two topics are relevant to one another). In this paper, we introduce a system, VITALITY, intended to complement existing practices. In particular, VITALITY promotes serendipitous discovery of relevant literature using transformer language models, allowing users to find semantically similar papers in a word embedding space given (1) a list of input paper(s) or (2) a working abstract. VITALITY visualizes this document-level embedding space in an interactive 2-D scatterplot using dimension reduction. VITALITY also summarizes meta information about the document corpus or search query, including keywords and co-authors, and allows users to save and export papers for use in a literature review. We present qualitative findings from an evaluation of VITALITY, suggesting it can be a promising complementary technique for conducting academic literature reviews. Furthermore, we contribute data from 38 popular data visualization publication venues in VITALITY, and we provide scrapers for the open-source community to continue to grow the list of supported venues.",
                "AuthorNamesDeduped": "Arpit Narechania;Alireza Karduni;Ryan Wesslen;Emily Wall",
                "AuthorNames": "Arpit Narechania;Alireza Karduni;Ryan Wesslen;Emily Wall",
                "AuthorAffiliation": "Georgia Tech., United States;UNC-Charlotte, United States;UNC-Charlotte, United States;Emory University, United States and Northwestern University, United States",
                "InternalReferences": "0.1109/vast.2014.7042493;10.1109/tvcg.2015.2467757;10.1109/tvcg.2018.2865233;10.1109/tvcg.2016.2598594;10.1109/tvcg.2013.162;10.1109/tvcg.2017.2745080;10.1109/vast.2011.6102449;10.1109/tvcg.2017.2746018;10.1109/tvcg.2015.2467621;10.1109/tvcg.2015.2467452;10.1109/tvcg.2019.2934287;10.1109/tvcg.2011.175;10.1109/tvcg.2016.2598827;10.1109/tvcg.2021.3114827;10.1109/tvcg.2017.2744478;10.1109/tvcg.2017.2744138;10.1109/vast.2017.8585669;10.1109/tvcg.2021.3114862",
                "AuthorKeywords": "transformers,word embeddings,literature review,web scraper,dataset,visual analytics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 902,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 290,
                "i": [
                    290
                ]
            }
        },
        {
            "name": "Chao Han",
            "value": 117,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 902,
            "x": -293.8480994920042,
            "y": -62.476350925267575,
            "vy": 0,
            "vx": 0,
            "r": 1.1347150259067358,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Observation-level interaction with statistical models for visual analytics",
                "DOI": "10.1109/vast.2011.6102449",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102449",
                "FirstPage": 121,
                "LastPage": 130,
                "PaperType": "C",
                "Abstract": "In visual analytics, sensemaking is facilitated through interactive visual exploration of data. Throughout this dynamic process, users combine their domain knowledge with the dataset to create insight. Therefore, visual analytic tools exist that aid sensemaking by providing various interaction techniques that focus on allowing users to change the visual representation through adjusting parameters of the underlying statistical model. However, we postulate that the process of sensemaking is not focused on a series of parameter adjustments, but instead, a series of perceived connections and patterns within the data. Thus, how can models for visual analytic tools be designed, so that users can express their reasoning on observations (the data), instead of directly on the model or tunable parameters? Observation level (and thus “observation”) in this paper refers to the data points within a visualization. In this paper, we explore two possible observation-level interactions, namely exploratory and expressive, within the context of three statistical methods, Probabilistic Principal Component Analysis (PPCA), Multidimensional Scaling (MDS), and Generative Topographic Mapping (GTM). We discuss the importance of these two types of observation level interactions, in terms of how they occur within the sensemaking process. Further, we present use cases for GTM, MDS, and PPCA, illustrating how observation level interaction can be incorporated into visual analytic tools.",
                "AuthorNamesDeduped": "Alex Endert;Chao Han;Dipayan Maiti;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "Alex Endert;Chao Han;Dipayan Maiti;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA",
                "InternalReferences": null,
                "AuthorKeywords": "observation-level interaction, visual analytics, statistical models",
                "AminerCitationCount": 170,
                "CitationCountCrossRef": 99,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 1100,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1580,
                "i": [
                    1580
                ]
            }
        },
        {
            "name": "Dipayan Maiti",
            "value": 147,
            "numPapers": 5,
            "cluster": "4",
            "visible": 1,
            "index": 903,
            "x": 259.01999435745785,
            "y": -152.50784413617063,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Observation-level interaction with statistical models for visual analytics",
                "DOI": "10.1109/vast.2011.6102449",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102449",
                "FirstPage": 121,
                "LastPage": 130,
                "PaperType": "C",
                "Abstract": "In visual analytics, sensemaking is facilitated through interactive visual exploration of data. Throughout this dynamic process, users combine their domain knowledge with the dataset to create insight. Therefore, visual analytic tools exist that aid sensemaking by providing various interaction techniques that focus on allowing users to change the visual representation through adjusting parameters of the underlying statistical model. However, we postulate that the process of sensemaking is not focused on a series of parameter adjustments, but instead, a series of perceived connections and patterns within the data. Thus, how can models for visual analytic tools be designed, so that users can express their reasoning on observations (the data), instead of directly on the model or tunable parameters? Observation level (and thus “observation”) in this paper refers to the data points within a visualization. In this paper, we explore two possible observation-level interactions, namely exploratory and expressive, within the context of three statistical methods, Probabilistic Principal Component Analysis (PPCA), Multidimensional Scaling (MDS), and Generative Topographic Mapping (GTM). We discuss the importance of these two types of observation level interactions, in terms of how they occur within the sensemaking process. Further, we present use cases for GTM, MDS, and PPCA, illustrating how observation level interaction can be incorporated into visual analytic tools.",
                "AuthorNamesDeduped": "Alex Endert;Chao Han;Dipayan Maiti;Leanna House;Scotland Leman;Chris North 0001",
                "AuthorNames": "Alex Endert;Chao Han;Dipayan Maiti;Leanna House;Scotland Leman;Chris North",
                "AuthorAffiliation": "Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Statistics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA;Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA",
                "InternalReferences": null,
                "AuthorKeywords": "observation-level interaction, visual analytics, statistical models",
                "AminerCitationCount": 170,
                "CitationCountCrossRef": 99,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 1100,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1580,
                "i": [
                    1580
                ]
            }
        },
        {
            "name": "Adam Coscia",
            "value": 54,
            "numPapers": 23,
            "cluster": "5",
            "visible": 1,
            "index": 904,
            "x": -88.02438672446385,
            "y": 287.5790453801912,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Left, Right, and Gender: Exploring Interaction Traces to Mitigate Human Biases",
                "DOI": "10.1109/tvcg.2021.3114862",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114862",
                "FirstPage": 966,
                "LastPage": 975,
                "PaperType": "J",
                "Abstract": "Human biases impact the way people analyze data and make decisions. Recent work has shown that some visualization designs can better support cognitive processes and mitigate cognitive biases (i.e., errors that occur due to the use of mental “shortcuts”). In this work, we explore how visualizing a user's interaction history (i.e., which data points and attributes a user has interacted with) can be used to mitigate potential biases that drive decision making by promoting conscious reflection of one's analysis process. Given an interactive scatterplot-based visualization tool, we showed interaction history in real-time while exploring data (by coloring points in the scatterplot that the user has interacted with), and in a summative format after a decision has been made (by comparing the distribution of user interactions to the underlying distribution of the data). We conducted a series of in-lab experiments and a crowd-sourced experiment to evaluate the effectiveness of interaction history interventions toward mitigating bias. We contextualized this work in a political scenario in which participants were instructed to choose a committee of 10 fictitious politicians to review a recent bill passed in the U.S. state of Georgia banning abortion after 6 weeks, where things like gender bias or political party bias may drive one's analysis process. We demonstrate the generalizability of this approach by evaluating a second decision making scenario related to movies. Our results are inconclusive for the effectiveness of interaction history (henceforth referred to as interaction traces) toward mitigating biased decision making. However, we find some mixed support that interaction traces, particularly in a summative format, can increase awareness of potential unconscious biases.",
                "AuthorNamesDeduped": "Emily Wall;Arpit Narechania;Adam Coscia;Jamal Paden;Alex Endert",
                "AuthorNames": "Emily Wall;Arpit Narechania;Adam Coscia;Jamal Paden;Alex Endert",
                "AuthorAffiliation": "Emory University, US;Georgia Tech, US;Georgia Tech, US;Georgia Tech, US;Georgia Tech, US",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/tvcg.2016.2598468;10.1109/vast.2017.8585665;10.1109/tvcg.2018.2865233;10.1109/tvcg.2016.2598594;10.1109/tvcg.2016.2599058;10.1109/tvcg.2018.2865117;10.1109/visual.2000.885678;10.1109/tvcg.2020.3030430;10.1109/tvcg.2021.3114827;10.1109/tvcg.2017.2744138;10.1109/vast.2017.8585669;10.1109/tvcg.2007.70589",
                "AuthorKeywords": "Human bias,bias mitigation,decision making,visual data analysis",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 648,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 299,
                "i": [
                    299
                ]
            }
        },
        {
            "name": "Leslie M. Blaha",
            "value": 56,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 905,
            "x": -129.4218947725043,
            "y": -271.66150473244244,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Warning, Bias May Occur: A Proposed Approach to Detecting Cognitive Bias in Interactive Visual Analytics",
                "DOI": "10.1109/vast.2017.8585669",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585669",
                "FirstPage": 104,
                "LastPage": 115,
                "PaperType": "C",
                "Abstract": "Visual analytic tools combine the complementary strengths of humans and machines in human-in-the-loop systems. Humans provide invaluable domain expertise and sensemaking capabilities to this discourse with analytic models; however, little consideration has yet been given to the ways inherent human biases might shape the visual analytic process. In this paper, we establish a conceptual framework for considering bias assessment through human-in-the-loop systems and lay the theoretical foundations for bias measurement. We propose six preliminary metrics to systematically detect and quantify bias from user interactions and demonstrate how the metrics might be implemented in an existing visual analytic system, InterAxis. We discuss how our proposed metrics could be used by visual analytic systems to mitigate the negative effects of cognitive biases by making users aware of biased processes throughout their analyses.",
                "AuthorNamesDeduped": "Emily Wall;Leslie M. Blaha;Lyndsey Franklin;Alex Endert",
                "AuthorNames": "Emily Wall;Leslie M. Blaha;Lyndsey Franklin;Alex Endert",
                "AuthorAffiliation": "Georgia Tech;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Georgia Tech",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2015.7347625;10.1109/tvcg.2016.2598594;10.1109/vast.2011.6102449;10.1109/tvcg.2016.2599058;10.1109/vast.2008.4677365;10.1109/vast.2008.4677361;10.1109/visual.2000.885678;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2012.273;10.1109/tvcg.2015.2467551;10.1109/tvcg.2015.2467591;10.1109/tvcg.2014.2346481;10.1109/tvcg.2016.2598466;10.1109/tvcg.2017.2745078;10.1109/tvcg.2007.70589;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "cognitive bias,visual analytics,human-in-the-loop,mixed initiative,user interaction,H.5.0 [Information Systems]: Human-Computer Interaction-General",
                "AminerCitationCount": 115,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 1610,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 847,
                "i": [
                    847
                ]
            }
        },
        {
            "name": "Lyndsey Franklin",
            "value": 95,
            "numPapers": 31,
            "cluster": "5",
            "visible": 1,
            "index": 906,
            "x": 279.0903680228409,
            "y": 112.95382453407784,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Warning, Bias May Occur: A Proposed Approach to Detecting Cognitive Bias in Interactive Visual Analytics",
                "DOI": "10.1109/vast.2017.8585669",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585669",
                "FirstPage": 104,
                "LastPage": 115,
                "PaperType": "C",
                "Abstract": "Visual analytic tools combine the complementary strengths of humans and machines in human-in-the-loop systems. Humans provide invaluable domain expertise and sensemaking capabilities to this discourse with analytic models; however, little consideration has yet been given to the ways inherent human biases might shape the visual analytic process. In this paper, we establish a conceptual framework for considering bias assessment through human-in-the-loop systems and lay the theoretical foundations for bias measurement. We propose six preliminary metrics to systematically detect and quantify bias from user interactions and demonstrate how the metrics might be implemented in an existing visual analytic system, InterAxis. We discuss how our proposed metrics could be used by visual analytic systems to mitigate the negative effects of cognitive biases by making users aware of biased processes throughout their analyses.",
                "AuthorNamesDeduped": "Emily Wall;Leslie M. Blaha;Lyndsey Franklin;Alex Endert",
                "AuthorNames": "Emily Wall;Leslie M. Blaha;Lyndsey Franklin;Alex Endert",
                "AuthorAffiliation": "Georgia Tech;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Georgia Tech",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2015.7347625;10.1109/tvcg.2016.2598594;10.1109/vast.2011.6102449;10.1109/tvcg.2016.2599058;10.1109/vast.2008.4677365;10.1109/vast.2008.4677361;10.1109/visual.2000.885678;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2012.273;10.1109/tvcg.2015.2467551;10.1109/tvcg.2015.2467591;10.1109/tvcg.2014.2346481;10.1109/tvcg.2016.2598466;10.1109/tvcg.2017.2745078;10.1109/tvcg.2007.70589;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "cognitive bias,visual analytics,human-in-the-loop,mixed initiative,user interaction,H.5.0 [Information Systems]: Human-Computer Interaction-General",
                "AminerCitationCount": 115,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 1610,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 847,
                "i": [
                    847
                ]
            }
        },
        {
            "name": "Timothy F. Brady",
            "value": 22,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 907,
            "x": -282.2473147496866,
            "y": 105.29222818703826,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Rethinking the Ranks of Visual Channels",
                "DOI": "10.1109/tvcg.2021.3114684",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114684",
                "FirstPage": 707,
                "LastPage": 717,
                "PaperType": "J",
                "Abstract": "Data can be visually represented using visual channels like position, length or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations, compared to seeing, remembering, and comparing trends and motifs, across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or ‘wind map’ (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4 or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors), or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful, to thousands).",
                "AuthorNamesDeduped": "Caitlyn M. McColeman;Fumeng Yang;Timothy F. Brady;Steven Franconeri",
                "AuthorNames": "Caitlyn M. McColeman;Fumeng Yang;Timothy F. Brady;Steven Franconeri",
                "AuthorAffiliation": "Northwestern University, USA;Brown University, USA;University of San Diego, USA;Northwestern University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467732;10.1109/tvcg.2020.3030422;10.1109/tvcg.2010.132;10.1109/tvcg.2014.2346979;10.1109/tvcg.2019.2934786;10.1109/tvcg.2020.3030335;10.1109/tvcg.2015.2467671;10.1109/tvcg.2020.3030345;10.1109/tvcg.2018.2865240;10.1109/tvcg.2019.2934801;10.1109/tvcg.2018.2864884;10.1109/tvcg.2020.3030429;10.1109/tvcg.2015.2467758;10.1109/tvcg.2020.3030421;10.1109/tvcg.2018.2865264;10.1109/tvcg.2017.2744359;10.1109/tvcg.2014.2346320",
                "AuthorKeywords": "DataType Agnostic,Human-Subjects Quantitative Studies,Perception & Cognition,Charts, Diagrams, and Plots",
                "AminerCitationCount": 10,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 87,
                "DownloadsXplore": 878,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 291,
                "i": [
                    291
                ]
            }
        },
        {
            "name": "Kylie Lin",
            "value": 27,
            "numPapers": 8,
            "cluster": "5",
            "visible": 1,
            "index": 908,
            "x": 137.0719670035803,
            "y": -268.44231384371835,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Visual Arrangements of Bar Charts Influence Comparisons in Viewer Takeaways",
                "DOI": "10.1109/tvcg.2021.3114823",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114823",
                "FirstPage": 955,
                "LastPage": 965,
                "PaperType": "J",
                "Abstract": "Well-designed data visualizations can lead to more powerful and intuitive processing by a viewer. To help a viewer intuitively compare values to quickly generate key takeaways, visualization designers can manipulate how data values are arranged in a chart to afford particular comparisons. Using simple bar charts as a case study, we empirically tested the comparison affordances of four common arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. We asked participants to type out what patterns they perceived in a chart and we coded their takeaways into types of comparisons. In a second study, we asked data visualization design experts to predict which arrangement they would use to afford each type of comparison and found both alignments and mismatches with our findings. These results provide concrete guidelines for how both human designers and automatic chart recommendation systems can make visualizations that help viewers extract the “right” takeaway.",
                "AuthorNamesDeduped": "Cindy Xiong;Vidya Setlur;Benjamin Bach;Eunyee Koh;Kylie Lin;Steven Franconeri",
                "AuthorNames": "Cindy Xiong;Vidya Setlur;Benjamin Bach;Eunyee Koh;Kylie Lin;Steven Franconeri",
                "AuthorAffiliation": "UMass Amherst, United States;Tableau Research, United States;University of Edinburgh, United Kingdom;Adobe Research, United States;Northwestern University, United States;Northwestern University, United States",
                "InternalReferences": "0.1109/tvcg.2007.70556;10.1109/tvcg.2019.2934786;10.1109/tvcg.2016.2598920;10.1109/tvcg.2011.194;10.1109/tvcg.2007.70594;10.1109/tvcg.2019.2934801;10.1109/tvcg.2018.2864884;10.1109/tvcg.2017.2744198;10.1109/tvcg.2019.2934399",
                "AuthorKeywords": "Comparison,perception,visual grouping,bar charts,recommendation systems,natural language interaction",
                "AminerCitationCount": 10,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 75,
                "DownloadsXplore": 1056,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 292,
                "i": [
                    292
                ]
            }
        },
        {
            "name": "Mikayla Biggs",
            "value": 22,
            "numPapers": 17,
            "cluster": "0",
            "visible": 1,
            "index": 909,
            "x": 80.30173327406904,
            "y": 290.69164355581376,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Lisanne van Dijk",
            "value": 22,
            "numPapers": 17,
            "cluster": "0",
            "visible": 1,
            "index": 910,
            "x": -255.7118452528222,
            "y": -160.19192300923518,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy",
                "DOI": "10.1109/tvcg.2021.3114810",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114810",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.",
                "AuthorNamesDeduped": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne van Dijk;Abdallah Sherif Radwan Mohamed;Clifton David Fuller;G. Elisabeta Marai",
                "AuthorNames": "Carla Floricel;Nafiul Nipu;Mikayla Biggs;Andrew Wentzel;Guadalupe Canahuate;Lisanne Van Dijk;Abdallah Mohamed;C.David Fuller;G.Elisabeta Marai",
                "AuthorAffiliation": "University of Illinois, Chicago, USA;University of Illinois, Chicago, USA;University of Iowa, USA;University of Illinois, Chicago, USA;University of Iowa, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;MD Anderson Cancer Center at the University of Texas, USA;University of Illinois, Chicago, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030437;10.1109/tvcg.2011.185;10.1109/tvcg.2018.2864477;10.1109/tvcg.2018.2865043;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2745280;10.1109/tvcg.2014.2346682;10.1109/infvis.1997.636793;10.1109/tvcg.2014.2346591;10.1109/tvcg.2018.2864849;10.1109/tvcg.2017.2744459;10.1109/visual.2005.1532781;10.1109/tvcg.2008.155;10.1109/tvcg.2009.187;10.1109/tvcg.2019.2934546;10.1109/tvcg.2018.2865027;10.1109/tvcg.2013.161;10.1109/tvcg.2015.2467325",
                "AuthorKeywords": "Temporal Data,Application Motivated Visualization,Life Sciences,Mixed Initiative Human-Machine Analysis",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 105,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 293,
                "i": [
                    293
                ]
            }
        },
        {
            "name": "Jules Vidal",
            "value": 30,
            "numPapers": 42,
            "cluster": "11",
            "visible": 1,
            "index": 911,
            "x": 296.9249674623504,
            "y": -54.64031201852835,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Wasserstein Distances, Geodesics and Barycenters of Merge Trees",
                "DOI": "10.1109/tvcg.2021.3114839",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114839",
                "FirstPage": 291,
                "LastPage": 301,
                "PaperType": "J",
                "Abstract": "This paper presents a unified computational framework for the estimation of distances, geodesics and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computations of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the $L$2-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely, the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm which can be generically applied to distance, geodesic, barycenter or cluster computation. The task-based nature of our approach enables further accelerations with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach - with barycenter computations in the orders of minutes for the largest examples - as well as its qualitative ability to generate representative barycenter merge trees, visually summarizing the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.",
                "AuthorNamesDeduped": "Mathieu Pont;Jules Vidal;Julie Delon;Julien Tierny",
                "AuthorNames": "Mathieu Pont;Jules Vidal;Julie Delon;Julien Tierny",
                "AuthorAffiliation": "Sorbonne Université and CNRS, France;Sorbonne Université and CNRS, France;University of Paris, France;Sorbonne Université and CNRS, France",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2018.2864505;10.1109/tvcg.2015.2467958;10.1109/tvcg.2017.2743980;10.1109/visual.2004.96;10.1109/tvcg.2018.2864432;10.1109/tvcg.2015.2467204;10.1109/tvcg.2014.2346403;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2013.141;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346455;10.1109/tvcg.2010.181;10.1109/tvcg.2012.249;10.1109/tvcg.2009.163;10.1109/tvcg.2019.2934256;10.1109/tvcg.2013.143;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Topological data analysis,merge trees,scalar data,ensemble data",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 119,
                "DownloadsXplore": 548,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 294,
                "i": [
                    294
                ]
            }
        },
        {
            "name": "Hamish A. Carr",
            "value": 215,
            "numPapers": 74,
            "cluster": "11",
            "visible": 1,
            "index": 912,
            "x": -182.13405652756356,
            "y": 240.99208587174448,
            "vy": 0,
            "vx": 0,
            "r": 1.2475532527345998,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Interactive Comparison of Scalar fields Based on Largest Contours with Applications to Flow Visualization",
                "DOI": "10.1109/tvcg.2008.143",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.143",
                "FirstPage": 1475,
                "LastPage": 1482,
                "PaperType": "J",
                "Abstract": "Understanding fluid flow data, especially vortices, is still a challenging task. Sophisticated visualization tools help to gain insight. In this paper, we present a novel approach for the interactive comparison of scalar fields using isosurfaces, and its application to fluid flow datasets. Features in two scalar fields are defined by largest contour segmentation after topological simplification. These features are matched using a volumetric similarity measure based on spatial overlap of individual features. The relationships defined by this similarity measure are ranked and presented in a thumbnail gallery of feature pairs and a graph representation showing all relationships between individual contours. Additionally, linked views of the contour trees are provided to ease navigation. The main render view shows the selected features overlapping each other. Thus, by displaying individual features and their relationships in a structured fashion, we enable exploratory visualization of correlations between similar structures in two scalar fields. We demonstrate the utility of our approach by applying it to a number of complex fluid flow datasets, where the emphasis is put on the comparison of vortex related scalar quantities.",
                "AuthorNamesDeduped": "Dominic Schneider;Alexander Wiebel;Hamish A. Carr;Mario Hlawitschka;Gerik Scheuermann",
                "AuthorNames": "Dominic Schneider;Alexander Wiebel;Hamish Carr;Mario Hlawitschka;Gerik Scheuermann",
                "AuthorAffiliation": "University of Leipzig, Germany;University of Leipzig, Germany;University College Dublin, Ireland;University of Leipzig, Germany;University of Leipzig, Germany",
                "InternalReferences": "0.1109/tvcg.2006.164;10.1109/visual.2001.964519;10.1109/visual.2004.107;10.1109/tvcg.2007.70615;10.1109/visual.2005.1532830;10.1109/tvcg.2006.165;10.1109/visual.2004.96;10.1109/visual.2003.1250374;10.1109/tvcg.2007.70519;10.1109/visual.2005.1532848;10.1109/visual.1997.663875;10.1109/visual.2005.1532835",
                "AuthorKeywords": "Scalar topology, comparative visualization, contour tree, largest contours, flow visualization",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 625,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2047,
                "i": [
                    2047
                ]
            }
        },
        {
            "name": "Jack Snoeyink",
            "value": 90,
            "numPapers": 13,
            "cluster": "11",
            "visible": 1,
            "index": 913,
            "x": -28.503435574218212,
            "y": -300.89458978264526,
            "vy": 0,
            "vx": 0,
            "r": 1.1036269430051813,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Simplifying flexible isosurfaces using local geometric measures",
                "DOI": "10.1109/visual.2004.96",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.96",
                "FirstPage": 497,
                "LastPage": 504,
                "PaperType": "C",
                "Abstract": "The contour tree, an abstraction of a scalar field that encodes the nesting relationships of isosurfaces, can be used to accelerate isosurface extraction, to identify important isovalues for volume-rendering transfer functions, and to guide exploratory visualization through a flexible isosurface interface. Many real-world data sets produce unmanageably large contour trees which require meaningful simplification. We define local geometric measures for individual contours, such as surface area and contained volume, and provide an algorithm to compute these measures in a contour tree. We then use these geometric measures to simplify the contour trees, suppressing minor topological features of the data. We combine this with a flexible isosurface interface to allow users to explore individual contours of a dataset interactively.",
                "AuthorNamesDeduped": "Hamish A. Carr;Jack Snoeyink;Michiel van de Panne",
                "AuthorNames": "H. Carr;J. Snoeyink;M. van de Panne",
                "AuthorAffiliation": "Department of Computer Science, University of British Columbia, Canada;Department of Computer Science, University of North Carolina, Chapel Hill, USA;Department of Computer Science, University of British Columbia, Canada",
                "InternalReferences": "0.1109/visual.2001.964499;10.1109/visual.2002.1183774;10.1109/visual.2003.1250365;10.1109/visual.1997.663875",
                "AuthorKeywords": "Isosurfaces, contourtrees, topological simplification",
                "AminerCitationCount": 286,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 357,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2513,
                "i": [
                    2513
                ]
            }
        },
        {
            "name": "Michiel van de Panne",
            "value": 69,
            "numPapers": 3,
            "cluster": "11",
            "visible": 1,
            "index": 914,
            "x": 224.39164028588795,
            "y": 202.72738288107175,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Simplifying flexible isosurfaces using local geometric measures",
                "DOI": "10.1109/visual.2004.96",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.96",
                "FirstPage": 497,
                "LastPage": 504,
                "PaperType": "C",
                "Abstract": "The contour tree, an abstraction of a scalar field that encodes the nesting relationships of isosurfaces, can be used to accelerate isosurface extraction, to identify important isovalues for volume-rendering transfer functions, and to guide exploratory visualization through a flexible isosurface interface. Many real-world data sets produce unmanageably large contour trees which require meaningful simplification. We define local geometric measures for individual contours, such as surface area and contained volume, and provide an algorithm to compute these measures in a contour tree. We then use these geometric measures to simplify the contour trees, suppressing minor topological features of the data. We combine this with a flexible isosurface interface to allow users to explore individual contours of a dataset interactively.",
                "AuthorNamesDeduped": "Hamish A. Carr;Jack Snoeyink;Michiel van de Panne",
                "AuthorNames": "H. Carr;J. Snoeyink;M. van de Panne",
                "AuthorAffiliation": "Department of Computer Science, University of British Columbia, Canada;Department of Computer Science, University of North Carolina, Chapel Hill, USA;Department of Computer Science, University of British Columbia, Canada",
                "InternalReferences": "0.1109/visual.2001.964499;10.1109/visual.2002.1183774;10.1109/visual.2003.1250365;10.1109/visual.1997.663875",
                "AuthorKeywords": "Isosurfaces, contourtrees, topological simplification",
                "AminerCitationCount": 286,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 357,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2513,
                "i": [
                    2513
                ]
            }
        },
        {
            "name": "Julie Delon",
            "value": 13,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 915,
            "x": -302.5650825265074,
            "y": 2.090654377886265,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Wasserstein Distances, Geodesics and Barycenters of Merge Trees",
                "DOI": "10.1109/tvcg.2021.3114839",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114839",
                "FirstPage": 291,
                "LastPage": 301,
                "PaperType": "J",
                "Abstract": "This paper presents a unified computational framework for the estimation of distances, geodesics and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computations of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the $L$2-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely, the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm which can be generically applied to distance, geodesic, barycenter or cluster computation. The task-based nature of our approach enables further accelerations with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach - with barycenter computations in the orders of minutes for the largest examples - as well as its qualitative ability to generate representative barycenter merge trees, visually summarizing the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.",
                "AuthorNamesDeduped": "Mathieu Pont;Jules Vidal;Julie Delon;Julien Tierny",
                "AuthorNames": "Mathieu Pont;Jules Vidal;Julie Delon;Julien Tierny",
                "AuthorAffiliation": "Sorbonne Université and CNRS, France;Sorbonne Université and CNRS, France;University of Paris, France;Sorbonne Université and CNRS, France",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2018.2864505;10.1109/tvcg.2015.2467958;10.1109/tvcg.2017.2743980;10.1109/visual.2004.96;10.1109/tvcg.2018.2864432;10.1109/tvcg.2015.2467204;10.1109/tvcg.2014.2346403;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2013.141;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346455;10.1109/tvcg.2010.181;10.1109/tvcg.2012.249;10.1109/tvcg.2009.163;10.1109/tvcg.2019.2934256;10.1109/tvcg.2013.143;10.1109/tvcg.2019.2934242",
                "AuthorKeywords": "Topological data analysis,merge trees,scalar data,ensemble data",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 119,
                "DownloadsXplore": 548,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 294,
                "i": [
                    294
                ]
            }
        },
        {
            "name": "David Günther",
            "value": 70,
            "numPapers": 18,
            "cluster": "11",
            "visible": 1,
            "index": 916,
            "x": 221.8109015002436,
            "y": -206.03379328558998,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Conforming Morse-Smale Complexes",
                "DOI": "10.1109/tvcg.2014.2346434",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346434",
                "FirstPage": 2595,
                "LastPage": 2603,
                "PaperType": "J",
                "Abstract": "Morse-Smale (MS) complexes have been gaining popularity as a tool for feature-driven data analysis and visualization. However, the quality of their geometric embedding and the sole dependence on the input scalar field data can limit their applicability when expressing application-dependent features. In this paper we introduce a new combinatorial technique to compute an MS complex that conforms to both an input scalar field and an additional, prior segmentation of the domain. The segmentation constrains the MS complex computation guaranteeing that boundaries in the segmentation are captured as separatrices of the MS complex. We demonstrate the utility and versatility of our approach with two applications. First, we use streamline integration to determine numerically computed basins/mountains and use the resulting segmentation as an input to our algorithm. This strategy enables the incorporation of prior flow path knowledge, effectively resulting in an MS complex that is as geometrically accurate as the employed numerical integration. Our second use case is motivated by the observation that often the data itself does not explicitly contain features known to be present by a domain expert. We introduce edit operations for MS complexes so that a user can directly modify their features while maintaining all the advantages of a robust topology-based representation.",
                "AuthorNamesDeduped": "Attila Gyulassy;David Günther;Joshua A. Levine;Julien Tierny;Valerio Pascucci",
                "AuthorNames": "Attila Gyulassy;David Günther;Joshua A. Levine;Julien Tierny;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI, Paris, France;School of Computing, Visual Computing Division, Clemson University, Clemson, SC, USA;Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI, CNRS LIP6, UPMC, Sorbonne Universites, Paris, France;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2011.249;10.1109/tvcg.2008.110;10.1109/tvcg.2007.70603;10.1109/tvcg.2006.186;10.1109/tvcg.2012.228;10.1109/tvcg.2012.209;10.1109/visual.2005.1532839",
                "AuthorKeywords": "Computational Topology, Morse-Smale Complex, Data Analysis",
                "AminerCitationCount": 60,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 513,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1216,
                "i": [
                    1216
                ]
            }
        },
        {
            "name": "Roberto Álvarez Boto",
            "value": 42,
            "numPapers": 7,
            "cluster": "11",
            "visible": 1,
            "index": 917,
            "x": -24.395926337752325,
            "y": 301.9185962774121,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Characterizing Molecular Interactions in Chemical Systems",
                "DOI": "10.1109/tvcg.2014.2346403",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346403",
                "FirstPage": 2476,
                "LastPage": 2485,
                "PaperType": "J",
                "Abstract": "Interactions between atoms have a major influence on the chemical properties of molecular systems. While covalent interactions impose the structural integrity of molecules, noncovalent interactions govern more subtle phenomena such as protein folding, bonding or self assembly. The understanding of these types of interactions is necessary for the interpretation of many biological processes and chemical design tasks. While traditionally the electron density is analyzed to interpret the quantum chemistry of a molecular system, noncovalent interactions are characterized by low electron densities and only slight variations of them - challenging their extraction and characterization. Recently, the signed electron density and the reduced gradient, two scalar fields derived from the electron density, have drawn much attention in quantum chemistry since they enable a qualitative visualization of these interactions even in complex molecular systems and experimental measurements. In this work, we present the first combinatorial algorithm for the automated extraction and characterization of covalent and noncovalent interactions in molecular systems. The proposed algorithm is based on a joint topological analysis of the signed electron density and the reduced gradient. Combining the connectivity information of the critical points of these two scalar fields enables to visualize, enumerate, classify and investigate molecular interactions in a robust manner. Experiments on a variety of molecular systems, from simple dimers to proteins or DNA, demonstrate the ability of our technique to robustly extract these interactions and to reveal their structural relations to the atoms and bonds forming the molecules. For simple systems, our analysis corroborates the observations made by the chemists while it provides new visual and quantitative insights on chemical interactions for larger molecular systems.",
                "AuthorNamesDeduped": "David Günther;Roberto Álvarez Boto;Julia Contreras-García;Jean-Philip Piquemal;Julien Tierny",
                "AuthorNames": "David Günther;Roberto A. Boto;Juila Contreras-Garcia;Jean-Philip Piquemal;Julien Tierny",
                "AuthorAffiliation": "Institut-Mines-Télécom, Télécom Paris'Iech, CNRS LTCI, Paris, France;Sorbonne Universités, UMR 7616, Laboratoire de Chimie Théorique, Paris, France;Sorbonne Universités, UMR 7616, LCT, Paris, France;Sorbonne Universités, UMR 7616, LCT, Paris, France;CNRS LIP6, UPMC, Télécom Paris'Iech, Paris, France",
                "InternalReferences": "0.1109/tvcg.2009.163;10.1109/visual.2004.96;10.1109/visual.2003.1250376;10.1109/tvcg.2008.110;10.1109/tvcg.2009.157;10.1109/tvcg.2011.259;10.1109/tvcg.2007.70578;10.1109/tvcg.2013.158",
                "AuthorKeywords": "Molecular Chemistry, Topological Data Analysis, Morse-Smale Complex, Join Tree",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 711,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1211,
                "i": [
                    1211
                ]
            }
        },
        {
            "name": "Julia Contreras-García",
            "value": 42,
            "numPapers": 7,
            "cluster": "11",
            "visible": 1,
            "index": 918,
            "x": -186.05559449654933,
            "y": -239.2348548111993,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Characterizing Molecular Interactions in Chemical Systems",
                "DOI": "10.1109/tvcg.2014.2346403",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346403",
                "FirstPage": 2476,
                "LastPage": 2485,
                "PaperType": "J",
                "Abstract": "Interactions between atoms have a major influence on the chemical properties of molecular systems. While covalent interactions impose the structural integrity of molecules, noncovalent interactions govern more subtle phenomena such as protein folding, bonding or self assembly. The understanding of these types of interactions is necessary for the interpretation of many biological processes and chemical design tasks. While traditionally the electron density is analyzed to interpret the quantum chemistry of a molecular system, noncovalent interactions are characterized by low electron densities and only slight variations of them - challenging their extraction and characterization. Recently, the signed electron density and the reduced gradient, two scalar fields derived from the electron density, have drawn much attention in quantum chemistry since they enable a qualitative visualization of these interactions even in complex molecular systems and experimental measurements. In this work, we present the first combinatorial algorithm for the automated extraction and characterization of covalent and noncovalent interactions in molecular systems. The proposed algorithm is based on a joint topological analysis of the signed electron density and the reduced gradient. Combining the connectivity information of the critical points of these two scalar fields enables to visualize, enumerate, classify and investigate molecular interactions in a robust manner. Experiments on a variety of molecular systems, from simple dimers to proteins or DNA, demonstrate the ability of our technique to robustly extract these interactions and to reveal their structural relations to the atoms and bonds forming the molecules. For simple systems, our analysis corroborates the observations made by the chemists while it provides new visual and quantitative insights on chemical interactions for larger molecular systems.",
                "AuthorNamesDeduped": "David Günther;Roberto Álvarez Boto;Julia Contreras-García;Jean-Philip Piquemal;Julien Tierny",
                "AuthorNames": "David Günther;Roberto A. Boto;Juila Contreras-Garcia;Jean-Philip Piquemal;Julien Tierny",
                "AuthorAffiliation": "Institut-Mines-Télécom, Télécom Paris'Iech, CNRS LTCI, Paris, France;Sorbonne Universités, UMR 7616, Laboratoire de Chimie Théorique, Paris, France;Sorbonne Universités, UMR 7616, LCT, Paris, France;Sorbonne Universités, UMR 7616, LCT, Paris, France;CNRS LIP6, UPMC, Télécom Paris'Iech, Paris, France",
                "InternalReferences": "0.1109/tvcg.2009.163;10.1109/visual.2004.96;10.1109/visual.2003.1250376;10.1109/tvcg.2008.110;10.1109/tvcg.2009.157;10.1109/tvcg.2011.259;10.1109/tvcg.2007.70578;10.1109/tvcg.2013.158",
                "AuthorKeywords": "Molecular Chemistry, Topological Data Analysis, Morse-Smale Complex, Join Tree",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 711,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1211,
                "i": [
                    1211
                ]
            }
        },
        {
            "name": "Jean-Philip Piquemal",
            "value": 42,
            "numPapers": 7,
            "cluster": "11",
            "visible": 1,
            "index": 919,
            "x": 298.95503558396325,
            "y": 50.75319397822426,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Characterizing Molecular Interactions in Chemical Systems",
                "DOI": "10.1109/tvcg.2014.2346403",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346403",
                "FirstPage": 2476,
                "LastPage": 2485,
                "PaperType": "J",
                "Abstract": "Interactions between atoms have a major influence on the chemical properties of molecular systems. While covalent interactions impose the structural integrity of molecules, noncovalent interactions govern more subtle phenomena such as protein folding, bonding or self assembly. The understanding of these types of interactions is necessary for the interpretation of many biological processes and chemical design tasks. While traditionally the electron density is analyzed to interpret the quantum chemistry of a molecular system, noncovalent interactions are characterized by low electron densities and only slight variations of them - challenging their extraction and characterization. Recently, the signed electron density and the reduced gradient, two scalar fields derived from the electron density, have drawn much attention in quantum chemistry since they enable a qualitative visualization of these interactions even in complex molecular systems and experimental measurements. In this work, we present the first combinatorial algorithm for the automated extraction and characterization of covalent and noncovalent interactions in molecular systems. The proposed algorithm is based on a joint topological analysis of the signed electron density and the reduced gradient. Combining the connectivity information of the critical points of these two scalar fields enables to visualize, enumerate, classify and investigate molecular interactions in a robust manner. Experiments on a variety of molecular systems, from simple dimers to proteins or DNA, demonstrate the ability of our technique to robustly extract these interactions and to reveal their structural relations to the atoms and bonds forming the molecules. For simple systems, our analysis corroborates the observations made by the chemists while it provides new visual and quantitative insights on chemical interactions for larger molecular systems.",
                "AuthorNamesDeduped": "David Günther;Roberto Álvarez Boto;Julia Contreras-García;Jean-Philip Piquemal;Julien Tierny",
                "AuthorNames": "David Günther;Roberto A. Boto;Juila Contreras-Garcia;Jean-Philip Piquemal;Julien Tierny",
                "AuthorAffiliation": "Institut-Mines-Télécom, Télécom Paris'Iech, CNRS LTCI, Paris, France;Sorbonne Universités, UMR 7616, Laboratoire de Chimie Théorique, Paris, France;Sorbonne Universités, UMR 7616, LCT, Paris, France;Sorbonne Universités, UMR 7616, LCT, Paris, France;CNRS LIP6, UPMC, Télécom Paris'Iech, Paris, France",
                "InternalReferences": "0.1109/tvcg.2009.163;10.1109/visual.2004.96;10.1109/visual.2003.1250376;10.1109/tvcg.2008.110;10.1109/tvcg.2009.157;10.1109/tvcg.2011.259;10.1109/tvcg.2007.70578;10.1109/tvcg.2013.158",
                "AuthorKeywords": "Molecular Chemistry, Topological Data Analysis, Morse-Smale Complex, Join Tree",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 711,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1211,
                "i": [
                    1211
                ]
            }
        },
        {
            "name": "Aaron Knoll",
            "value": 167,
            "numPapers": 36,
            "cluster": "11",
            "visible": 1,
            "index": 920,
            "x": -254.86190340484214,
            "y": 164.60683519483908,
            "vy": 0,
            "vx": 0,
            "r": 1.1922855497985032,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "OSPRay - A CPU Ray Tracing Framework for Scientific Visualization",
                "DOI": "10.1109/tvcg.2016.2599041",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599041",
                "FirstPage": 931,
                "LastPage": 940,
                "PaperType": "J",
                "Abstract": "Scientific data is continually increasing in complexity, variety and size, making efficient visualization and specifically rendering an ongoing challenge. Traditional rasterization-based visualization approaches encounter performance and quality limitations, particularly in HPC environments without dedicated rendering hardware. In this paper, we present OSPRay, a turn-key CPU ray tracing framework oriented towards production-use scientific visualization which can utilize varying SIMD widths and multiple device backends found across diverse HPC resources. This framework provides a high-quality, efficient CPU-based solution for typical visualization workloads, which has already been integrated into several prevalent visualization packages. We show that this system delivers the performance, high-level API simplicity, and modular device support needed to provide a compelling new rendering framework for implementing efficient scientific visualization workflows.",
                "AuthorNamesDeduped": "Ingo Wald;Gregory P. Johnson;Jefferson Amstutz;Carson Brownlee;Aaron Knoll;Jim Jeffers;Johannes Günther 0001;Paul A. Navrátil",
                "AuthorNames": "I Wald;GP Johnson;J Amstutz;C Brownlee;A Knoll;J Jeffers;J Günther;P Navratil",
                "AuthorAffiliation": "Intel Corp;Intel Corp;Intel Corp;Texas Advanced Computing Center and Intel Corp;Argonne National Laboratory and SCI Insitute, University of Utah;Intel Corp;Intel Corp;Texas Advanced Computing Center",
                "InternalReferences": "0.1109/scivis.2015.7429492;10.1109/tvcg.2010.173;10.1109/tvcg.2015.2467963",
                "AuthorKeywords": null,
                "AminerCitationCount": 190,
                "CitationCountCrossRef": 114,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2017,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 925,
                "i": [
                    925
                ]
            }
        },
        {
            "name": "Kah Chun Lau",
            "value": 56,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 921,
            "x": 76.77858711334348,
            "y": -293.69209822683126,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Interstitial and Interlayer Ion Diffusion Geometry Extraction in Graphitic Nanosphere Battery Materials",
                "DOI": "10.1109/tvcg.2015.2467432",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467432",
                "FirstPage": 916,
                "LastPage": 925,
                "PaperType": "J",
                "Abstract": "Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.",
                "AuthorNamesDeduped": "Attila Gyulassy;Aaron Knoll;Kah Chun Lau;Bei Wang 0001;Peer-Timo Bremer;Michael E. Papka;Larry A. Curtiss;Valerio Pascucci",
                "AuthorNames": "Attila Gyulassy;Aaron Knoll;Kah Chun Lau;Bei Wang;Peer-Timo Bremer;Michael E. Papka;Larry A. Curtiss;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;Materials Science Division, Argonne National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Materials Science Division, Argonne National Laboratory;Materials Science Division, Argonne National Laboratory;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/visual.2005.1532795;10.1109/tvcg.2011.244;10.1109/tvcg.2014.2346403;10.1109/visual.2005.1532839;10.1109/tvcg.2011.259",
                "AuthorKeywords": "materials science, morse-smale, topology, Delaunay, computational geometry",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1061,
                "i": [
                    1061
                ]
            }
        },
        {
            "name": "Michael E. Papka",
            "value": 66,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 922,
            "x": 141.84892494636438,
            "y": 268.5682082666537,
            "vy": 0,
            "vx": 0,
            "r": 1.075993091537133,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Interstitial and Interlayer Ion Diffusion Geometry Extraction in Graphitic Nanosphere Battery Materials",
                "DOI": "10.1109/tvcg.2015.2467432",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467432",
                "FirstPage": 916,
                "LastPage": 925,
                "PaperType": "J",
                "Abstract": "Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.",
                "AuthorNamesDeduped": "Attila Gyulassy;Aaron Knoll;Kah Chun Lau;Bei Wang 0001;Peer-Timo Bremer;Michael E. Papka;Larry A. Curtiss;Valerio Pascucci",
                "AuthorNames": "Attila Gyulassy;Aaron Knoll;Kah Chun Lau;Bei Wang;Peer-Timo Bremer;Michael E. Papka;Larry A. Curtiss;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;Materials Science Division, Argonne National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Materials Science Division, Argonne National Laboratory;Materials Science Division, Argonne National Laboratory;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/visual.2005.1532795;10.1109/tvcg.2011.244;10.1109/tvcg.2014.2346403;10.1109/visual.2005.1532839;10.1109/tvcg.2011.259",
                "AuthorKeywords": "materials science, morse-smale, topology, Delaunay, computational geometry",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1061,
                "i": [
                    1061
                ]
            }
        },
        {
            "name": "Larry A. Curtiss",
            "value": 56,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 923,
            "x": -286.16517776676943,
            "y": -102.27165312887657,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Interstitial and Interlayer Ion Diffusion Geometry Extraction in Graphitic Nanosphere Battery Materials",
                "DOI": "10.1109/tvcg.2015.2467432",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467432",
                "FirstPage": 916,
                "LastPage": 925,
                "PaperType": "J",
                "Abstract": "Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.",
                "AuthorNamesDeduped": "Attila Gyulassy;Aaron Knoll;Kah Chun Lau;Bei Wang 0001;Peer-Timo Bremer;Michael E. Papka;Larry A. Curtiss;Valerio Pascucci",
                "AuthorNames": "Attila Gyulassy;Aaron Knoll;Kah Chun Lau;Bei Wang;Peer-Timo Bremer;Michael E. Papka;Larry A. Curtiss;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;Materials Science Division, Argonne National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Materials Science Division, Argonne National Laboratory;Materials Science Division, Argonne National Laboratory;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/visual.2005.1532795;10.1109/tvcg.2011.244;10.1109/tvcg.2014.2346403;10.1109/visual.2005.1532839;10.1109/tvcg.2011.259",
                "AuthorKeywords": "materials science, morse-smale, topology, Delaunay, computational geometry",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1061,
                "i": [
                    1061
                ]
            }
        },
        {
            "name": "Jens Kasten",
            "value": 53,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 924,
            "x": 280.244411542497,
            "y": -117.9536764971723,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Two-Dimensional Time-Dependent Vortex Regions Based on the Acceleration Magnitude",
                "DOI": "10.1109/tvcg.2011.249",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.249",
                "FirstPage": 2080,
                "LastPage": 2087,
                "PaperType": "J",
                "Abstract": "Acceleration is a fundamental quantity of flow fields that captures Galilean invariant properties of particle motion. Considering the magnitude of this field, minima represent characteristic structures of the flow that can be classified as saddle- or vortex-like. We made the interesting observation that vortex-like minima are enclosed by particularly pronounced ridges. This makes it possible to define boundaries of vortex regions in a parameter-free way. Utilizing scalar field topology, a robust algorithm can be designed to extract such boundaries. They can be arbitrarily shaped. An efficient tracking algorithm allows us to display the temporal evolution of vortices. Various vortex models are used to evaluate the method. We apply our method to two-dimensional model systems from computational fluid dynamics and compare the results to those arising from existing definitions.",
                "AuthorNamesDeduped": "Jens Kasten;Jan Reininghaus;Ingrid Hotz;Hans-Christian Hege",
                "AuthorNames": "Jens Kasten;Jan Reininghaus;Ingrid Hotz;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/visual.2005.1532830;10.1109/visual.2004.107;10.1109/tvcg.2008.143;10.1109/visual.2002.1183821;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Vortex regions, time-dependent flow fields, feature extraction",
                "AminerCitationCount": 101,
                "CitationCountCrossRef": 52,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 573,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1641,
                "i": [
                    1641
                ]
            }
        },
        {
            "name": "Jan Reininghaus",
            "value": 57,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 925,
            "x": -127.03559301976495,
            "y": 276.42712983011756,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Two-Dimensional Time-Dependent Vortex Regions Based on the Acceleration Magnitude",
                "DOI": "10.1109/tvcg.2011.249",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.249",
                "FirstPage": 2080,
                "LastPage": 2087,
                "PaperType": "J",
                "Abstract": "Acceleration is a fundamental quantity of flow fields that captures Galilean invariant properties of particle motion. Considering the magnitude of this field, minima represent characteristic structures of the flow that can be classified as saddle- or vortex-like. We made the interesting observation that vortex-like minima are enclosed by particularly pronounced ridges. This makes it possible to define boundaries of vortex regions in a parameter-free way. Utilizing scalar field topology, a robust algorithm can be designed to extract such boundaries. They can be arbitrarily shaped. An efficient tracking algorithm allows us to display the temporal evolution of vortices. Various vortex models are used to evaluate the method. We apply our method to two-dimensional model systems from computational fluid dynamics and compare the results to those arising from existing definitions.",
                "AuthorNamesDeduped": "Jens Kasten;Jan Reininghaus;Ingrid Hotz;Hans-Christian Hege",
                "AuthorNames": "Jens Kasten;Jan Reininghaus;Ingrid Hotz;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/visual.2005.1532830;10.1109/visual.2004.107;10.1109/tvcg.2008.143;10.1109/visual.2002.1183821;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Vortex regions, time-dependent flow fields, feature extraction",
                "AminerCitationCount": 101,
                "CitationCountCrossRef": 52,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 573,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1641,
                "i": [
                    1641
                ]
            }
        },
        {
            "name": "Ingrid Hotz",
            "value": 102,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 926,
            "x": -93.10200806771158,
            "y": -289.79650807723647,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Two-Dimensional Time-Dependent Vortex Regions Based on the Acceleration Magnitude",
                "DOI": "10.1109/tvcg.2011.249",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.249",
                "FirstPage": 2080,
                "LastPage": 2087,
                "PaperType": "J",
                "Abstract": "Acceleration is a fundamental quantity of flow fields that captures Galilean invariant properties of particle motion. Considering the magnitude of this field, minima represent characteristic structures of the flow that can be classified as saddle- or vortex-like. We made the interesting observation that vortex-like minima are enclosed by particularly pronounced ridges. This makes it possible to define boundaries of vortex regions in a parameter-free way. Utilizing scalar field topology, a robust algorithm can be designed to extract such boundaries. They can be arbitrarily shaped. An efficient tracking algorithm allows us to display the temporal evolution of vortices. Various vortex models are used to evaluate the method. We apply our method to two-dimensional model systems from computational fluid dynamics and compare the results to those arising from existing definitions.",
                "AuthorNamesDeduped": "Jens Kasten;Jan Reininghaus;Ingrid Hotz;Hans-Christian Hege",
                "AuthorNames": "Jens Kasten;Jan Reininghaus;Ingrid Hotz;Hans-Christian Hege",
                "AuthorAffiliation": "Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany;Zuse Institute Berlin, Germany",
                "InternalReferences": "0.1109/visual.2005.1532830;10.1109/visual.2004.107;10.1109/tvcg.2008.143;10.1109/visual.2002.1183821;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Vortex regions, time-dependent flow fields, feature extraction",
                "AminerCitationCount": 101,
                "CitationCountCrossRef": 52,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 573,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1641,
                "i": [
                    1641
                ]
            }
        },
        {
            "name": "Eddie Simon",
            "value": 23,
            "numPapers": 2,
            "cluster": "11",
            "visible": 1,
            "index": 927,
            "x": 264.5479036389365,
            "y": 150.8787814115822,
            "vy": 0,
            "vx": 0,
            "r": 1.0264824409902131,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Loop surgery for volumetric meshes: Reeb graphs reduced to contour trees",
                "DOI": "10.1109/tvcg.2009.163",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.163",
                "FirstPage": 1177,
                "LastPage": 1184,
                "PaperType": "J",
                "Abstract": "This paper introduces an efficient algorithm for computing the Reeb graph of a scalar function f defined on a volumetric mesh M in Ropf&lt;sup&gt;3&lt;/sup&gt;. We introduce a procedure called \"loop surgery\" that transforms M into a mesh M' by a sequence of cuts and guarantees the Reeb graph of f(M') to be loop free. Therefore, loop surgery reduces Reeb graph computation to the simpler problem of computing a contour tree, for which well-known algorithms exist that are theoretically efficient (O(n log n)) and fast in practice. Inverse cuts reconstruct the loops removed at the beginning. The time complexity of our algorithm is that of a contour tree computation plus a loop surgery overhead, which depends on the number of handles of the mesh. Our systematic experiments confirm that for real-life data, this overhead is comparable to the computation of the contour tree, demonstrating virtually linear scalability on meshes ranging from 70 thousand to 3.5 million tetrahedra. Performance numbers show that our algorithm, although restricted to volumetric data, has an average speedup factor of 6,500 over the previous fastest techniques, handling larger and more complex data-sets. We demonstrate the verstility of our approach by extending fast topologically clean isosurface extraction to non simply-connected domains. We apply this technique in the context of pressure analysis for mechanical design. In this case, our technique produces results in matter of seconds even for the largest meshes. For the same models, previous Reeb graph techniques do not produce a result.",
                "AuthorNamesDeduped": "Julien Tierny;Attila Gyulassy;Eddie Simon;Valerio Pascucci",
                "AuthorNames": "Julien Tierny;Attila Gyulassy;Eddie Simon;Valerio Pascucci",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Dassault Systèmes, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.2004.96;10.1109/tvcg.2007.70601;10.1109/visual.1997.663875",
                "AuthorKeywords": "Reeb graph, scalar field topology, isosurfaces, topological simplification",
                "AminerCitationCount": 121,
                "CitationCountCrossRef": 65,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 423,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1915,
                "i": [
                    1915
                ]
            }
        },
        {
            "name": "Yuhua Zhou",
            "value": 15,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 928,
            "x": -297.1466007896584,
            "y": 67.4825728551558,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "A Visualization Approach for Monitoring Order Processing in E-Commerce Warehouse",
                "DOI": "10.1109/tvcg.2021.3114878",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114878",
                "FirstPage": 857,
                "LastPage": 867,
                "PaperType": "J",
                "Abstract": "The efficiency of warehouses is vital to e-commerce. Fast order processing at the warehouses ensures timely deliveries and improves customer satisfaction. However, monitoring, analyzing, and manipulating order processing in the warehouses in real time are challenging for traditional methods due to the sheer volume of incoming orders, the fuzzy definition of delayed order patterns, and the complex decision-making of order handling priorities. In this paper, we adopt a data-driven approach and propose OrderMonitor, a visual analytics system that assists warehouse managers in analyzing and improving order processing efficiency in real time based on streaming warehouse event data. Specifically, the order processing pipeline is visualized with a novel pipeline design based on the sedimentation metaphor to facilitate real-time order monitoring and suggest potentially abnormal orders. We also design a novel visualization that depicts order timelines based on the Gantt charts and Marey's graphs. Such a visualization helps the managers gain insights into the performance of order processing and find major blockers for delayed orders. Furthermore, an evaluating view is provided to assist users in inspecting order details and assigning priorities to improve the processing performance. The effectiveness of OrderMonitor is evaluated with two case studies on a real-world warehouse dataset.",
                "AuthorNamesDeduped": "Junxiu Tang;Yuhua Zhou;Tan Tang;Di Weng;Boyang Xie;Lingyun Yu 0001;Huaqiang Zhang;Yingcai Wu",
                "AuthorNames": "Junxiu Tang;Yuhua Zhou;Tan Tang;Di Weng;Boyang Xie;Lingyun Yu;Huaqiang Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computing, Xi' an Jiaotong-Liverpool University, Suzhou, China;Alibaba Group, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2017.2744419;10.1109/tvcg.2012.291;10.1109/tvcg.2019.2934433;10.1109/tvcg.2013.173;10.1109/tvcg.2013.227;10.1109/tvcg.2014.2346454;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865018;10.1109/infvis.2002.1173149;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467592;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114781;10.1109/tvcg.2017.2745078;10.1109/tvcg.2020.3030337;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934613;10.1109/tvcg.2016.2598664;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Streaming data,time-series data,e-commerce warehouse,order processing",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 2012,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 296,
                "i": [
                    296
                ]
            }
        },
        {
            "name": "Boyang Xie",
            "value": 15,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 929,
            "x": 173.61625006691332,
            "y": -250.61404133189149,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "A Visualization Approach for Monitoring Order Processing in E-Commerce Warehouse",
                "DOI": "10.1109/tvcg.2021.3114878",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114878",
                "FirstPage": 857,
                "LastPage": 867,
                "PaperType": "J",
                "Abstract": "The efficiency of warehouses is vital to e-commerce. Fast order processing at the warehouses ensures timely deliveries and improves customer satisfaction. However, monitoring, analyzing, and manipulating order processing in the warehouses in real time are challenging for traditional methods due to the sheer volume of incoming orders, the fuzzy definition of delayed order patterns, and the complex decision-making of order handling priorities. In this paper, we adopt a data-driven approach and propose OrderMonitor, a visual analytics system that assists warehouse managers in analyzing and improving order processing efficiency in real time based on streaming warehouse event data. Specifically, the order processing pipeline is visualized with a novel pipeline design based on the sedimentation metaphor to facilitate real-time order monitoring and suggest potentially abnormal orders. We also design a novel visualization that depicts order timelines based on the Gantt charts and Marey's graphs. Such a visualization helps the managers gain insights into the performance of order processing and find major blockers for delayed orders. Furthermore, an evaluating view is provided to assist users in inspecting order details and assigning priorities to improve the processing performance. The effectiveness of OrderMonitor is evaluated with two case studies on a real-world warehouse dataset.",
                "AuthorNamesDeduped": "Junxiu Tang;Yuhua Zhou;Tan Tang;Di Weng;Boyang Xie;Lingyun Yu 0001;Huaqiang Zhang;Yingcai Wu",
                "AuthorNames": "Junxiu Tang;Yuhua Zhou;Tan Tang;Di Weng;Boyang Xie;Lingyun Yu;Huaqiang Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computing, Xi' an Jiaotong-Liverpool University, Suzhou, China;Alibaba Group, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2017.2744419;10.1109/tvcg.2012.291;10.1109/tvcg.2019.2934433;10.1109/tvcg.2013.173;10.1109/tvcg.2013.227;10.1109/tvcg.2014.2346454;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865018;10.1109/infvis.2002.1173149;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467592;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114781;10.1109/tvcg.2017.2745078;10.1109/tvcg.2020.3030337;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934613;10.1109/tvcg.2016.2598664;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Streaming data,time-series data,e-commerce warehouse,order processing",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 2012,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 296,
                "i": [
                    296
                ]
            }
        },
        {
            "name": "Huaqiang Zhang",
            "value": 15,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 930,
            "x": 41.290326133122115,
            "y": 302.23353382412813,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "A Visualization Approach for Monitoring Order Processing in E-Commerce Warehouse",
                "DOI": "10.1109/tvcg.2021.3114878",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114878",
                "FirstPage": 857,
                "LastPage": 867,
                "PaperType": "J",
                "Abstract": "The efficiency of warehouses is vital to e-commerce. Fast order processing at the warehouses ensures timely deliveries and improves customer satisfaction. However, monitoring, analyzing, and manipulating order processing in the warehouses in real time are challenging for traditional methods due to the sheer volume of incoming orders, the fuzzy definition of delayed order patterns, and the complex decision-making of order handling priorities. In this paper, we adopt a data-driven approach and propose OrderMonitor, a visual analytics system that assists warehouse managers in analyzing and improving order processing efficiency in real time based on streaming warehouse event data. Specifically, the order processing pipeline is visualized with a novel pipeline design based on the sedimentation metaphor to facilitate real-time order monitoring and suggest potentially abnormal orders. We also design a novel visualization that depicts order timelines based on the Gantt charts and Marey's graphs. Such a visualization helps the managers gain insights into the performance of order processing and find major blockers for delayed orders. Furthermore, an evaluating view is provided to assist users in inspecting order details and assigning priorities to improve the processing performance. The effectiveness of OrderMonitor is evaluated with two case studies on a real-world warehouse dataset.",
                "AuthorNamesDeduped": "Junxiu Tang;Yuhua Zhou;Tan Tang;Di Weng;Boyang Xie;Lingyun Yu 0001;Huaqiang Zhang;Yingcai Wu",
                "AuthorNames": "Junxiu Tang;Yuhua Zhou;Tan Tang;Di Weng;Boyang Xie;Lingyun Yu;Huaqiang Zhang;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China;Department of Computing, Xi' an Jiaotong-Liverpool University, Suzhou, China;Alibaba Group, Hangzhou, China;State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2017.2744419;10.1109/tvcg.2012.291;10.1109/tvcg.2019.2934433;10.1109/tvcg.2013.173;10.1109/tvcg.2013.227;10.1109/tvcg.2014.2346454;10.1109/tvcg.2011.179;10.1109/tvcg.2016.2598432;10.1109/tvcg.2018.2865018;10.1109/infvis.2002.1173149;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467592;10.1109/tvcg.2012.213;10.1109/tvcg.2019.2934275;10.1109/tvcg.2021.3114781;10.1109/tvcg.2017.2745078;10.1109/tvcg.2020.3030337;10.1109/tvcg.2020.3030359;10.1109/tvcg.2019.2934613;10.1109/tvcg.2016.2598664;10.1109/tvcg.2020.3030392;10.1109/tvcg.2020.3030458",
                "AuthorKeywords": "Streaming data,time-series data,e-commerce warehouse,order processing",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 2012,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 296,
                "i": [
                    296
                ]
            }
        },
        {
            "name": "Xiaohua Sun",
            "value": 107,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 931,
            "x": -234.72804864640815,
            "y": -195.07112338490654,
            "vy": 0,
            "vx": 0,
            "r": 1.1232009211283822,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Whisper: Tracing the Spatiotemporal Process of Information Diffusion in Real Time",
                "DOI": "10.1109/tvcg.2012.291",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.291",
                "FirstPage": 2649,
                "LastPage": 2658,
                "PaperType": "J",
                "Abstract": "When and where is an idea dispersed? Social media, like Twitter, has been increasingly used for exchanging information, opinions and emotions about events that are happening across the world. Here we propose a novel visualization design, “Whisper”, for tracing the process of information diffusion in social media in real time. Our design highlights three major characteristics of diffusion processes in social media: the temporal trend, social-spatial extent, and community response of a topic of interest. Such social, spatiotemporal processes are conveyed based on a sunflower metaphor whose seeds are often dispersed far away. In Whisper, we summarize the collective responses of communities on a given topic based on how tweets were retweeted by groups of users, through representing the sentiments extracted from the tweets, and tracing the pathways of retweets on a spatial hierarchical layout. We use an efficient flux line-drawing algorithm to trace multiple pathways so the temporal and spatial patterns can be identified even for a bursty event. A focused diffusion series highlights key roles such as opinion leaders in the diffusion process. We demonstrate how our design facilitates the understanding of when and where a piece of information is dispersed and what are the social responses of the crowd, for large-scale events including political campaigns and natural disasters. Initial feedback from domain experts suggests promising use for today's information consumption and dispersion in the wild.",
                "AuthorNamesDeduped": "Nan Cao 0001;Yu-Ru Lin;Xiaohua Sun;David Lazer;Shixia Liu;Huamin Qu",
                "AuthorNames": "Nan Cao;Yu-Ru Lin;Xiaohua Sun;David Lazer;Shixia Liu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Northeastern University and Harvard University, USA;TongJi University, China;Northeastern University and Harvard University, USA;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/tvcg.2009.171;10.1109/tvcg.2006.147;10.1109/infvis.2000.885098;10.1109/tvcg.2006.202;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.129;10.1109/tvcg.2008.125;10.1109/tvcg.2011.188",
                "AuthorKeywords": "Information visualization, Information diffusion, Contagion, Social media, Microblogging, Spatiotemporal patterns",
                "AminerCitationCount": 200,
                "CitationCountCrossRef": 125,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 2596,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1403,
                "i": [
                    1403
                ]
            }
        },
        {
            "name": "David Lazer",
            "value": 107,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 932,
            "x": 305.01339834886164,
            "y": -14.725040837927283,
            "vy": 0,
            "vx": 0,
            "r": 1.1232009211283822,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Whisper: Tracing the Spatiotemporal Process of Information Diffusion in Real Time",
                "DOI": "10.1109/tvcg.2012.291",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.291",
                "FirstPage": 2649,
                "LastPage": 2658,
                "PaperType": "J",
                "Abstract": "When and where is an idea dispersed? Social media, like Twitter, has been increasingly used for exchanging information, opinions and emotions about events that are happening across the world. Here we propose a novel visualization design, “Whisper”, for tracing the process of information diffusion in social media in real time. Our design highlights three major characteristics of diffusion processes in social media: the temporal trend, social-spatial extent, and community response of a topic of interest. Such social, spatiotemporal processes are conveyed based on a sunflower metaphor whose seeds are often dispersed far away. In Whisper, we summarize the collective responses of communities on a given topic based on how tweets were retweeted by groups of users, through representing the sentiments extracted from the tweets, and tracing the pathways of retweets on a spatial hierarchical layout. We use an efficient flux line-drawing algorithm to trace multiple pathways so the temporal and spatial patterns can be identified even for a bursty event. A focused diffusion series highlights key roles such as opinion leaders in the diffusion process. We demonstrate how our design facilitates the understanding of when and where a piece of information is dispersed and what are the social responses of the crowd, for large-scale events including political campaigns and natural disasters. Initial feedback from domain experts suggests promising use for today's information consumption and dispersion in the wild.",
                "AuthorNamesDeduped": "Nan Cao 0001;Yu-Ru Lin;Xiaohua Sun;David Lazer;Shixia Liu;Huamin Qu",
                "AuthorNames": "Nan Cao;Yu-Ru Lin;Xiaohua Sun;David Lazer;Shixia Liu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Northeastern University and Harvard University, USA;TongJi University, China;Northeastern University and Harvard University, USA;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/tvcg.2009.171;10.1109/tvcg.2006.147;10.1109/infvis.2000.885098;10.1109/tvcg.2006.202;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.129;10.1109/tvcg.2008.125;10.1109/tvcg.2011.188",
                "AuthorKeywords": "Information visualization, Information diffusion, Contagion, Social media, Microblogging, Spatiotemporal patterns",
                "AminerCitationCount": 200,
                "CitationCountCrossRef": 125,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 2596,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1403,
                "i": [
                    1403
                ]
            }
        },
        {
            "name": "Tiankai Xie",
            "value": 61,
            "numPapers": 37,
            "cluster": "1",
            "visible": 1,
            "index": 933,
            "x": -215.0759947502594,
            "y": 217.00764152947795,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics",
                "DOI": "10.1109/tvcg.2019.2934631",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934631",
                "FirstPage": 1075,
                "LastPage": 1085,
                "PaperType": "J",
                "Abstract": "Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.",
                "AuthorNamesDeduped": "Yuxin Ma;Tiankai Xie;Jundong Li;Ross Maciejewski",
                "AuthorNames": "Yuxin Ma;Tiankai Xie;Jundong Li;Ross Maciejewski",
                "AuthorAffiliation": "School of Computing, Informatics & Decision Systems Engineering, Arizona State University;School of Computing, Informatics & Decision Systems Engineering, Arizona State University;Department of Electrical and Computer Engineering, University of Virginia;School of Computing, Informatics & Decision Systems Engineering, Arizona State University",
                "InternalReferences": "0.1109/tvcg.2014.2346660;10.1109/tvcg.2014.2346594;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/vast.2017.8585720;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2017.2744378;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2014.2346578;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2014.2346574;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/vast.2011.6102453;10.1109/tvcg.2018.2864504;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475",
                "AuthorKeywords": "Adversarial machine learning,data poisoning,visual analytics",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 2113,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 606,
                "i": [
                    606
                ]
            }
        },
        {
            "name": "Mi Feng",
            "value": 78,
            "numPapers": 36,
            "cluster": "5",
            "visible": 1,
            "index": 934,
            "x": 12.01021704918891,
            "y": -305.45990683955785,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "HindSight: Encouraging Exploration through Direct Encoding of Personal Interaction History",
                "DOI": "10.1109/tvcg.2016.2599058",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599058",
                "FirstPage": 351,
                "LastPage": 360,
                "PaperType": "J",
                "Abstract": "Physical and digital objects often leave markers of our use. Website links turn purple after we visit them, for example, showing us information we have yet to explore. These “footprints” of interaction offer substantial benefits in information saturated environments - they enable us to easily revisit old information, systematically explore new information, and quickly resume tasks after interruption. While applying these design principles have been successful in HCI contexts, direct encodings of personal interaction history have received scarce attention in data visualization. One reason is that there is little guidance for integrating history into visualizations where many visual channels are already occupied by data. More importantly, there is not firm evidence that making users aware of their interaction history results in benefits with regards to exploration or insights. Following these observations, we propose HindSight - an umbrella term for the design space of representing interaction history directly in existing data visualizations. In this paper, we examine the value of HindSight principles by augmenting existing visualizations with visual indicators of user interaction history (e.g. How the Recession Shaped the Economy in 255 Charts, NYTimes). In controlled experiments of over 400 participants, we found that HindSight designs generally encouraged people to visit more data and recall different insights after interaction. The results of our experiments suggest that simple additions to visualizations can make users aware of their interaction history, and that these additions significantly impact users' exploration and insights.",
                "AuthorNamesDeduped": "Mi Feng;Cheng Deng;Evan M. Peck;Lane Harrison",
                "AuthorNames": "Mi Feng;Cheng Deng;Evan M. Peck;Lane Harrison",
                "AuthorAffiliation": "Worcester Polytechnic Institute;Worcester Polytechnic Institute;Bucknell University;Worcester Polytechnic Institute",
                "InternalReferences": "0.1109/visual.2002.1183791;10.1109/visual.2005.1532788;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.137;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.109",
                "AuthorKeywords": "History;Visualization;Interaction",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 955,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 914,
                "i": [
                    914
                ]
            }
        },
        {
            "name": "Evan M. Peck",
            "value": 95,
            "numPapers": 27,
            "cluster": "5",
            "visible": 1,
            "index": 935,
            "x": 197.58487420932065,
            "y": 233.47423301873576,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "HindSight: Encouraging Exploration through Direct Encoding of Personal Interaction History",
                "DOI": "10.1109/tvcg.2016.2599058",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599058",
                "FirstPage": 351,
                "LastPage": 360,
                "PaperType": "J",
                "Abstract": "Physical and digital objects often leave markers of our use. Website links turn purple after we visit them, for example, showing us information we have yet to explore. These “footprints” of interaction offer substantial benefits in information saturated environments - they enable us to easily revisit old information, systematically explore new information, and quickly resume tasks after interruption. While applying these design principles have been successful in HCI contexts, direct encodings of personal interaction history have received scarce attention in data visualization. One reason is that there is little guidance for integrating history into visualizations where many visual channels are already occupied by data. More importantly, there is not firm evidence that making users aware of their interaction history results in benefits with regards to exploration or insights. Following these observations, we propose HindSight - an umbrella term for the design space of representing interaction history directly in existing data visualizations. In this paper, we examine the value of HindSight principles by augmenting existing visualizations with visual indicators of user interaction history (e.g. How the Recession Shaped the Economy in 255 Charts, NYTimes). In controlled experiments of over 400 participants, we found that HindSight designs generally encouraged people to visit more data and recall different insights after interaction. The results of our experiments suggest that simple additions to visualizations can make users aware of their interaction history, and that these additions significantly impact users' exploration and insights.",
                "AuthorNamesDeduped": "Mi Feng;Cheng Deng;Evan M. Peck;Lane Harrison",
                "AuthorNames": "Mi Feng;Cheng Deng;Evan M. Peck;Lane Harrison",
                "AuthorAffiliation": "Worcester Polytechnic Institute;Worcester Polytechnic Institute;Bucknell University;Worcester Polytechnic Institute",
                "InternalReferences": "0.1109/visual.2002.1183791;10.1109/visual.2005.1532788;10.1109/tvcg.2014.2346452;10.1109/tvcg.2008.137;10.1109/tvcg.2014.2346424;10.1109/tvcg.2007.70589;10.1109/tvcg.2008.109",
                "AuthorKeywords": "History;Visualization;Interaction",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 955,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 914,
                "i": [
                    914
                ]
            }
        },
        {
            "name": "Nicolas Heulot",
            "value": 67,
            "numPapers": 14,
            "cluster": "3",
            "visible": 1,
            "index": 936,
            "x": -303.56463269067103,
            "y": -38.710641681299506,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Time Curves: Folding Time to Visualize Patterns of Temporal Evolution in Data",
                "DOI": "10.1109/tvcg.2015.2467851",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467851",
                "FirstPage": 559,
                "LastPage": 568,
                "PaperType": "J",
                "Abstract": "We introduce time curves as a general approach for visualizing patterns of evolution in temporal data. Examples of such patterns include slow and regular progressions, large sudden changes, and reversals to previous states. These patterns can be of interest in a range of domains, such as collaborative document editing, dynamic network analysis, and video analysis. Time curves employ the metaphor of folding a timeline visualization into itself so as to bring similar time points close to each other. This metaphor can be applied to any dataset where a similarity metric between temporal snapshots can be defined, thus it is largely datatype-agnostic. We illustrate how time curves can visually reveal informative patterns in a range of different datasets.",
                "AuthorNamesDeduped": "Benjamin Bach;Conglei Shi;Nicolas Heulot;Tara M. Madhyastha;Thomas J. Grabowski;Pierre Dragicevic",
                "AuthorNames": "Benjamin Bach;Conglei Shi;Nicolas Heulot;Tara Madhyastha;Tom Grabowski;Pierre Dragicevic",
                "AuthorAffiliation": "Microsoft Research-Inria Joint Centre;IBM T.J, Watson Research Center, Yorktown Height, NY;IRT SystemX;Department of Radiology, University of Washington;Department of Radiology and Neurology, University of Washington;Inria",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2007.70535;10.1109/infvis.2004.1;10.1109/tvcg.2014.2346325;10.1109/tvcg.2013.192;10.1109/infvis.2002.1173155",
                "AuthorKeywords": "Temporal data visualization, information visualization, multidimensional scaling",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 128,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 3408,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1006,
                "i": [
                    1006
                ]
            }
        },
        {
            "name": "Mingming Fan 0001",
            "value": 34,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 937,
            "x": 250.12120894161234,
            "y": -176.60515518406115,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "CoUX: Collaborative Visual Analysis of Think-Aloud Usability Test Videos for Digital Interfaces",
                "DOI": "10.1109/tvcg.2021.3114822",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114822",
                "FirstPage": 643,
                "LastPage": 653,
                "PaperType": "J",
                "Abstract": "Reviewing a think-aloud video is both time-consuming and demanding as it requires UX (user experience) professionals to attend to many behavioral signals of the user in the video. Moreover, challenges arise when multiple UX professionals need to collaborate to reduce bias and errors. We propose a collaborative visual analytics tool, CoUX, to facilitate UX evaluators collectively reviewing think-aloud usability test videos of digital interfaces. CoUX seamlessly supports usability problem identification, annotation, and discussion in an integrated environment. To ease the discovery of usability problems, CoUX visualizes a set of problem-indicators based on acoustic, textual, and visual features extracted from the video and audio of a think-aloud session with machine learning. CoUX further enables collaboration amongst UX evaluators for logging, commenting, and consolidating the discovered problems with a chatbox-like user interface. We designed CoUX based on a formative study with two UX experts and insights derived from the literature. We conducted a user study with six pairs of UX practitioners on collaborative think-aloud video analysis tasks. The results indicate that CoUX is useful and effective in facilitating both problem identification and collaborative teamwork. We provide insights into how different features of CoUX were used to support both independent analysis and collaboration. Furthermore, our work highlights opportunities to improve collaborative usability test video analysis.",
                "AuthorNamesDeduped": "Ehsan Jahangirzadeh Soure;Emily Kuang;Mingming Fan 0001;Jian Zhao 0010",
                "AuthorNames": "Ehsan Jahangirzadeh Soure;Emily Kuang;Mingming Fan;Jian Zhao",
                "AuthorAffiliation": "University of Waterloo, Canada;Rochester Institute of Technology, USA;Rochester Institute of Technology, USA and Hong Kong University of Science and Technology, China;University of Waterloo, Canada",
                "InternalReferences": "0.1109/tvcg.2019.2934797;10.1109/tvcg.2014.2346573;10.1109/vast.2010.5653598;10.1109/infvis.2005.1532152;10.1109/vast.2008.4677358;10.1109/tvcg.2016.2598466;10.1109/tvcg.2007.70577;10.1109/tvcg.2016.2598543;10.1109/tvcg.2017.2745279;10.1109/tvcg.2015.2467871",
                "AuthorKeywords": "User experience,usability problems,think-aloud,video analysis,machine learning,visual analytics,collaboration",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 1100,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 301,
                "i": [
                    301
                ]
            }
        },
        {
            "name": "Michael Glueck",
            "value": 109,
            "numPapers": 57,
            "cluster": "5",
            "visible": 1,
            "index": 938,
            "x": -65.17125719961726,
            "y": 299.3371130264694,
            "vy": 0,
            "vx": 0,
            "r": 1.125503742084053,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "PhenoStacks: Cross-Sectional Cohort Phenotype Comparison Visualizations",
                "DOI": "10.1109/tvcg.2016.2598469",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598469",
                "FirstPage": 191,
                "LastPage": 200,
                "PaperType": "J",
                "Abstract": "Cross-sectional phenotype studies are used by genetics researchers to better understand how phenotypes vary across patients with genetic diseases, both within and between cohorts. Analyses within cohorts identify patterns between phenotypes and patients (e.g., co-occurrence) and isolate special cases (e.g., potential outliers). Comparing the variation of phenotypes between two cohorts can help distinguish how different factors affect disease manifestation (e.g., causal genes, age of onset, etc.). PhenoStacks is a novel visual analytics tool that supports the exploration of phenotype variation within and between cross-sectional patient cohorts. By leveraging the semantic hierarchy of the Human Phenotype Ontology, phenotypes are presented in context, can be grouped and clustered, and are summarized via overviews to support the exploration of phenotype distributions. The design of PhenoStacks was motivated by formative interviews with genetics researchers: we distil high-level tasks, present an algorithm for simplifying ontology topologies for visualization, and report the results of a deployment evaluation with four expert genetics researchers. The results suggest that PhenoStacks can help identify phenotype patterns, investigate data quality issues, and inform data collection design.",
                "AuthorNamesDeduped": "Michael Glueck;Alina Gvozdik;Fanny Chevalier;Azam Khan;Michael Brudno;Daniel Wigdor",
                "AuthorNames": "Michael Glueck;Alina Gvozdik;Fanny Chevalier;Azam Khan;Michael Brudno;Daniel Wigdor",
                "AuthorAffiliation": "Autodesk Research, University of Toronto;University of Toronto;Inria;Autodesk Research;Hospital for Sick Children, University of Toronto, Toronto;University of Toronto",
                "InternalReferences": "0.1109/tvcg.2014.2346248;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346279;10.1109/tvcg.2009.167;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467622;10.1109/tvcg.2015.2467733;10.1109/tvcg.2009.116",
                "AuthorKeywords": "Cross-sectional cohort analysis;Phenotypes;Human Phenotype Ontology (HPO)",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 872,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 991,
                "i": [
                    991
                ]
            }
        },
        {
            "name": "Azam Khan",
            "value": 109,
            "numPapers": 57,
            "cluster": "5",
            "visible": 1,
            "index": 939,
            "x": -154.22615845977197,
            "y": -264.8854319262222,
            "vy": 0,
            "vx": 0,
            "r": 1.125503742084053,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "PhenoStacks: Cross-Sectional Cohort Phenotype Comparison Visualizations",
                "DOI": "10.1109/tvcg.2016.2598469",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598469",
                "FirstPage": 191,
                "LastPage": 200,
                "PaperType": "J",
                "Abstract": "Cross-sectional phenotype studies are used by genetics researchers to better understand how phenotypes vary across patients with genetic diseases, both within and between cohorts. Analyses within cohorts identify patterns between phenotypes and patients (e.g., co-occurrence) and isolate special cases (e.g., potential outliers). Comparing the variation of phenotypes between two cohorts can help distinguish how different factors affect disease manifestation (e.g., causal genes, age of onset, etc.). PhenoStacks is a novel visual analytics tool that supports the exploration of phenotype variation within and between cross-sectional patient cohorts. By leveraging the semantic hierarchy of the Human Phenotype Ontology, phenotypes are presented in context, can be grouped and clustered, and are summarized via overviews to support the exploration of phenotype distributions. The design of PhenoStacks was motivated by formative interviews with genetics researchers: we distil high-level tasks, present an algorithm for simplifying ontology topologies for visualization, and report the results of a deployment evaluation with four expert genetics researchers. The results suggest that PhenoStacks can help identify phenotype patterns, investigate data quality issues, and inform data collection design.",
                "AuthorNamesDeduped": "Michael Glueck;Alina Gvozdik;Fanny Chevalier;Azam Khan;Michael Brudno;Daniel Wigdor",
                "AuthorNames": "Michael Glueck;Alina Gvozdik;Fanny Chevalier;Azam Khan;Michael Brudno;Daniel Wigdor",
                "AuthorAffiliation": "Autodesk Research, University of Toronto;University of Toronto;Inria;Autodesk Research;Hospital for Sick Children, University of Toronto, Toronto;University of Toronto",
                "InternalReferences": "0.1109/tvcg.2014.2346248;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346279;10.1109/tvcg.2009.167;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467622;10.1109/tvcg.2015.2467733;10.1109/tvcg.2009.116",
                "AuthorKeywords": "Cross-sectional cohort analysis;Phenotypes;Human Phenotype Ontology (HPO)",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 872,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 991,
                "i": [
                    991
                ]
            }
        },
        {
            "name": "Hanseung Lee",
            "value": 302,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 940,
            "x": 292.8048136324252,
            "y": 91.18849222177526,
            "vy": 0,
            "vx": 0,
            "r": 1.3477259643062751,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Temporal Event Sequence Simplification",
                "DOI": "10.1109/tvcg.2013.200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.200",
                "FirstPage": 2227,
                "LastPage": 2236,
                "PaperType": "J",
                "Abstract": "Electronic Health Records (EHRs) have emerged as a cost-effective data source for conducting medical research. The difficulty in using EHRs for research purposes, however, is that both patient selection and record analysis must be conducted across very large, and typically very noisy datasets. Our previous work introduced EventFlow, a visualization tool that transforms an entire dataset of temporal event records into an aggregated display, allowing researchers to analyze population-level patterns and trends. As datasets become larger and more varied, however, it becomes increasingly difficult to provide a succinct, summarizing display. This paper presents a series of user-driven data simplifications that allow researchers to pare event records down to their core elements. Furthermore, we present a novel metric for measuring visual complexity, and a language for codifying disjoint strategies into an overarching simplification framework. These simplifications were used by real-world researchers to gain new and valuable insights from initially overwhelming datasets.",
                "AuthorNamesDeduped": "Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman",
                "AuthorNames": "Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman",
                "AuthorAffiliation": "University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA",
                "InternalReferences": "0.1109/tvcg.2009.117;10.1109/tvcg.2012.213;10.1109/vast.2010.5652890",
                "AuthorKeywords": "Event sequences, simplification, electronic heath records, temporal query",
                "AminerCitationCount": 318,
                "CitationCountCrossRef": 193,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 2567,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1367,
                "i": [
                    1367
                ]
            }
        },
        {
            "name": "Smiti Kaul",
            "value": 8,
            "numPapers": 27,
            "cluster": "1",
            "visible": 1,
            "index": 941,
            "x": -277.64958819972117,
            "y": 130.61663819179114,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Improving Visualization Interpretation Using Counterfactuals",
                "DOI": "10.1109/tvcg.2021.3114779",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114779",
                "FirstPage": 998,
                "LastPage": 1008,
                "PaperType": "J",
                "Abstract": "Complex, high-dimensional data is used in a wide range of domains to explore problems and make decisions. Analysis of high-dimensional data, however, is vulnerable to the hidden influence of confounding variables, especially as users apply ad hoc filtering operations to visualize only specific subsets of an entire dataset. Thus, visual data-driven analysis can mislead users and encourage mistaken assumptions about causality or the strength of relationships between features. This work introduces a novel visual approach designed to reveal the presence of confounding variables via counterfactual possibilities during visual data analysis. It is implemented in CoFact, an interactive visualization prototype that determines and visualizes <i>counterfactual subsets</i> to better support user exploration of feature relationships. Using publicly available datasets, we conducted a controlled user study to demonstrate the effectiveness of our approach; the results indicate that users exposed to counterfactual visualizations formed more careful judgments about feature-to-outcome relationships.",
                "AuthorNamesDeduped": "Smiti Kaul;David Borland;Nan Cao 0001;David Gotz",
                "AuthorNames": "Smiti Kaul;David Borland;Nan Cao;David Gotz",
                "AuthorAffiliation": "Dept. of Computer Science, University of North Carolina at Chapel Hill, USA;RENCI, University of North Carolina at Chapel Hill, USA;Intelligent Big Data Visualization Lab, Tongji University, China;School of Information and Library Science, University of North Carolina at Chapel Hill, USA",
                "InternalReferences": "0.1109/tvcg.2020.3030342;10.1109/tvcg.2015.2467552;10.1109/vast.2010.5652443;10.1109/tvcg.2010.209;10.1109/vast.2007.4389013;10.1109/infvis.2003.1249025;10.1109/visual.1997.663916;10.1109/vast.2010.5652392;10.1109/tvcg.2020.3030465;10.1109/visual.1991.175815;10.1109/tvcg.2007.70528;10.1109/visual.1990.146386;10.1109/tvcg.2016.2598831;10.1109/vast.2011.6102448;10.1109/tvcg.2017.2744158;10.1109/tvcg.2015.2467931;10.1109/tvcg.2019.2934619;10.1109/tvcg.2007.70515;10.1109/tvcg.2019.2934629",
                "AuthorKeywords": "visualization,counterfactuals,human-computer interaction,human-centered computing,empirical study",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 304,
                "i": [
                    304
                ]
            }
        },
        {
            "name": "David Borland",
            "value": 47,
            "numPapers": 56,
            "cluster": "1",
            "visible": 1,
            "index": 942,
            "x": 116.56174686355115,
            "y": -284.0129559863729,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analysis of High-Dimensional Event Sequence Data via Dynamic Hierarchical Aggregation",
                "DOI": "10.1109/tvcg.2019.2934661",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934661",
                "FirstPage": 440,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "Temporal event data are collected across a broad range of domains, and a variety of visual analytics techniques have been developed to empower analysts working with this form of data. These techniques generally display aggregate statistics computed over sets of event sequences that share common patterns. Such techniques are often hindered, however, by the high-dimensionality of many real-world event sequence datasets which can prevent effective aggregation. A common coping strategy for this challenge is to group event types together prior to visualization, as a pre-process, so that each group can be represented within an analysis as a single event type. However, computing these event groupings as a pre-process also places significant constraints on the analysis. This paper presents a new visual analytics approach for dynamic hierarchical dimension aggregation. The approach leverages a predefined hierarchy of dimensions to computationally quantify the informativeness, with respect to a measure of interest, of alternative levels of grouping within the hierarchy at runtime. This information is then interactively visualized, enabling users to dynamically explore the hierarchy to select the most appropriate level of grouping to use at any individual step within an analysis. Key contributions include an algorithm for interactively determining the most informative set of event groupings for a specific analysis context, and a scented scatter-plus-focus visualization design with an optimization-based layout algorithm that supports interactive hierarchical exploration of alternative event type groupings. We apply these techniques to high-dimensional event sequence data from the medical domain and report findings from domain expert interviews.",
                "AuthorNamesDeduped": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorNames": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorAffiliation": "School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Biostatistics, University of North Carolina, Chapel Hill;School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Computer Science, University of North Carolina, Chapel Hill;RENCI, University of North Carolina, Chapel Hill",
                "InternalReferences": "0.1109/tvcg.2019.2934209;10.1109/tvcg.2017.2745278;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2018.2864886;10.1109/tvcg.2013.200;10.1109/vast.2011.6102443;10.1109/infvis.2005.1532152;10.1109/infvis.2000.885091;10.1109/tvcg.2017.2744686;10.1109/tvcg.2009.108;10.1109/tvcg.2007.70589;10.1109/vast.2014.7042487;10.1109/tvcg.2012.238",
                "AuthorKeywords": "Temporal event sequence visualization,visual analytics,hierarchical aggregation,medical informatics",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1035,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 627,
                "i": [
                    627
                ]
            }
        },
        {
            "name": "Tali Mazor",
            "value": 26,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 943,
            "x": 105.95515564369683,
            "y": 288.31147218333155,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "ThreadStates: State-based Visual Analysis of Disease Progression",
                "DOI": "10.1109/tvcg.2021.3114840",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114840",
                "FirstPage": 238,
                "LastPage": 247,
                "PaperType": "J",
                "Abstract": "A growing number of longitudinal cohort studies are generating data with extensive patient observations across multiple timepoints. Such data offers promising opportunities to better understand the progression of diseases. However, these observations are usually treated as general events in existing visual analysis tools. As a result, their capabilities in modeling disease progression are not fully utilized. To fill this gap, we designed and implemented ThreadStates, an interactive visual analytics tool for the exploration of longitudinal patient cohort data. The focus of ThreadStates is to identify the states of disease progression by learning from observation data in a human-in-the-loop manner. We propose a novel Glyph Matrix design and combine it with a scatter plot to enable seamless identification, observation, and refinement of states. The disease progression patterns are then revealed in terms of state transitions using Sankey-based visualizations. We employ sequence clustering techniques to find patient groups with distinctive progression patterns, and to reveal the association between disease progression and patient-level features. The design and development were driven by a requirement analysis and iteratively refined based on feedback from domain experts over the course of a 10-month design study. Case studies and expert interviews demonstrate that ThreadStates can successively summarize disease states, reveal disease progression, and compare patient groups.",
                "AuthorNamesDeduped": "Qianwen Wang;Tali Mazor;Theresa Anisja Harbig;Ethan Cerami;Nils Gehlenborg",
                "AuthorNames": "Qianwen Wang;Tali Mazor;Theresa A Harbig;Ethan Cerami;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard University, USA;Dana-Farber Cancer Institute, USA;University of Tübingen, Germany;Dana-Farber Cancer Institute, USA;Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346575;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2014.2346682;10.1109/tvcg.2013.173;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2020.3030465;10.1109/tvcg.2011.179;10.1109/tvcg.2013.200;10.1109/tvcg.2012.225;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Disease Progression,State Identification,Sequence Visualization",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 830,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 305,
                "i": [
                    305
                ]
            }
        },
        {
            "name": "Theresa Anisja Harbig",
            "value": 26,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 944,
            "x": -273.0242074301198,
            "y": -141.09494022520764,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "ThreadStates: State-based Visual Analysis of Disease Progression",
                "DOI": "10.1109/tvcg.2021.3114840",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114840",
                "FirstPage": 238,
                "LastPage": 247,
                "PaperType": "J",
                "Abstract": "A growing number of longitudinal cohort studies are generating data with extensive patient observations across multiple timepoints. Such data offers promising opportunities to better understand the progression of diseases. However, these observations are usually treated as general events in existing visual analysis tools. As a result, their capabilities in modeling disease progression are not fully utilized. To fill this gap, we designed and implemented ThreadStates, an interactive visual analytics tool for the exploration of longitudinal patient cohort data. The focus of ThreadStates is to identify the states of disease progression by learning from observation data in a human-in-the-loop manner. We propose a novel Glyph Matrix design and combine it with a scatter plot to enable seamless identification, observation, and refinement of states. The disease progression patterns are then revealed in terms of state transitions using Sankey-based visualizations. We employ sequence clustering techniques to find patient groups with distinctive progression patterns, and to reveal the association between disease progression and patient-level features. The design and development were driven by a requirement analysis and iteratively refined based on feedback from domain experts over the course of a 10-month design study. Case studies and expert interviews demonstrate that ThreadStates can successively summarize disease states, reveal disease progression, and compare patient groups.",
                "AuthorNamesDeduped": "Qianwen Wang;Tali Mazor;Theresa Anisja Harbig;Ethan Cerami;Nils Gehlenborg",
                "AuthorNames": "Qianwen Wang;Tali Mazor;Theresa A Harbig;Ethan Cerami;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard University, USA;Dana-Farber Cancer Institute, USA;University of Tübingen, Germany;Dana-Farber Cancer Institute, USA;Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346575;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2014.2346682;10.1109/tvcg.2013.173;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2020.3030465;10.1109/tvcg.2011.179;10.1109/tvcg.2013.200;10.1109/tvcg.2012.225;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Disease Progression,State Identification,Sequence Visualization",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 830,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 305,
                "i": [
                    305
                ]
            }
        },
        {
            "name": "Ethan Cerami",
            "value": 26,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 945,
            "x": 296.7848036890772,
            "y": -80.42872807172775,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "ThreadStates: State-based Visual Analysis of Disease Progression",
                "DOI": "10.1109/tvcg.2021.3114840",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114840",
                "FirstPage": 238,
                "LastPage": 247,
                "PaperType": "J",
                "Abstract": "A growing number of longitudinal cohort studies are generating data with extensive patient observations across multiple timepoints. Such data offers promising opportunities to better understand the progression of diseases. However, these observations are usually treated as general events in existing visual analysis tools. As a result, their capabilities in modeling disease progression are not fully utilized. To fill this gap, we designed and implemented ThreadStates, an interactive visual analytics tool for the exploration of longitudinal patient cohort data. The focus of ThreadStates is to identify the states of disease progression by learning from observation data in a human-in-the-loop manner. We propose a novel Glyph Matrix design and combine it with a scatter plot to enable seamless identification, observation, and refinement of states. The disease progression patterns are then revealed in terms of state transitions using Sankey-based visualizations. We employ sequence clustering techniques to find patient groups with distinctive progression patterns, and to reveal the association between disease progression and patient-level features. The design and development were driven by a requirement analysis and iteratively refined based on feedback from domain experts over the course of a 10-month design study. Case studies and expert interviews demonstrate that ThreadStates can successively summarize disease states, reveal disease progression, and compare patient groups.",
                "AuthorNamesDeduped": "Qianwen Wang;Tali Mazor;Theresa Anisja Harbig;Ethan Cerami;Nils Gehlenborg",
                "AuthorNames": "Qianwen Wang;Tali Mazor;Theresa A Harbig;Ethan Cerami;Nils Gehlenborg",
                "AuthorAffiliation": "Harvard University, USA;Dana-Farber Cancer Institute, USA;University of Tübingen, Germany;Dana-Farber Cancer Institute, USA;Harvard University, USA",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2014.2346575;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2014.2346682;10.1109/tvcg.2013.173;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2020.3030465;10.1109/tvcg.2011.179;10.1109/tvcg.2013.200;10.1109/tvcg.2012.225;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Disease Progression,State Identification,Sequence Visualization",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 830,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 305,
                "i": [
                    305
                ]
            }
        },
        {
            "name": "He Liu",
            "value": 104,
            "numPapers": 9,
            "cluster": "3",
            "visible": 1,
            "index": 946,
            "x": -164.59802639880158,
            "y": 259.9182365776196,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Visual analysis of route diversity",
                "DOI": "10.1109/vast.2011.6102455",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102455",
                "FirstPage": 171,
                "LastPage": 180,
                "PaperType": "C",
                "Abstract": "Route suggestion is an important feature of GPS navigation systems. Recently, Microsoft T-drive has been enabled to suggest routes chosen by experienced taxi drivers for given source/destination pairs in given time periods, which often take less time than the routes calculated according to distance. However, in real environments, taxi drivers may use different routes to reach the same destination, which we call route diversity. In this paper we first propose a trajectory visualization method that examines the regions where the diversity exists and then develop several novel visualization techniques to display the high dimensional attributes and statistics associated with different routes to help users analyze diversity patterns. Our techniques have been applied to the real trajectory data of thousands of taxis and some interesting findings about route diversity have been obtained. We further demonstrate that our system can be used not only to suggest better routes for drivers but also to analyze traffic bottlenecks for transportation management.",
                "AuthorNamesDeduped": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorNames": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorAffiliation": "The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2008.149;10.1109/tvcg.2007.70570;10.1109/tvcg.2007.70574;10.1109/tvcg.2006.202;10.1109/vast.2009.5332593;10.1109/tvcg.2007.70561;10.1109/tvcg.2009.145;10.1109/tvcg.2010.180",
                "AuthorKeywords": null,
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 81,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1811,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1582,
                "i": [
                    1582
                ]
            }
        },
        {
            "name": "Yuan Gao",
            "value": 104,
            "numPapers": 9,
            "cluster": "3",
            "visible": 1,
            "index": 947,
            "x": -54.23140976330094,
            "y": -302.99992441432215,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Visual analysis of route diversity",
                "DOI": "10.1109/vast.2011.6102455",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102455",
                "FirstPage": 171,
                "LastPage": 180,
                "PaperType": "C",
                "Abstract": "Route suggestion is an important feature of GPS navigation systems. Recently, Microsoft T-drive has been enabled to suggest routes chosen by experienced taxi drivers for given source/destination pairs in given time periods, which often take less time than the routes calculated according to distance. However, in real environments, taxi drivers may use different routes to reach the same destination, which we call route diversity. In this paper we first propose a trajectory visualization method that examines the regions where the diversity exists and then develop several novel visualization techniques to display the high dimensional attributes and statistics associated with different routes to help users analyze diversity patterns. Our techniques have been applied to the real trajectory data of thousands of taxis and some interesting findings about route diversity have been obtained. We further demonstrate that our system can be used not only to suggest better routes for drivers but also to analyze traffic bottlenecks for transportation management.",
                "AuthorNamesDeduped": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorNames": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorAffiliation": "The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2008.149;10.1109/tvcg.2007.70570;10.1109/tvcg.2007.70574;10.1109/tvcg.2006.202;10.1109/vast.2009.5332593;10.1109/tvcg.2007.70561;10.1109/tvcg.2009.145;10.1109/tvcg.2010.180",
                "AuthorKeywords": null,
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 81,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1811,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1582,
                "i": [
                    1582
                ]
            }
        },
        {
            "name": "Lu Lu",
            "value": 104,
            "numPapers": 9,
            "cluster": "3",
            "visible": 1,
            "index": 948,
            "x": 244.79113709729702,
            "y": 186.88846727022067,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Visual analysis of route diversity",
                "DOI": "10.1109/vast.2011.6102455",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102455",
                "FirstPage": 171,
                "LastPage": 180,
                "PaperType": "C",
                "Abstract": "Route suggestion is an important feature of GPS navigation systems. Recently, Microsoft T-drive has been enabled to suggest routes chosen by experienced taxi drivers for given source/destination pairs in given time periods, which often take less time than the routes calculated according to distance. However, in real environments, taxi drivers may use different routes to reach the same destination, which we call route diversity. In this paper we first propose a trajectory visualization method that examines the regions where the diversity exists and then develop several novel visualization techniques to display the high dimensional attributes and statistics associated with different routes to help users analyze diversity patterns. Our techniques have been applied to the real trajectory data of thousands of taxis and some interesting findings about route diversity have been obtained. We further demonstrate that our system can be used not only to suggest better routes for drivers but also to analyze traffic bottlenecks for transportation management.",
                "AuthorNamesDeduped": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorNames": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorAffiliation": "The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2008.149;10.1109/tvcg.2007.70570;10.1109/tvcg.2007.70574;10.1109/tvcg.2006.202;10.1109/vast.2009.5332593;10.1109/tvcg.2007.70561;10.1109/tvcg.2009.145;10.1109/tvcg.2010.180",
                "AuthorKeywords": null,
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 81,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1811,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1582,
                "i": [
                    1582
                ]
            }
        },
        {
            "name": "Siyuan Liu",
            "value": 141,
            "numPapers": 13,
            "cluster": "1",
            "visible": 1,
            "index": 949,
            "x": -306.90436812476077,
            "y": 27.562816001659808,
            "vy": 0,
            "vx": 0,
            "r": 1.162348877374784,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Visual analysis of route diversity",
                "DOI": "10.1109/vast.2011.6102455",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102455",
                "FirstPage": 171,
                "LastPage": 180,
                "PaperType": "C",
                "Abstract": "Route suggestion is an important feature of GPS navigation systems. Recently, Microsoft T-drive has been enabled to suggest routes chosen by experienced taxi drivers for given source/destination pairs in given time periods, which often take less time than the routes calculated according to distance. However, in real environments, taxi drivers may use different routes to reach the same destination, which we call route diversity. In this paper we first propose a trajectory visualization method that examines the regions where the diversity exists and then develop several novel visualization techniques to display the high dimensional attributes and statistics associated with different routes to help users analyze diversity patterns. Our techniques have been applied to the real trajectory data of thousands of taxis and some interesting findings about route diversity have been obtained. We further demonstrate that our system can be used not only to suggest better routes for drivers but also to analyze traffic bottlenecks for transportation management.",
                "AuthorNamesDeduped": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorNames": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorAffiliation": "The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2008.149;10.1109/tvcg.2007.70570;10.1109/tvcg.2007.70574;10.1109/tvcg.2006.202;10.1109/vast.2009.5332593;10.1109/tvcg.2007.70561;10.1109/tvcg.2009.145;10.1109/tvcg.2010.180",
                "AuthorKeywords": null,
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 81,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1811,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1582,
                "i": [
                    1582
                ]
            }
        },
        {
            "name": "Lionel M. Ni",
            "value": 148,
            "numPapers": 32,
            "cluster": "3",
            "visible": 1,
            "index": 950,
            "x": 207.79265069784836,
            "y": -227.7547240255666,
            "vy": 0,
            "vx": 0,
            "r": 1.1704087507196315,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Visual analysis of route diversity",
                "DOI": "10.1109/vast.2011.6102455",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102455",
                "FirstPage": 171,
                "LastPage": 180,
                "PaperType": "C",
                "Abstract": "Route suggestion is an important feature of GPS navigation systems. Recently, Microsoft T-drive has been enabled to suggest routes chosen by experienced taxi drivers for given source/destination pairs in given time periods, which often take less time than the routes calculated according to distance. However, in real environments, taxi drivers may use different routes to reach the same destination, which we call route diversity. In this paper we first propose a trajectory visualization method that examines the regions where the diversity exists and then develop several novel visualization techniques to display the high dimensional attributes and statistics associated with different routes to help users analyze diversity patterns. Our techniques have been applied to the real trajectory data of thousands of taxis and some interesting findings about route diversity have been obtained. We further demonstrate that our system can be used not only to suggest better routes for drivers but also to analyze traffic bottlenecks for transportation management.",
                "AuthorNamesDeduped": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorNames": "He Liu;Yuan Gao;Lu Lu;Siyuan Liu;Huamin Qu;Lionel M. Ni",
                "AuthorAffiliation": "The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China;The Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2008.149;10.1109/tvcg.2007.70570;10.1109/tvcg.2007.70574;10.1109/tvcg.2006.202;10.1109/vast.2009.5332593;10.1109/tvcg.2007.70561;10.1109/tvcg.2009.145;10.1109/tvcg.2010.180",
                "AuthorKeywords": null,
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 81,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1811,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1582,
                "i": [
                    1582
                ]
            }
        },
        {
            "name": "Junping Zhang",
            "value": 136,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 951,
            "x": 0.6266011902475749,
            "y": 308.4632998769033,
            "vy": 0,
            "vx": 0,
            "r": 1.1565918249856073,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Visual Traffic Jam Analysis Based on Trajectory Data",
                "DOI": "10.1109/tvcg.2013.228",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.228",
                "FirstPage": 2159,
                "LastPage": 2168,
                "PaperType": "J",
                "Abstract": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.",
                "AuthorNamesDeduped": "Zuchao Wang;Min Lu 0002;Xiaoru Yuan;Junping Zhang;Huub van de Wetering",
                "AuthorNames": "Zuchao Wang;Min Lu;Xiaoru Yuan;Junping Zhang;Huub van de Wetering",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), Peking University, China;Shanghai Key Laboratory of Intelligent Information Processing, and School of Computer Science, Fudan University, China and Key Laboratory of Machine Perception (Ministry of Education), Peking University;Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China;Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL",
                "InternalReferences": "0.1109/visual.1997.663866;10.1109/vast.2011.6102454;10.1109/tvcg.2009.145;10.1109/vast.2012.6400556;10.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/tvcg.2011.202;10.1109/vast.2012.6400553;10.1109/tvcg.2012.265;10.1109/tvcg.2011.181;10.1109/vast.2009.5332593;10.1109/tvcg.2008.125;10.1109/vast.2011.6102455;10.1109/vast.2010.5653580",
                "AuthorKeywords": "Traffic visualization, traffic jam propagation",
                "AminerCitationCount": 401,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 7486,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1366,
                "i": [
                    1366
                ]
            }
        },
        {
            "name": "Dennis Thom",
            "value": 148,
            "numPapers": 27,
            "cluster": "1",
            "visible": 1,
            "index": 952,
            "x": -208.93570775814206,
            "y": -227.1472430464527,
            "vy": 0,
            "vx": 0,
            "r": 1.1704087507196315,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "ScatterBlogs2: Real-Time Monitoring of Microblog Messages through User-Guided filtering",
                "DOI": "10.1109/tvcg.2013.186",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.186",
                "FirstPage": 2022,
                "LastPage": 2031,
                "PaperType": "J",
                "Abstract": "The number of microblog posts published daily has reached a level that hampers the effective retrieval of relevant messages, and the amount of information conveyed through services such as Twitter is still increasing. Analysts require new methods for monitoring their topic of interest, dealing with the data volume and its dynamic nature. It is of particular importance to provide situational awareness for decision making in time-critical tasks. Current tools for monitoring microblogs typically filter messages based on user-defined keyword queries and metadata restrictions. Used on their own, such methods can have drawbacks with respect to filter accuracy and adaptability to changes in trends and topic structure. We suggest ScatterBlogs2, a new approach to let analysts build task-tailored message filters in an interactive and visual manner based on recorded messages of well-understood previous events. These message filters include supervised classification and query creation backed by the statistical distribution of terms and their co-occurrences. The created filter methods can be orchestrated and adapted afterwards for interactive, visual real-time monitoring and analysis of microblog feeds. We demonstrate the feasibility of our approach for analyzing the Twitter stream in emergency management scenarios.",
                "AuthorNamesDeduped": "Harald Bosch;Dennis Thom;Florian Heimerl;Edwin Puttmann;Steffen Koch 0001;Robert Krüger;Michael Wörner 0001;Thomas Ertl",
                "AuthorNames": "Harald Bosch;Dennis Thom;Florian Heimerl;Edwin Püttmann;Steffen Koch;Robert Krüger;Michael Wörner;Thomas Ertl",
                "AuthorAffiliation": "Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/visual.2005.1532781;10.1109/vast.2012.6400492;10.1109/vast.2012.6400557;10.1109/tvcg.2012.291;10.1109/tvcg.2012.277;10.1109/vast.2012.6400485;10.1109/vast.2007.4389013;10.1109/vast.2007.4389006;10.1109/vast.2011.6102456;10.1109/tvcg.2008.175;10.1109/infvis.2004.37;10.1109/vast.2011.6102488",
                "AuthorKeywords": "Microblog analysis, Twitter, text analytics, social media monitoring, live monitoring, visual analytics, information visualization, filter construction, query construction, text classification",
                "AminerCitationCount": 178,
                "CitationCountCrossRef": 101,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 1656,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1369,
                "i": [
                    1369
                ]
            }
        },
        {
            "name": "Edwin Puttmann",
            "value": 69,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 953,
            "x": 307.65982062928214,
            "y": 26.371097253583255,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "ScatterBlogs2: Real-Time Monitoring of Microblog Messages through User-Guided filtering",
                "DOI": "10.1109/tvcg.2013.186",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.186",
                "FirstPage": 2022,
                "LastPage": 2031,
                "PaperType": "J",
                "Abstract": "The number of microblog posts published daily has reached a level that hampers the effective retrieval of relevant messages, and the amount of information conveyed through services such as Twitter is still increasing. Analysts require new methods for monitoring their topic of interest, dealing with the data volume and its dynamic nature. It is of particular importance to provide situational awareness for decision making in time-critical tasks. Current tools for monitoring microblogs typically filter messages based on user-defined keyword queries and metadata restrictions. Used on their own, such methods can have drawbacks with respect to filter accuracy and adaptability to changes in trends and topic structure. We suggest ScatterBlogs2, a new approach to let analysts build task-tailored message filters in an interactive and visual manner based on recorded messages of well-understood previous events. These message filters include supervised classification and query creation backed by the statistical distribution of terms and their co-occurrences. The created filter methods can be orchestrated and adapted afterwards for interactive, visual real-time monitoring and analysis of microblog feeds. We demonstrate the feasibility of our approach for analyzing the Twitter stream in emergency management scenarios.",
                "AuthorNamesDeduped": "Harald Bosch;Dennis Thom;Florian Heimerl;Edwin Puttmann;Steffen Koch 0001;Robert Krüger;Michael Wörner 0001;Thomas Ertl",
                "AuthorNames": "Harald Bosch;Dennis Thom;Florian Heimerl;Edwin Püttmann;Steffen Koch;Robert Krüger;Michael Wörner;Thomas Ertl",
                "AuthorAffiliation": "Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany;Institute for Visualization and Interactive Systems, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/visual.2005.1532781;10.1109/vast.2012.6400492;10.1109/vast.2012.6400557;10.1109/tvcg.2012.291;10.1109/tvcg.2012.277;10.1109/vast.2012.6400485;10.1109/vast.2007.4389013;10.1109/vast.2007.4389006;10.1109/vast.2011.6102456;10.1109/tvcg.2008.175;10.1109/infvis.2004.37;10.1109/vast.2011.6102488",
                "AuthorKeywords": "Microblog analysis, Twitter, text analytics, social media monitoring, live monitoring, visual analytics, information visualization, filter construction, query construction, text classification",
                "AminerCitationCount": 178,
                "CitationCountCrossRef": 101,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 1656,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1369,
                "i": [
                    1369
                ]
            }
        },
        {
            "name": "Michael Wörner 0001",
            "value": 101,
            "numPapers": 29,
            "cluster": "1",
            "visible": 1,
            "index": 954,
            "x": -244.80046553726132,
            "y": 188.47475181769082,
            "vy": 0,
            "vx": 0,
            "r": 1.1162924582613702,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "VarifocalReader -- In-Depth Visual Analysis of Large Text Documents",
                "DOI": "10.1109/tvcg.2014.2346677",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346677",
                "FirstPage": 1723,
                "LastPage": 1732,
                "PaperType": "J",
                "Abstract": "Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.",
                "AuthorNamesDeduped": "Steffen Koch 0001;Markus John;Michael Wörner 0001;Andreas Müller 0012;Thomas Ertl",
                "AuthorNames": "Steffen Koch;Markus John;Michael Wörner;Andreas Müller;Thomas Ertl",
                "AuthorAffiliation": "Institute of Visualization and Interactive Systems (VIS), University of Stuttgart;Institute of Visualization and Interactive Systems (VIS), University of Stuttgart;Institute of Visualization and Interactive Systems (VIS), University of Stuttgart;Institute for Natural Language Processing (IMS), University of Stuttgart;Institute of Visualization and Interactive Systems (VIS), University of Stuttgart",
                "InternalReferences": "0.1109/vast.2010.5652926;10.1109/tvcg.2008.172;10.1109/vast.2012.6400485;10.1109/tvcg.2013.188;10.1109/tvcg.2007.70577;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/tvcg.2009.165;10.1109/tvcg.2013.162;10.1109/infvis.1995.528686;10.1109/vast.2009.5333248;10.1109/tvcg.2012.260;10.1109/vast.2007.4389006;10.1109/vast.2009.5333919;10.1109/vast.2007.4389004;10.1109/infvis.1997.636787",
                "AuthorKeywords": "visual analytics, document analysis, literary analysis, natural language processing, text mining, machine learning, distant reading",
                "AminerCitationCount": 100,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1835,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1264,
                "i": [
                    1264
                ]
            }
        },
        {
            "name": "Changhyun Lee",
            "value": 138,
            "numPapers": 8,
            "cluster": "1",
            "visible": 1,
            "index": 955,
            "x": 53.22323733851499,
            "y": -304.495134619928,
            "vy": 0,
            "vx": 0,
            "r": 1.1588946459412781,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization",
                "DOI": "10.1109/tvcg.2013.212",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.212",
                "FirstPage": 1992,
                "LastPage": 2001,
                "PaperType": "J",
                "Abstract": "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.",
                "AuthorNamesDeduped": "Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park",
                "AuthorNames": "Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park",
                "AuthorAffiliation": "Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/tvcg.2012.258;10.1109/vast.2009.5332629;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485;10.1109/vast.2007.4388999;10.1109/vast.2007.4389006;10.1109/tvcg.2008.138;10.1109/vast.2010.5652443",
                "AuthorKeywords": "Latent Dirichlet allocation, nonnegative matrix factorization, topic modeling, visual analytics, interactive clustering, text analytics",
                "AminerCitationCount": 317,
                "CitationCountCrossRef": 179,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 3014,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1368,
                "i": [
                    1368
                ]
            }
        },
        {
            "name": "Chandan K. Reddy",
            "value": 141,
            "numPapers": 8,
            "cluster": "1",
            "visible": 1,
            "index": 956,
            "x": 166.52542140333938,
            "y": 260.6132844396852,
            "vy": 0,
            "vx": 0,
            "r": 1.162348877374784,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization",
                "DOI": "10.1109/tvcg.2013.212",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.212",
                "FirstPage": 1992,
                "LastPage": 2001,
                "PaperType": "J",
                "Abstract": "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.",
                "AuthorNamesDeduped": "Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park",
                "AuthorNames": "Jaegul Choo;Changhyun Lee;Chandan K. Reddy;Haesun Park",
                "AuthorAffiliation": "Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA;Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/tvcg.2012.258;10.1109/vast.2009.5332629;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485;10.1109/vast.2007.4388999;10.1109/vast.2007.4389006;10.1109/tvcg.2008.138;10.1109/vast.2010.5652443",
                "AuthorKeywords": "Latent Dirichlet allocation, nonnegative matrix factorization, topic modeling, visual analytics, interactive clustering, text analytics",
                "AminerCitationCount": 317,
                "CitationCountCrossRef": 179,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 3014,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1368,
                "i": [
                    1368
                ]
            }
        },
        {
            "name": "Zhuofeng Wu 0002",
            "value": 135,
            "numPapers": 10,
            "cluster": "1",
            "visible": 1,
            "index": 957,
            "x": -298.9885779383,
            "y": -79.72346117946132,
            "vy": 0,
            "vx": 0,
            "r": 1.155440414507772,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "How Hierarchical Topics Evolve in Large Text Corpora",
                "DOI": "10.1109/tvcg.2014.2346433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346433",
                "FirstPage": 2281,
                "LastPage": 2290,
                "PaperType": "J",
                "Abstract": "Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.",
                "AuthorNamesDeduped": "Weiwei Cui;Shixia Liu;Zhuofeng Wu 0002;Hao Wei",
                "AuthorNames": "Weiwei Cui;Shixia Liu;Zhuofeng Wu;Hao Wei",
                "AuthorAffiliation": "Microsoft Research;Microsoft Research;Nankai University;Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.196;10.1109/tvcg.2009.108;10.1109/vast.2014.7042494;10.1109/tvcg.2009.111;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346920;10.1109/tvcg.2012.212;10.1109/tvcg.2013.221;10.1109/tvcg.2012.225;10.1109/tvcg.2013.162;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Hierarchical topic visualization, evolutionary tree clustering, data transformation",
                "AminerCitationCount": 144,
                "CitationCountCrossRef": 90,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1394,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1171,
                "i": [
                    1171
                ]
            }
        },
        {
            "name": "Hao Wei",
            "value": 135,
            "numPapers": 10,
            "cluster": "1",
            "visible": 1,
            "index": 958,
            "x": 274.4605057726062,
            "y": -143.25303058241107,
            "vy": 0,
            "vx": 0,
            "r": 1.155440414507772,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "How Hierarchical Topics Evolve in Large Text Corpora",
                "DOI": "10.1109/tvcg.2014.2346433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346433",
                "FirstPage": 2281,
                "LastPage": 2290,
                "PaperType": "J",
                "Abstract": "Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.",
                "AuthorNamesDeduped": "Weiwei Cui;Shixia Liu;Zhuofeng Wu 0002;Hao Wei",
                "AuthorNames": "Weiwei Cui;Shixia Liu;Zhuofeng Wu;Hao Wei",
                "AuthorAffiliation": "Microsoft Research;Microsoft Research;Nankai University;Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.196;10.1109/tvcg.2009.108;10.1109/vast.2014.7042494;10.1109/tvcg.2009.111;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346920;10.1109/tvcg.2012.212;10.1109/tvcg.2013.221;10.1109/tvcg.2012.225;10.1109/tvcg.2013.162;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Hierarchical topic visualization, evolutionary tree clustering, data transformation",
                "AminerCitationCount": 144,
                "CitationCountCrossRef": 90,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1394,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1171,
                "i": [
                    1171
                ]
            }
        },
        {
            "name": "Carey Williamson",
            "value": 181,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 959,
            "x": -105.66768172093279,
            "y": 291.17750778472515,
            "vy": 0,
            "vx": 0,
            "r": 1.208405296488198,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "A Visual Backchannel for Large-Scale Events",
                "DOI": "10.1109/tvcg.2010.129",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.129",
                "FirstPage": 1129,
                "LastPage": 1138,
                "PaperType": "J",
                "Abstract": "We introduce the concept of a Visual Backchannel as a novel way of following and exploring online conversations about large-scale events. Microblogging communities, such as Twitter, are increasingly used as digital backchannels for timely exchange of brief comments and impressions during political speeches, sport competitions, natural disasters, and other large events. Currently, shared updates are typically displayed in the form of a simple list, making it difficult to get an overview of the fast-paced discussions as it happens in the moment and how it evolves over time. In contrast, our Visual Backchannel design provides an evolving, interactive, and multi-faceted visual overview of large-scale ongoing conversations on Twitter. To visualize a continuously updating information stream, we include visual saliency for what is happening now and what has just happened, set in the context of the evolving conversation. As part of a fully web-based coordinated-view system we introduce Topic Streams, a temporally adjustable stacked graph visualizing topics over time, a People Spiral representing participants and their activity, and an Image Cloud encoding the popularity of event photos by size. Together with a post listing, these mutually linked views support cross-filtering along topics, participants, and time ranges. We discuss our design considerations, in particular with respect to evolving visualizations of dynamically changing data. Initial feedback indicates significant interest and suggests several unanticipated uses.",
                "AuthorNamesDeduped": "Marian Dörk;Daniel M. Gruen;Carey Williamson;Sheelagh Carpendale",
                "AuthorNames": "Marian Dörk;Daniel Gruen;Carey Williamson;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Calgary, Canada;IBM Research Division, IBM Thomas J. Watson Research Center, USA;University of Calgary, Canada;University of Calgary, Canada",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/tvcg.2007.70541;10.1109/tvcg.2008.166;10.1109/tvcg.2008.175;10.1109/infvis.2005.1532133;10.1109/infvis.2003.1249028;10.1109/vast.2008.4677364;10.1109/vast.2009.5333437",
                "AuthorKeywords": "Backchannel, information visualization, events, multiple views, microblogging, information retrieval, World Wide Web",
                "AminerCitationCount": 266,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1924,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1687,
                "i": [
                    1687
                ]
            }
        },
        {
            "name": "Daniel M. Gruen",
            "value": 112,
            "numPapers": 7,
            "cluster": "1",
            "visible": 1,
            "index": 960,
            "x": -118.83339678087643,
            "y": -286.23176589875345,
            "vy": 0,
            "vx": 0,
            "r": 1.128957973517559,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "A Visual Backchannel for Large-Scale Events",
                "DOI": "10.1109/tvcg.2010.129",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.129",
                "FirstPage": 1129,
                "LastPage": 1138,
                "PaperType": "J",
                "Abstract": "We introduce the concept of a Visual Backchannel as a novel way of following and exploring online conversations about large-scale events. Microblogging communities, such as Twitter, are increasingly used as digital backchannels for timely exchange of brief comments and impressions during political speeches, sport competitions, natural disasters, and other large events. Currently, shared updates are typically displayed in the form of a simple list, making it difficult to get an overview of the fast-paced discussions as it happens in the moment and how it evolves over time. In contrast, our Visual Backchannel design provides an evolving, interactive, and multi-faceted visual overview of large-scale ongoing conversations on Twitter. To visualize a continuously updating information stream, we include visual saliency for what is happening now and what has just happened, set in the context of the evolving conversation. As part of a fully web-based coordinated-view system we introduce Topic Streams, a temporally adjustable stacked graph visualizing topics over time, a People Spiral representing participants and their activity, and an Image Cloud encoding the popularity of event photos by size. Together with a post listing, these mutually linked views support cross-filtering along topics, participants, and time ranges. We discuss our design considerations, in particular with respect to evolving visualizations of dynamically changing data. Initial feedback indicates significant interest and suggests several unanticipated uses.",
                "AuthorNamesDeduped": "Marian Dörk;Daniel M. Gruen;Carey Williamson;Sheelagh Carpendale",
                "AuthorNames": "Marian Dörk;Daniel Gruen;Carey Williamson;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Calgary, Canada;IBM Research Division, IBM Thomas J. Watson Research Center, USA;University of Calgary, Canada;University of Calgary, Canada",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/tvcg.2007.70541;10.1109/tvcg.2008.166;10.1109/tvcg.2008.175;10.1109/infvis.2005.1532133;10.1109/infvis.2003.1249028;10.1109/vast.2008.4677364;10.1109/vast.2009.5333437",
                "AuthorKeywords": "Backchannel, information visualization, events, multiple views, microblogging, information retrieval, World Wide Web",
                "AminerCitationCount": 266,
                "CitationCountCrossRef": 152,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1924,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1687,
                "i": [
                    1687
                ]
            }
        },
        {
            "name": "Xiaoyu Wang 0001",
            "value": 359,
            "numPapers": 45,
            "cluster": "5",
            "visible": 1,
            "index": 961,
            "x": 281.117052981359,
            "y": 130.8556552964975,
            "vy": 0,
            "vx": 0,
            "r": 1.4133563615428901,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "HierarchicalTopics: Visually Exploring Large Text Collections Using Topic Hierarchies",
                "DOI": "10.1109/tvcg.2013.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.162",
                "FirstPage": 2002,
                "LastPage": 2011,
                "PaperType": "J",
                "Abstract": "Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate that more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most of current topic-based visualizations to represent large number of topics without being cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system - HierarchicalTopic (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aid expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of hierarchical topic structure. The study results reveal that the HT leads to faster identification of large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.",
                "AuthorNamesDeduped": "Wenwen Dou;Li Yu;Xiaoyu Wang 0001;Zhiqiang Ma 0004;William Ribarsky",
                "AuthorNames": "Wenwen Dou;Li Yu;Xiaoyu Wang;Zhiqiang Ma;William Ribarsky",
                "AuthorAffiliation": "University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2012.6400557;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485",
                "AuthorKeywords": "Hierarchical topic representation, topic modeling, visual analytics, rose tree",
                "AminerCitationCount": 189,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2934,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1370,
                "i": [
                    1370
                ]
            }
        },
        {
            "name": "Li Yu",
            "value": 161,
            "numPapers": 4,
            "cluster": "1",
            "visible": 1,
            "index": 962,
            "x": -295.83241017910115,
            "y": 93.45151196007508,
            "vy": 0,
            "vx": 0,
            "r": 1.185377086931491,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "HierarchicalTopics: Visually Exploring Large Text Collections Using Topic Hierarchies",
                "DOI": "10.1109/tvcg.2013.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.162",
                "FirstPage": 2002,
                "LastPage": 2011,
                "PaperType": "J",
                "Abstract": "Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate that more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most of current topic-based visualizations to represent large number of topics without being cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system - HierarchicalTopic (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aid expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of hierarchical topic structure. The study results reveal that the HT leads to faster identification of large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.",
                "AuthorNamesDeduped": "Wenwen Dou;Li Yu;Xiaoyu Wang 0001;Zhiqiang Ma 0004;William Ribarsky",
                "AuthorNames": "Wenwen Dou;Li Yu;Xiaoyu Wang;Zhiqiang Ma;William Ribarsky",
                "AuthorAffiliation": "University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2012.6400557;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485",
                "AuthorKeywords": "Hierarchical topic representation, topic modeling, visual analytics, rose tree",
                "AminerCitationCount": 189,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2934,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1370,
                "i": [
                    1370
                ]
            }
        },
        {
            "name": "Zhiqiang Ma 0004",
            "value": 161,
            "numPapers": 4,
            "cluster": "1",
            "visible": 1,
            "index": 963,
            "x": 155.09252790123477,
            "y": -268.87972736746946,
            "vy": 0,
            "vx": 0,
            "r": 1.185377086931491,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "HierarchicalTopics: Visually Exploring Large Text Collections Using Topic Hierarchies",
                "DOI": "10.1109/tvcg.2013.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.162",
                "FirstPage": 2002,
                "LastPage": 2011,
                "PaperType": "J",
                "Abstract": "Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate that more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most of current topic-based visualizations to represent large number of topics without being cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system - HierarchicalTopic (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aid expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of hierarchical topic structure. The study results reveal that the HT leads to faster identification of large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.",
                "AuthorNamesDeduped": "Wenwen Dou;Li Yu;Xiaoyu Wang 0001;Zhiqiang Ma 0004;William Ribarsky",
                "AuthorNames": "Wenwen Dou;Li Yu;Xiaoyu Wang;Zhiqiang Ma;William Ribarsky",
                "AuthorAffiliation": "University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA;University of North Carolina, Charlotte, USA",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2012.6400557;10.1109/tvcg.2011.239;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485",
                "AuthorKeywords": "Hierarchical topic representation, topic modeling, visual analytics, rose tree",
                "AminerCitationCount": 189,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 2934,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1370,
                "i": [
                    1370
                ]
            }
        },
        {
            "name": "Alan M. MacEachren",
            "value": 184,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 964,
            "x": 67.30014052756049,
            "y": 303.18425269952695,
            "vy": 0,
            "vx": 0,
            "r": 1.2118595279217041,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Visual Semiotics & Uncertainty Visualization: An Empirical Study",
                "DOI": "10.1109/tvcg.2012.279",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.279",
                "FirstPage": 2496,
                "LastPage": 2505,
                "PaperType": "J",
                "Abstract": "This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Robert E. Roth;James O'Brien;Bonan Li;Derek Swingley;Mark Gahegan",
                "AuthorNames": "Alan M. MacEachren;Robert E. Roth;James O'Brien;Bonan Li;Derek Swingley;Mark Gahegan",
                "AuthorAffiliation": "Pennsylvania State University, USA;University of Wisconsin-Madison, USA;Risk Frontiers, Macquarie University, Australia;ZillionInfo, USA;Pennsylvania State University, USA;University of Auckland, New Zealand",
                "InternalReferences": "0.1109/visual.1992.235199;10.1109/tvcg.2011.197;10.1109/tvcg.2009.114;10.1109/tvcg.2011.209",
                "AuthorKeywords": "Uncertainty visualization, uncertainty categories, visual variables, semiotics",
                "AminerCitationCount": 295,
                "CitationCountCrossRef": 172,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 5567,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1399,
                "i": [
                    1399
                ]
            }
        },
        {
            "name": "Anuj R. Jaiswal",
            "value": 36,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 965,
            "x": -254.5549087702339,
            "y": -178.1903432315508,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "SensePlace2: GeoTwitter analytics support for situational awareness",
                "DOI": "10.1109/vast.2011.6102456",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102456",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "C",
                "Abstract": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford",
                "AuthorNames": "Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922",
                "AuthorKeywords": "social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis ",
                "AminerCitationCount": 455,
                "CitationCountCrossRef": 221,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3124,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1579,
                "i": [
                    1579
                ]
            }
        },
        {
            "name": "Anthony C. Robinson",
            "value": 103,
            "numPapers": 13,
            "cluster": "4",
            "visible": 1,
            "index": 966,
            "x": 308.22621098278535,
            "y": -40.578354614196996,
            "vy": 0,
            "vx": 0,
            "r": 1.1185952792170408,
            "node": {
                "Conference": "VAST",
                "Year": 2008,
                "Title": "Collaborative synthesis of visual analytic results",
                "DOI": "10.1109/vast.2008.4677358",
                "Link": "http://dx.doi.org/10.1109/VAST.2008.4677358",
                "FirstPage": 67,
                "LastPage": 74,
                "PaperType": "C",
                "Abstract": "Visual analytic tools allow analysts to generate large collections of useful analytical results. We anticipate that analysts in most real world situations will draw from these collections when working together to solve complicated problems. This indicates a need to understand how users synthesize multiple collections of results. This paper reports the results of collaborative synthesis experiments conducted with expert geographers and disease biologists. Ten participants were worked in pairs to complete a simulated real-world synthesis task using artifacts printed on cards on a large, paper-covered workspace. Experiment results indicate that groups use a number of different approaches to collaborative synthesis, and that they employ a variety of organizational metaphors to structure their information. It is further evident that establishing common ground and role assignment are critical aspects of collaborative synthesis. We conclude with a set of general design guidelines for collaborative synthesis support tools.",
                "AuthorNamesDeduped": "Anthony C. Robinson",
                "AuthorNames": "Anthony C. Robinson",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2007.4389011;10.1109/tvcg.2007.70594;10.1109/tvcg.2007.70568",
                "AuthorKeywords": "Movement data, spatio-temporal data, aggregation, scalable visualization, geovisualization",
                "AminerCitationCount": 95,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 555,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1992,
                "i": [
                    1992
                ]
            }
        },
        {
            "name": "Scott Pezanowski",
            "value": 36,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 967,
            "x": -199.9695008165238,
            "y": 238.24818728206574,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "SensePlace2: GeoTwitter analytics support for situational awareness",
                "DOI": "10.1109/vast.2011.6102456",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102456",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "C",
                "Abstract": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford",
                "AuthorNames": "Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922",
                "AuthorKeywords": "social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis ",
                "AminerCitationCount": 455,
                "CitationCountCrossRef": 221,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3124,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1579,
                "i": [
                    1579
                ]
            }
        },
        {
            "name": "Alexander Savelyev",
            "value": 36,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 968,
            "x": -13.490017853249482,
            "y": -310.91481054835424,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "SensePlace2: GeoTwitter analytics support for situational awareness",
                "DOI": "10.1109/vast.2011.6102456",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102456",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "C",
                "Abstract": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford",
                "AuthorNames": "Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922",
                "AuthorKeywords": "social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis ",
                "AminerCitationCount": 455,
                "CitationCountCrossRef": 221,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3124,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1579,
                "i": [
                    1579
                ]
            }
        },
        {
            "name": "Prasenjit Mitra",
            "value": 71,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 969,
            "x": 220.08058758868302,
            "y": 220.2601529251717,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "SensePlace2: GeoTwitter analytics support for situational awareness",
                "DOI": "10.1109/vast.2011.6102456",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102456",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "C",
                "Abstract": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford",
                "AuthorNames": "Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922",
                "AuthorKeywords": "social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis ",
                "AminerCitationCount": 455,
                "CitationCountCrossRef": 221,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3124,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1579,
                "i": [
                    1579
                ]
            }
        },
        {
            "name": "Xiao Zhang 0019",
            "value": 36,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 970,
            "x": -311.2245551736847,
            "y": -13.757770783890738,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "SensePlace2: GeoTwitter analytics support for situational awareness",
                "DOI": "10.1109/vast.2011.6102456",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102456",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "C",
                "Abstract": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford",
                "AuthorNames": "Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922",
                "AuthorKeywords": "social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis ",
                "AminerCitationCount": 455,
                "CitationCountCrossRef": 221,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3124,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1579,
                "i": [
                    1579
                ]
            }
        },
        {
            "name": "Justine I. Blanford",
            "value": 36,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 971,
            "x": 238.9035293821016,
            "y": -200.1876710708612,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "SensePlace2: GeoTwitter analytics support for situational awareness",
                "DOI": "10.1109/vast.2011.6102456",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102456",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "C",
                "Abstract": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media.",
                "AuthorNamesDeduped": "Alan M. MacEachren;Anuj R. Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang 0019;Justine I. Blanford",
                "AuthorNames": "Alan M. MacEachren;Anuj Jaiswal;Anthony C. Robinson;Scott Pezanowski;Alexander Savelyev;Prasenjit Mitra;Xiao Zhang;Justine Blanford",
                "AuthorAffiliation": "GeoVISTA Center, Department of Geography, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA;GeoVISTA Center, College of Information Sciences and Technology, Pennsylvania State University, USA;GeoVISTA Center, Computer Science & Engineering, Pennsylvania State University, USA;GeoVISTA Center, Department of Geography, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2010.5652478;10.1109/vast.2007.4388994;10.1109/tvcg.2010.129;10.1109/infvis.2005.1532134;10.1109/vast.2010.5652922",
                "AuthorKeywords": "social media analytics, scenario-based design, geovisualization, situational awareness, text analytics, crisis management, spatio-temporal analysis ",
                "AminerCitationCount": 455,
                "CitationCountCrossRef": 221,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3124,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1579,
                "i": [
                    1579
                ]
            }
        },
        {
            "name": "Jonathan J. H. Zhu",
            "value": 186,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 972,
            "x": -40.95626120990138,
            "y": 309.14815973527374,
            "vy": 0,
            "vx": 0,
            "r": 1.2141623488773747,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Visual Analysis of Topic Competition on Social Media",
                "DOI": "10.1109/tvcg.2013.221",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.221",
                "FirstPage": 2012,
                "LastPage": 2021,
                "PaperType": "J",
                "Abstract": "How do various topics compete for public attention when they are spreading on social media? What roles do opinion leaders play in the rise and fall of competitiveness of various topics? In this study, we propose an expanded topic competition model to characterize the competition for public attention on multiple topics promoted by various opinion leaders on social media. To allow an intuitive understanding of the estimated measures, we present a timeline visualization through a metaphoric interpretation of the results. The visual design features both topical and social aspects of the information diffusion process by compositing ThemeRiver with storyline style visualization. ThemeRiver shows the increase and decrease of competitiveness of each topic. Opinion leaders are drawn as threads that converge or diverge with regard to their roles in influencing the public agenda change over time. To validate the effectiveness of the visual analysis techniques, we report the insights gained on two collections of Tweets: the 2012 United States presidential election and the Occupy Wall Street movement.",
                "AuthorNamesDeduped": "Panpan Xu;Yingcai Wu;Enxun Wei;Tai-Quan Peng;Shixia Liu;Jonathan J. H. Zhu;Huamin Qu",
                "AuthorNames": "Panpan Xu;Yingcai Wu;Enxun Wei;Tai-Quan Peng;Shixia Liu;Jonathan J.H. Zhu;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China;Microsoft Research, Asia, Russia;Shanghai Jiao Tong University, China;Nanyang Technological University, Singapore;Microsoft Research, Asia, Russia;City University of Hong Kong, Hong Kong, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2011.239;10.1109/tvcg.2012.253;10.1109/tvcg.2012.225;10.1109/vast.2009.5333437;10.1109/tvcg.2010.194;10.1109/tvcg.2012.291;10.1109/vast.2010.5652931;10.1109/tvcg.2013.196;10.1109/infvis.2001.963273;10.1109/tvcg.2012.212;10.1109/vast.2010.5652922;10.1109/tvcg.2010.129;10.1109/infvis.1999.801851",
                "AuthorKeywords": "Social media visuaization, topic competition, information diffusion, information propagation, agenda-setting",
                "AminerCitationCount": 153,
                "CitationCountCrossRef": 89,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 4370,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1372,
                "i": [
                    1372
                ]
            }
        },
        {
            "name": "Ronghua Liang",
            "value": 90,
            "numPapers": 13,
            "cluster": "1",
            "visible": 1,
            "index": 973,
            "x": -178.7185243394544,
            "y": -255.75317995662897,
            "vy": 0,
            "vx": 0,
            "r": 1.1036269430051813,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "EvoRiver: Visual Analysis of Topic Coopetition on Social Media",
                "DOI": "10.1109/tvcg.2014.2346919",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346919",
                "FirstPage": 1753,
                "LastPage": 1762,
                "PaperType": "J",
                "Abstract": "Cooperation and competition (jointly called “coopetition”) are two modes of interactions among a set of concurrent topics on social media. How do topics cooperate or compete with each other to gain public attention? Which topics tend to cooperate or compete with one another? Who plays the key role in coopetition-related interactions? We answer these intricate questions by proposing a visual analytics system that facilitates the in-depth analysis of topic coopetition on social media. We model the complex interactions among topics as a combination of carry-over, coopetition recruitment, and coopetition distraction effects. This model provides a close functional approximation of the coopetition process by depicting how different groups of influential users (i.e., “topic leaders”) affect coopetition. We also design EvoRiver, a time-based visualization, that allows users to explore coopetition-related interactions and to detect dynamically evolving patterns, as well as their major causes. We test our model and demonstrate the usefulness of our system based on two Twitter data sets (social topics data and business topics data).",
                "AuthorNamesDeduped": "Guodao Sun;Yingcai Wu;Shixia Liu;Tai-Quan Peng;Jonathan J. H. Zhu;Ronghua Liang",
                "AuthorNames": "Guodao Sun;Yingcai Wu;Shixia Liu;Tai-Quan Peng;Jonathan J. H. Zhu;Ronghua Liang",
                "AuthorAffiliation": "Zhejiang University of Technology;Microsoft Research;Microsoft Research;Nanyang Technological University;City University of Hong Kong;Zhejiang University of Technology",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/tvcg.2012.291;10.1109/tvcg.2008.166;10.1109/tvcg.2011.239;10.1109/tvcg.2012.253;10.1109/tvcg.2014.2346920;10.1109/tvcg.2013.221;10.1109/tvcg.2013.196;10.1109/tvcg.2013.162",
                "AuthorKeywords": "Topic coopetition, information diffusion, information propagation, time-based visualization",
                "AminerCitationCount": 128,
                "CitationCountCrossRef": 92,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 1937,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1249,
                "i": [
                    1249
                ]
            }
        },
        {
            "name": "Jianfei Chen 0001",
            "value": 81,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 974,
            "x": 304.69664355896833,
            "y": 67.89665237623286,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "TopicPanorama: A Full Picture of Relevant Topics",
                "DOI": "10.1109/vast.2014.7042494",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042494",
                "FirstPage": 183,
                "LastPage": 192,
                "PaperType": "C",
                "Abstract": "We present a visual analytics approach to developing a full picture of relevant topics discussed in multiple sources such as news, blogs, or micro-blogs. The full picture consists of a number of common topics among multiple sources as well as distinctive topics. The key idea behind our approach is to jointly match the topics extracted from each source together in order to interactively and effectively analyze common and distinctive topics. We start by modeling each textual corpus as a topic graph. These graphs are then matched together with a consistent graph matching method. Next, we develop an LOD-based visualization for better understanding and analysis of the matched graph. The major feature of this visualization is that it combines a radially stacked tree visualization with a density-based graph visualization to facilitate the examination of the matched topic graph from multiple perspectives. To compensate for the deficiency of the graph matching algorithm and meet different users' needs, we allow users to interactively modify the graph matching result. We have applied our approach to various data including news, tweets, and blog data. Qualitative evaluation and a real-world case study with domain experts demonstrate the promise of our approach, especially in support of analyzing a topic-graph-based full picture at different levels of detail.",
                "AuthorNamesDeduped": "Shixia Liu;Xiting Wang;Jianfei Chen 0001;Jim Zhu;Baining Guo",
                "AuthorNames": "Shixia Liu;Xiting Wang;Jianfei Chen;Jim Zhu;Baining Guo",
                "AuthorAffiliation": "Microsoft Research Asia;Microsoft Research Asia, Tsinghua University;Dept. of Comp. Sci. & Tech., Tsingua University;Dept. of Comp. Sci. & Tech., Tsinghua University;Microsoft Research Asia",
                "InternalReferences": "0.1109/vast.2011.6102461;10.1109/tvcg.2011.239;10.1109/tvcg.2009.111;10.1109/tvcg.2010.154;10.1109/vast.2011.6102439;10.1109/tvcg.2006.193;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2013.162;10.1109/tvcg.2012.279;10.1109/tvcg.2010.183;10.1109/tvcg.2014.2346433;10.1109/tvcg.2007.70521;10.1109/tvcg.2013.233;10.1109/tvcg.2013.212;10.1109/tvcg.2007.70582;10.1109/tvcg.2013.232;10.1109/tvcg.2010.129;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Topic graph, graph matching, graph visualization, user interactions, level-of-detail",
                "AminerCitationCount": 110,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 1271,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1260,
                "i": [
                    1260
                ]
            }
        },
        {
            "name": "Jim Zhu",
            "value": 81,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 975,
            "x": -270.67612460370935,
            "y": 155.83464175053382,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "TopicPanorama: A Full Picture of Relevant Topics",
                "DOI": "10.1109/vast.2014.7042494",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042494",
                "FirstPage": 183,
                "LastPage": 192,
                "PaperType": "C",
                "Abstract": "We present a visual analytics approach to developing a full picture of relevant topics discussed in multiple sources such as news, blogs, or micro-blogs. The full picture consists of a number of common topics among multiple sources as well as distinctive topics. The key idea behind our approach is to jointly match the topics extracted from each source together in order to interactively and effectively analyze common and distinctive topics. We start by modeling each textual corpus as a topic graph. These graphs are then matched together with a consistent graph matching method. Next, we develop an LOD-based visualization for better understanding and analysis of the matched graph. The major feature of this visualization is that it combines a radially stacked tree visualization with a density-based graph visualization to facilitate the examination of the matched topic graph from multiple perspectives. To compensate for the deficiency of the graph matching algorithm and meet different users' needs, we allow users to interactively modify the graph matching result. We have applied our approach to various data including news, tweets, and blog data. Qualitative evaluation and a real-world case study with domain experts demonstrate the promise of our approach, especially in support of analyzing a topic-graph-based full picture at different levels of detail.",
                "AuthorNamesDeduped": "Shixia Liu;Xiting Wang;Jianfei Chen 0001;Jim Zhu;Baining Guo",
                "AuthorNames": "Shixia Liu;Xiting Wang;Jianfei Chen;Jim Zhu;Baining Guo",
                "AuthorAffiliation": "Microsoft Research Asia;Microsoft Research Asia, Tsinghua University;Dept. of Comp. Sci. & Tech., Tsingua University;Dept. of Comp. Sci. & Tech., Tsinghua University;Microsoft Research Asia",
                "InternalReferences": "0.1109/vast.2011.6102461;10.1109/tvcg.2011.239;10.1109/tvcg.2009.111;10.1109/tvcg.2010.154;10.1109/vast.2011.6102439;10.1109/tvcg.2006.193;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2013.162;10.1109/tvcg.2012.279;10.1109/tvcg.2010.183;10.1109/tvcg.2014.2346433;10.1109/tvcg.2007.70521;10.1109/tvcg.2013.233;10.1109/tvcg.2013.212;10.1109/tvcg.2007.70582;10.1109/tvcg.2013.232;10.1109/tvcg.2010.129;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Topic graph, graph matching, graph visualization, user interactions, level-of-detail",
                "AminerCitationCount": 110,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 1271,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1260,
                "i": [
                    1260
                ]
            }
        },
        {
            "name": "Baining Guo",
            "value": 119,
            "numPapers": 39,
            "cluster": "1",
            "visible": 1,
            "index": 976,
            "x": 94.37169608779352,
            "y": -297.8992832779447,
            "vy": 0,
            "vx": 0,
            "r": 1.1370178468624064,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "TopicPanorama: A Full Picture of Relevant Topics",
                "DOI": "10.1109/vast.2014.7042494",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042494",
                "FirstPage": 183,
                "LastPage": 192,
                "PaperType": "C",
                "Abstract": "We present a visual analytics approach to developing a full picture of relevant topics discussed in multiple sources such as news, blogs, or micro-blogs. The full picture consists of a number of common topics among multiple sources as well as distinctive topics. The key idea behind our approach is to jointly match the topics extracted from each source together in order to interactively and effectively analyze common and distinctive topics. We start by modeling each textual corpus as a topic graph. These graphs are then matched together with a consistent graph matching method. Next, we develop an LOD-based visualization for better understanding and analysis of the matched graph. The major feature of this visualization is that it combines a radially stacked tree visualization with a density-based graph visualization to facilitate the examination of the matched topic graph from multiple perspectives. To compensate for the deficiency of the graph matching algorithm and meet different users' needs, we allow users to interactively modify the graph matching result. We have applied our approach to various data including news, tweets, and blog data. Qualitative evaluation and a real-world case study with domain experts demonstrate the promise of our approach, especially in support of analyzing a topic-graph-based full picture at different levels of detail.",
                "AuthorNamesDeduped": "Shixia Liu;Xiting Wang;Jianfei Chen 0001;Jim Zhu;Baining Guo",
                "AuthorNames": "Shixia Liu;Xiting Wang;Jianfei Chen;Jim Zhu;Baining Guo",
                "AuthorAffiliation": "Microsoft Research Asia;Microsoft Research Asia, Tsinghua University;Dept. of Comp. Sci. & Tech., Tsingua University;Dept. of Comp. Sci. & Tech., Tsinghua University;Microsoft Research Asia",
                "InternalReferences": "0.1109/vast.2011.6102461;10.1109/tvcg.2011.239;10.1109/tvcg.2009.111;10.1109/tvcg.2010.154;10.1109/vast.2011.6102439;10.1109/tvcg.2006.193;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2013.162;10.1109/tvcg.2012.279;10.1109/tvcg.2010.183;10.1109/tvcg.2014.2346433;10.1109/tvcg.2007.70521;10.1109/tvcg.2013.233;10.1109/tvcg.2013.212;10.1109/tvcg.2007.70582;10.1109/tvcg.2013.232;10.1109/tvcg.2010.129;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Topic graph, graph matching, graph visualization, user interactions, level-of-detail",
                "AminerCitationCount": 110,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 1271,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1260,
                "i": [
                    1260
                ]
            }
        },
        {
            "name": "Li Tan",
            "value": 257,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 977,
            "x": 131.70871027844169,
            "y": 283.55390252435166,
            "vy": 0,
            "vx": 0,
            "r": 1.2959124928036845,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "TextFlow: Towards Better Understanding of Evolving Topics in Text",
                "DOI": "10.1109/tvcg.2011.239",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.239",
                "FirstPage": 2412,
                "LastPage": 2421,
                "PaperType": "J",
                "Abstract": "Understanding how topics evolve in text data is an important and challenging task. Although much work has been devoted to topic analysis, the study of topic evolution has largely been limited to individual topics. In this paper, we introduce TextFlow, a seamless integration of visualization and topic mining techniques, for analyzing various evolution patterns that emerge from multiple topics. We first extend an existing analysis technique to extract three-level features: the topic evolution trend, the critical event, and the keyword correlation. Then a coherent visualization that consists of three new visual components is designed to convey complex relationships between them. Through interaction, the topic mining model and visualization can communicate with each other to help users refine the analysis result and gain insights into the data progressively. Finally, two case studies are conducted to demonstrate the effectiveness and usefulness of TextFlow in helping users understand the major topic evolution patterns in time-varying text data.",
                "AuthorNamesDeduped": "Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong 0001",
                "AuthorNames": "Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China;Microsoft Research Asia, China;Zhejiang University, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2009.5333443;10.1109/tvcg.2006.156;10.1109/tvcg.2009.171;10.1109/tvcg.2008.166;10.1109/tvcg.2010.129;10.1109/vast.2008.4677364;10.1109/infvis.2005.1532122;10.1109/vast.2009.5333437;10.1109/infvis.2005.1532152",
                "AuthorKeywords": "Text visualization, Topic evolution, Hierarchical Dirichlet process, Critical event",
                "AminerCitationCount": 466,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 4285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1537,
                "i": [
                    1537
                ]
            }
        },
        {
            "name": "Zekai Gao",
            "value": 213,
            "numPapers": 9,
            "cluster": "1",
            "visible": 1,
            "index": 978,
            "x": -288.80342527569934,
            "y": -120.1772921521515,
            "vy": 0,
            "vx": 0,
            "r": 1.2452504317789292,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "TextFlow: Towards Better Understanding of Evolving Topics in Text",
                "DOI": "10.1109/tvcg.2011.239",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.239",
                "FirstPage": 2412,
                "LastPage": 2421,
                "PaperType": "J",
                "Abstract": "Understanding how topics evolve in text data is an important and challenging task. Although much work has been devoted to topic analysis, the study of topic evolution has largely been limited to individual topics. In this paper, we introduce TextFlow, a seamless integration of visualization and topic mining techniques, for analyzing various evolution patterns that emerge from multiple topics. We first extend an existing analysis technique to extract three-level features: the topic evolution trend, the critical event, and the keyword correlation. Then a coherent visualization that consists of three new visual components is designed to convey complex relationships between them. Through interaction, the topic mining model and visualization can communicate with each other to help users refine the analysis result and gain insights into the data progressively. Finally, two case studies are conducted to demonstrate the effectiveness and usefulness of TextFlow in helping users understand the major topic evolution patterns in time-varying text data.",
                "AuthorNamesDeduped": "Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong 0001",
                "AuthorNames": "Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China;Microsoft Research Asia, China;Zhejiang University, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2009.5333443;10.1109/tvcg.2006.156;10.1109/tvcg.2009.171;10.1109/tvcg.2008.166;10.1109/tvcg.2010.129;10.1109/vast.2008.4677364;10.1109/infvis.2005.1532122;10.1109/vast.2009.5333437;10.1109/infvis.2005.1532152",
                "AuthorKeywords": "Text visualization, Topic evolution, Hierarchical Dirichlet process, Critical event",
                "AminerCitationCount": 466,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 4285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1537,
                "i": [
                    1537
                ]
            }
        },
        {
            "name": "Xin Tong 0001",
            "value": 213,
            "numPapers": 9,
            "cluster": "1",
            "visible": 1,
            "index": 979,
            "x": 294.28351177341347,
            "y": -106.52330589268826,
            "vy": 0,
            "vx": 0,
            "r": 1.2452504317789292,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "TextFlow: Towards Better Understanding of Evolving Topics in Text",
                "DOI": "10.1109/tvcg.2011.239",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.239",
                "FirstPage": 2412,
                "LastPage": 2421,
                "PaperType": "J",
                "Abstract": "Understanding how topics evolve in text data is an important and challenging task. Although much work has been devoted to topic analysis, the study of topic evolution has largely been limited to individual topics. In this paper, we introduce TextFlow, a seamless integration of visualization and topic mining techniques, for analyzing various evolution patterns that emerge from multiple topics. We first extend an existing analysis technique to extract three-level features: the topic evolution trend, the critical event, and the keyword correlation. Then a coherent visualization that consists of three new visual components is designed to convey complex relationships between them. Through interaction, the topic mining model and visualization can communicate with each other to help users refine the analysis result and gain insights into the data progressively. Finally, two case studies are conducted to demonstrate the effectiveness and usefulness of TextFlow in helping users understand the major topic evolution patterns in time-varying text data.",
                "AuthorNamesDeduped": "Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong 0001",
                "AuthorNames": "Weiwei Cui;Shixia Liu;Li Tan;Conglei Shi;Yangqiu Song;Zekai Gao;Huamin Qu;Xin Tong",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China;Microsoft Research Asia, China;Zhejiang University, China and Microsoft Research Asia, China;Microsoft Research Asia, China;Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/vast.2010.5652931;10.1109/vast.2009.5333443;10.1109/tvcg.2006.156;10.1109/tvcg.2009.171;10.1109/tvcg.2008.166;10.1109/tvcg.2010.129;10.1109/vast.2008.4677364;10.1109/infvis.2005.1532122;10.1109/vast.2009.5333437;10.1109/infvis.2005.1532152",
                "AuthorKeywords": "Text visualization, Topic evolution, Hierarchical Dirichlet process, Critical event",
                "AminerCitationCount": 466,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 4285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1537,
                "i": [
                    1537
                ]
            }
        },
        {
            "name": "James A. Wise",
            "value": 190,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 980,
            "x": -145.11406257962068,
            "y": 277.47415887184513,
            "vy": 0,
            "vx": 0,
            "r": 1.2187679907887161,
            "node": {
                "Conference": "InfoVis",
                "Year": 1995,
                "Title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents",
                "DOI": "10.1109/infvis.1995.528686",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1995.528686",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
                "AuthorNamesDeduped": "James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow",
                "AuthorNames": "J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/visual.1993.398863",
                "AuthorKeywords": null,
                "AminerCitationCount": 914,
                "CitationCountCrossRef": 211,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 2576,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3396,
                "i": [
                    3396
                ]
            }
        },
        {
            "name": "James J. Thomas",
            "value": 344,
            "numPapers": 25,
            "cluster": "4",
            "visible": 1,
            "index": 981,
            "x": -80.46951128387983,
            "y": -302.7782319681082,
            "vy": 0,
            "vx": 0,
            "r": 1.3960852043753598,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "IN-SPIRE InfoVis 2004 Contest Entry",
                "DOI": "10.1109/infvis.2004.37",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.37",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": "This is the first part (summary) of a three-part contest entry submitted to IEEE InfoVis 2004. The contest topic is visualizing InfoVis symposium papers from 1995 to 2002 and their references. The paper introduces the visualization tool IN-SPIRE, the visualization process and results, and presents lessons learned.",
                "AuthorNamesDeduped": "Pak Chung Wong;Elizabeth G. Hetzler;Christian Posse;Mark A. Whiting;Susan Havre;Nick Cramer;Anuj R. Shah;Mudita Singhal;Alan Turner;James J. Thomas",
                "AuthorNames": "Pak Chung Wong;B. Hetzler;C. Posse;M. Whiting;S. Havre;N. Cramer;Anuj Shah;M. Singhal;A. Turner;J. Thomas",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;;Pacific Northwest National Laboratory, USA",
                "InternalReferences": "10.1109/infvis.1995.528686",
                "AuthorKeywords": null,
                "AminerCitationCount": 72,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 5,
                "DownloadsXplore": 234,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2472,
                "i": [
                    2472
                ]
            }
        },
        {
            "name": "Kelly Pennock",
            "value": 190,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 982,
            "x": 263.99385248221057,
            "y": 168.98889268706637,
            "vy": 0,
            "vx": 0,
            "r": 1.2187679907887161,
            "node": {
                "Conference": "InfoVis",
                "Year": 1995,
                "Title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents",
                "DOI": "10.1109/infvis.1995.528686",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1995.528686",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
                "AuthorNamesDeduped": "James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow",
                "AuthorNames": "J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/visual.1993.398863",
                "AuthorKeywords": null,
                "AminerCitationCount": 914,
                "CitationCountCrossRef": 211,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 2576,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3396,
                "i": [
                    3396
                ]
            }
        },
        {
            "name": "David Lantrip",
            "value": 190,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 983,
            "x": -308.96832353494983,
            "y": 53.74546540874452,
            "vy": 0,
            "vx": 0,
            "r": 1.2187679907887161,
            "node": {
                "Conference": "InfoVis",
                "Year": 1995,
                "Title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents",
                "DOI": "10.1109/infvis.1995.528686",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1995.528686",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
                "AuthorNamesDeduped": "James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow",
                "AuthorNames": "J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/visual.1993.398863",
                "AuthorKeywords": null,
                "AminerCitationCount": 914,
                "CitationCountCrossRef": 211,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 2576,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3396,
                "i": [
                    3396
                ]
            }
        },
        {
            "name": "Marc Pottier",
            "value": 190,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 984,
            "x": 191.61642718850658,
            "y": -248.4615560434084,
            "vy": 0,
            "vx": 0,
            "r": 1.2187679907887161,
            "node": {
                "Conference": "InfoVis",
                "Year": 1995,
                "Title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents",
                "DOI": "10.1109/infvis.1995.528686",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1995.528686",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
                "AuthorNamesDeduped": "James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow",
                "AuthorNames": "J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/visual.1993.398863",
                "AuthorKeywords": null,
                "AminerCitationCount": 914,
                "CitationCountCrossRef": 211,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 2576,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3396,
                "i": [
                    3396
                ]
            }
        },
        {
            "name": "Anne Schur",
            "value": 190,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 985,
            "x": 26.554855869853224,
            "y": 312.8015978695303,
            "vy": 0,
            "vx": 0,
            "r": 1.2187679907887161,
            "node": {
                "Conference": "InfoVis",
                "Year": 1995,
                "Title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents",
                "DOI": "10.1109/infvis.1995.528686",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1995.528686",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
                "AuthorNamesDeduped": "James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow",
                "AuthorNames": "J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/visual.1993.398863",
                "AuthorKeywords": null,
                "AminerCitationCount": 914,
                "CitationCountCrossRef": 211,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 2576,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3396,
                "i": [
                    3396
                ]
            }
        },
        {
            "name": "Vern Crow",
            "value": 190,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 986,
            "x": -230.99227402749196,
            "y": -212.8205096780103,
            "vy": 0,
            "vx": 0,
            "r": 1.2187679907887161,
            "node": {
                "Conference": "InfoVis",
                "Year": 1995,
                "Title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents",
                "DOI": "10.1109/infvis.1995.528686",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1995.528686",
                "FirstPage": 51,
                "LastPage": 58,
                "PaperType": "C",
                "Abstract": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.",
                "AuthorNamesDeduped": "James A. Wise;James J. Thomas;Kelly Pennock;David Lantrip;Marc Pottier;Anne Schur;Vern Crow",
                "AuthorNames": "J.A. Wise;J.J. Thomas;K. Pennock;D. Lantrip;M. Pottier;A. Schur;V. Crow",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/visual.1993.398863",
                "AuthorKeywords": null,
                "AminerCitationCount": 914,
                "CitationCountCrossRef": 211,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 2576,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3396,
                "i": [
                    3396
                ]
            }
        },
        {
            "name": "Zezhong Wang 0001",
            "value": 7,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 987,
            "x": 314.243853765358,
            "y": 0.8946343925379462,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Interactive Data Comics",
                "DOI": "10.1109/tvcg.2021.3114849",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114849",
                "FirstPage": 944,
                "LastPage": 954,
                "PaperType": "J",
                "Abstract": "This paper investigates how to make data comics interactive. Data comics are an effective and versatile means for visual communication, leveraging the power of sequential narration and combined textual and visual content, while providing an overview of the storyline through panels assembled in expressive layouts. While a powerful static storytelling medium that works well on paper support, adding interactivity to data comics can enable non-linear storytelling, personalization, levels of details, explanations, and potentially enriched user experiences. This paper introduces a set of operations tailored to support data comics narrative goals that go beyond the traditional linear, immutable storyline curated by a story author. The goals and operations include adding and removing panels into pre-defined layouts to support branching, change of perspective, or access to detail-on-demand, as well as providing and modifying data, and interacting with data representation, to support personalization and reader-defined data focus. We propose a lightweight specification language, COMICSCRIPT, for designers to add such interactivity to static comics. To assess the viability of our authoring process, we recruited six professional illustrators, designers and data comics enthusiasts and asked them to craft an interactive comic, allowing us to understand authoring workflow and potential of our approach. We present examples of interactive comics in a gallery. This initial step towards understanding the design space of interactive comics can inform the design of creation tools and experiences for interactive storytelling.",
                "AuthorNamesDeduped": "Zezhong Wang 0001;Hugo Romat;Fanny Chevalier;Nathalie Henry Riche;Dave Murray-Rust;Benjamin Bach",
                "AuthorNames": "Zezhong Wang;Hugo Romat;Fanny Chevalier;Nathalie Henry Riche;Dave Murray-Rust;Benjamin Bach",
                "AuthorAffiliation": "University of Edinburgh, Scotland;ETH Zurich, Switzerland;University of Toronto, Canada;Microsoft Research, United States;TU Delft, Netherlands;University of Edinburgh, Scotland",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2018.2865119;10.1109/tvcg.2016.2598609;10.1109/tvcg.2015.2467201;10.1109/tvcg.2013.210;10.1109/tvcg.2008.127;10.1109/tvcg.2016.2598620;10.1109/tvcg.2020.3028948;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2010.179;10.1109/tvcg.2020.3030433;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Data comics,Non-linear narrative,interactive storytelling",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 108,
                "DownloadsXplore": 1305,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 309,
                "i": [
                    309
                ]
            }
        },
        {
            "name": "Zeng Dai",
            "value": 23,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 988,
            "x": -232.4355543227036,
            "y": 211.7161143765337,
            "vy": 0,
            "vx": 0,
            "r": 1.0264824409902131,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "VIBR: Visualizing Bipartite Relations at Scale with the Minimum Description Length Principle",
                "DOI": "10.1109/tvcg.2018.2864826",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864826",
                "FirstPage": 321,
                "LastPage": 330,
                "PaperType": "J",
                "Abstract": "Bipartite graphs model the key relations in many large scale real-world data: customers purchasing items, legislators voting for bills, people's affiliation with different social groups, faults occurring in vehicles, etc. However, it is challenging to visualize large scale bipartite graphs with tens of thousands or even more nodes or edges. In this paper, we propose a novel visual summarization technique for bipartite graphs based on the minimum description length (MDL) principle. The method simultaneously groups the two different set of nodes and constructs aggregated bipartite relations with balanced granularity and precision. It addresses the key trade-off that often occurs for visualizing large scale and noisy data: acquiring a clear and uncluttered overview while maximizing the information content in it. We formulate the visual summarization task as a co-clustering problem and propose an efficient algorithm based on locality sensitive hashing (LSH) that can easily scale to large graphs under reasonable interactive time constraints that previous related methods cannot satisfy. The method leads to the opportunity of introducing a visual analytics framework with multiple levels-of-detail to facilitate interactive data exploration. In the framework, we also introduce a compact visual design inspired by adjacency list representation of graphs as the building block for a small multiples display to compare the bipartite relations for different subsets of data. We showcase the applicability and effectiveness of our approach by applying it on synthetic data with ground truth and performing case studies on real-world datasets from two application domains including roll-call vote record analysis and vehicle fault pattern analysis. Interviews with experts in the political science community and the automotive industry further highlight the benefits of our approach.",
                "AuthorNamesDeduped": "Gromit Yeuk-Yin Chan;Panpan Xu;Zeng Dai;Liu Ren",
                "AuthorNames": "Gromit Yeuk-Yin Chan;Panpan Xu;Zeng Dai;Liu Ren",
                "AuthorAffiliation": "New York University, New York, NY, US;Bosch Research North America, Sunnyvale;Bosch Research North America, Sunnyvale;Bosch Research North America, Sunnyvale",
                "InternalReferences": "0.1109/tvcg.2010.154;10.1109/tvcg.2012.252;10.1109/tvcg.2013.223;10.1109/infvis.2004.1;10.1109/tvcg.2016.2598831;10.1109/tvcg.2009.111;10.1109/tvcg.2014.2346279;10.1109/tvcg.2006.166;10.1109/vast.2007.4389006;10.1109/tvcg.2015.2467813;10.1109/tvcg.2014.2346665;10.1109/tvcg.2016.2598591",
                "AuthorKeywords": "Bipartite Graph,Visual Summarization,Minimum Description Length,Information Theory",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 610,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 762,
                "i": [
                    762
                ]
            }
        },
        {
            "name": "Phong Hai Nguyen",
            "value": 114,
            "numPapers": 47,
            "cluster": "5",
            "visible": 1,
            "index": 989,
            "x": 28.392914244692907,
            "y": -313.27917648751173,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Propagating Visual Designs to Numerous Plots and Dashboards",
                "DOI": "10.1109/tvcg.2021.3114828",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114828",
                "FirstPage": 86,
                "LastPage": 95,
                "PaperType": "J",
                "Abstract": "In the process of developing an infrastructure for providing visualization and visual analytics (VIS) tools to epidemiologists and modeling scientists, we encountered a technical challenge for applying a number of visual designs to numerous datasets rapidly and reliably with limited development resources. In this paper, we present a technical solution to address this challenge. Operationally, we separate the tasks of data management, visual designs, and plots and dashboard deployment in order to streamline the development workflow. Technically, we utilize: an ontology to bring datasets, visual designs, and deployable plots and dashboards under the same management framework; multi-criteria search and ranking algorithms for discovering potential datasets that match a visual design; and a purposely-design user interface for propagating each visual design to appropriate datasets (often in tens and hundreds) and quality-assuring the propagation before the deployment. This technical solution has been used in the development of the RAMPVIS infrastructure for supporting a consortium of epidemiologists and modeling scientists through visualization.",
                "AuthorNamesDeduped": "Saiful Khan;Phong Hai Nguyen;Alfie Abdul-Rahman;Benjamin Bach;Min Chen 0001;Euan Freeman;Cagatay Turkay",
                "AuthorNames": "Saiful Khan;Phong H. Nguyen;Alfie Abdul-Rahman;Benjamin Bach;Min Chen;Euan Freeman;Cagatay Turkay",
                "AuthorAffiliation": "University of Oxford, UK;Redsift Ltd., UK;King's College London, UK;Edinburgh University, UK;University of Oxford, UK;University of Glasgow, Scotland;University of Warwick, UK",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/vast.2010.5652885;10.1109/tvcg.2019.2934785;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2599030;10.1109/tvcg.2020.3030403;10.1109/tvcg.2020.3030467;10.1109/tvcg.2015.2467191;10.1109/tvcg.2016.2598497;10.1109/tvcg.2019.2934668",
                "AuthorKeywords": "Visualization system,propagation,infrastructure,ontology,quality assurance,pandemic,emergency response",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 757,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 316,
                "i": [
                    316
                ]
            }
        },
        {
            "name": "Haekyu Park",
            "value": 114,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 990,
            "x": 190.77731965214085,
            "y": 250.30783908288782,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations",
                "DOI": "10.1109/tvcg.2019.2934659",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934659",
                "FirstPage": 1096,
                "LastPage": 1106,
                "PaperType": "J",
                "Abstract": "Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a prevalent, large-scale image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.",
                "AuthorNamesDeduped": "Fred Hohman;Haekyu Park;Caleb Robinson;Duen Horng (Polo) Chau",
                "AuthorNames": "Fred Hohman;Haekyu Park;Caleb Robinson;Duen Horng Polo Chau",
                "AuthorAffiliation": "Georgia Tech.;Georgia Tech.;Georgia Tech.;Georgia Tech.",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/vast.2018.8802509;10.1109/tvcg.2016.2598831;10.1109/tvcg.2016.2598828;10.1109/tvcg.2009.108;10.1109/tvcg.2017.2744878",
                "AuthorKeywords": "Deep learning interpretability,visual analytics,scalable summarization,attribution graph",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2485,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 597,
                "i": [
                    597
                ]
            }
        },
        {
            "name": "Nilaksh Das",
            "value": 37,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 991,
            "x": -309.91009691630217,
            "y": -55.72909320389329,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks",
                "DOI": "10.1109/tvcg.2021.3114858",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114858",
                "FirstPage": 813,
                "LastPage": 823,
                "PaperType": "J",
                "Abstract": "Existing research on making sense of deep neural networks often focuses on neuron-level interpretation, which may not adequately capture the bigger picture of how concepts are collectively encoded by multiple neurons. We present Neurocartography, an interactive system that scalably summarizes and visualizes concepts learned by neural networks. It automatically discovers and groups neurons that detect the same concepts, and describes how such neuron groups interact to form higher-level concepts and the subsequent predictions. Neurocartography introduces two scalable summarization techniques: (1) neuron clustering groups neurons based on the semantic similarity of the concepts detected by neurons (e.g., neurons detecting “dog faces” of different breeds are grouped); and (2) neuron embedding encodes the associations between related concepts based on how often they co-occur (e.g., neurons detecting “dog face” and “dog tail” are placed closer in the embedding space). Key to our scalable techniques is the ability to efficiently compute all neuron pairs' relationships, in time linear to the number of neurons instead of quadratic time. Neurocartography scales to large data, such as the ImageNet dataset with 1.2M images. The system's tightly coordinated views integrate the scalable techniques to visualize the concepts and their relationships, projecting the concept associations to a 2D space in Neuron Projection View, and summarizing neuron clusters and their relationships in Graph View. Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts. And through usage scenarios, we describe how our approaches enable interesting and surprising discoveries, such as concept cascades of related and isolated concepts. The Neurocartography visualization runs in modern browsers and is open-sourced.",
                "AuthorNamesDeduped": "Haekyu Park;Nilaksh Das;Rahul Duggal;Austin P. Wright;Omar Shaikh;Fred Hohman;Duen Horng (Polo) Chau",
                "AuthorNames": "Haekyu Park;Nilaksh Das;Rahul Duggal;Austin P. Wright;Omar Shaikh;Fred Hohman;Duen Horng Polo Chau",
                "AuthorAffiliation": "Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Apple, United States;Georgia Institute of Technology, United States",
                "InternalReferences": "0.1109/tvcg.2019.2934659;10.1109/tvcg.2019.2934659;10.1109/tvcg.2020.3030461;10.1109/vast.2018.8802509",
                "AuthorKeywords": "Deep learning interpretability,visual analytics,scalable summarization,neuron clustering,neuron embedding",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 721,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 317,
                "i": [
                    317
                ]
            }
        },
        {
            "name": "Omar Shaikh",
            "value": 37,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 992,
            "x": 266.29671035868125,
            "y": -168.33318761356784,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks",
                "DOI": "10.1109/tvcg.2021.3114858",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114858",
                "FirstPage": 813,
                "LastPage": 823,
                "PaperType": "J",
                "Abstract": "Existing research on making sense of deep neural networks often focuses on neuron-level interpretation, which may not adequately capture the bigger picture of how concepts are collectively encoded by multiple neurons. We present Neurocartography, an interactive system that scalably summarizes and visualizes concepts learned by neural networks. It automatically discovers and groups neurons that detect the same concepts, and describes how such neuron groups interact to form higher-level concepts and the subsequent predictions. Neurocartography introduces two scalable summarization techniques: (1) neuron clustering groups neurons based on the semantic similarity of the concepts detected by neurons (e.g., neurons detecting “dog faces” of different breeds are grouped); and (2) neuron embedding encodes the associations between related concepts based on how often they co-occur (e.g., neurons detecting “dog face” and “dog tail” are placed closer in the embedding space). Key to our scalable techniques is the ability to efficiently compute all neuron pairs' relationships, in time linear to the number of neurons instead of quadratic time. Neurocartography scales to large data, such as the ImageNet dataset with 1.2M images. The system's tightly coordinated views integrate the scalable techniques to visualize the concepts and their relationships, projecting the concept associations to a 2D space in Neuron Projection View, and summarizing neuron clusters and their relationships in Graph View. Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts. And through usage scenarios, we describe how our approaches enable interesting and surprising discoveries, such as concept cascades of related and isolated concepts. The Neurocartography visualization runs in modern browsers and is open-sourced.",
                "AuthorNamesDeduped": "Haekyu Park;Nilaksh Das;Rahul Duggal;Austin P. Wright;Omar Shaikh;Fred Hohman;Duen Horng (Polo) Chau",
                "AuthorNames": "Haekyu Park;Nilaksh Das;Rahul Duggal;Austin P. Wright;Omar Shaikh;Fred Hohman;Duen Horng Polo Chau",
                "AuthorAffiliation": "Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Georgia Institute of Technology, United States;Apple, United States;Georgia Institute of Technology, United States",
                "InternalReferences": "0.1109/tvcg.2019.2934659;10.1109/tvcg.2019.2934659;10.1109/tvcg.2020.3030461;10.1109/vast.2018.8802509",
                "AuthorKeywords": "Deep learning interpretability,visual analytics,scalable summarization,neuron clustering,neuron embedding",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 721,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 317,
                "i": [
                    317
                ]
            }
        },
        {
            "name": "Matt-Heun Hong",
            "value": 15,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 993,
            "x": -82.69309965639145,
            "y": 304.1576092574672,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "The Weighted Average Illusion: Biases in Perceived Mean Position in Scatterplots",
                "DOI": "10.1109/tvcg.2021.3114783",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114783",
                "FirstPage": 987,
                "LastPage": 997,
                "PaperType": "J",
                "Abstract": "Scatterplots can encode a third dimension by using additional channels like size or color (e.g. bubble charts). We explore a potential misinterpretation of trivariate scatterplots, which we call the <i>weighted average illusion</i>, where locations of larger and darker points are given more weight toward x- and y-mean estimates. This systematic bias is sensitive to a designer's choice of size or lightness ranges mapped onto the data. In this paper, we quantify this bias against varying size/lightness ranges and data correlations. We discuss possible explanations for its cause by measuring attention given to individual data points using a vision science technique called the centroid method. Our work illustrates how ensemble processing mechanisms and mental shortcuts can significantly distort visual summaries of data, and can lead to misjudgments like the demonstrated weighted average illusion.",
                "AuthorNamesDeduped": "Matt-Heun Hong;Jessica K. Witt;Danielle Albers Szafir",
                "AuthorNames": "Matt-Heun Hong;Jessica K. Witt;Danielle Albers Szafir",
                "AuthorAffiliation": "ATLAS Institute, University of Colorado, Boulder, USA;Department of Psychology, Colorado State University, USA;ATLAS Institute, University of Colorado, Boulder, USA",
                "InternalReferences": "0.1109/tvcg.2017.2745086;10.1109/tvcg.2014.2346978;10.1109/tvcg.2018.2865233;10.1109/tvcg.2013.183;10.1109/tvcg.2012.233;10.1109/tvcg.2012.233;10.1109/tvcg.2014.2346979;10.1109/tvcg.2017.2744184;10.1109/tvcg.2017.2744359;10.1109/tvcg.2019.2934208;10.1109/tvcg.2019.2934400",
                "AuthorKeywords": "Human-Subjects Quantitative Studies,Perception & Cognition",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 96,
                "DownloadsXplore": 560,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 319,
                "i": [
                    319
                ]
            }
        },
        {
            "name": "Jessica K. Witt",
            "value": 15,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 994,
            "x": -144.5528893030208,
            "y": -280.2756896238927,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "The Weighted Average Illusion: Biases in Perceived Mean Position in Scatterplots",
                "DOI": "10.1109/tvcg.2021.3114783",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114783",
                "FirstPage": 987,
                "LastPage": 997,
                "PaperType": "J",
                "Abstract": "Scatterplots can encode a third dimension by using additional channels like size or color (e.g. bubble charts). We explore a potential misinterpretation of trivariate scatterplots, which we call the <i>weighted average illusion</i>, where locations of larger and darker points are given more weight toward x- and y-mean estimates. This systematic bias is sensitive to a designer's choice of size or lightness ranges mapped onto the data. In this paper, we quantify this bias against varying size/lightness ranges and data correlations. We discuss possible explanations for its cause by measuring attention given to individual data points using a vision science technique called the centroid method. Our work illustrates how ensemble processing mechanisms and mental shortcuts can significantly distort visual summaries of data, and can lead to misjudgments like the demonstrated weighted average illusion.",
                "AuthorNamesDeduped": "Matt-Heun Hong;Jessica K. Witt;Danielle Albers Szafir",
                "AuthorNames": "Matt-Heun Hong;Jessica K. Witt;Danielle Albers Szafir",
                "AuthorAffiliation": "ATLAS Institute, University of Colorado, Boulder, USA;Department of Psychology, Colorado State University, USA;ATLAS Institute, University of Colorado, Boulder, USA",
                "InternalReferences": "0.1109/tvcg.2017.2745086;10.1109/tvcg.2014.2346978;10.1109/tvcg.2018.2865233;10.1109/tvcg.2013.183;10.1109/tvcg.2012.233;10.1109/tvcg.2012.233;10.1109/tvcg.2014.2346979;10.1109/tvcg.2017.2744184;10.1109/tvcg.2017.2744359;10.1109/tvcg.2019.2934208;10.1109/tvcg.2019.2934400",
                "AuthorKeywords": "Human-Subjects Quantitative Studies,Perception & Cognition",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 96,
                "DownloadsXplore": 560,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 319,
                "i": [
                    319
                ]
            }
        },
        {
            "name": "Maoyuan Sun",
            "value": 71,
            "numPapers": 75,
            "cluster": "4",
            "visible": 1,
            "index": 995,
            "x": 296.0610469004456,
            "y": 109.07729602539723,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "SightBi: Exploring Cross-View Data Relationships with Biclusters",
                "DOI": "10.1109/tvcg.2021.3114801",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114801",
                "FirstPage": 54,
                "LastPage": 64,
                "PaperType": "J",
                "Abstract": "Multiple-view visualization (MV) has been heavily used in visual analysis tools for sensemaking of data in various domains (e.g., bioinformatics, cybersecurity and text analytics). One common task of visual analysis with multiple views is to relate data across different views. For example, to identify threats, an intelligence analyst needs to link people from a social network graph with locations on a crime-map, and then search for and read relevant documents. Currently, exploring cross-view data relationships heavily relies on view-coordination techniques (e.g., brushing and linking), which may require significant user effort on many trial-and-error attempts, such as repetitiously selecting elements in one view, and then observing and following elements highlighted in other views. To address this, we present SightBi, a visual analytics approach for supporting cross-view data relationship explorations. We discuss the design rationale of SightBi in detail, with identified user tasks regarding the use of cross-view data relationships. SightBi formalizes cross-view data relationships as biclusters, computes them from a dataset, and uses a bi-context design that highlights creating stand-alone relationship-views. This helps preserve existing views and offers an overview of cross-view data relationships to guide user exploration. Moreover, SightBi allows users to interactively manage the layout of multiple views by using newly created relationship-views. With a usage scenario, we demonstrate the usefulness of SightBi for sensemaking of cross-view data relationships.",
                "AuthorNamesDeduped": "Maoyuan Sun;Abdul Rahman Shaikh;Hamed Alhoori;Jian Zhao 0010",
                "AuthorNames": "Maoyuan Sun;Abdul Rahman Shaikh;Hamed Alhoori;Jian Zhao",
                "AuthorAffiliation": "Northern Illinois University, United States;Northern Illinois University, United States;Northern Illinois University, United States;University of Waterloo, Canada",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2020.3030338;10.1109/tvcg.2007.70521;10.1109/tvcg.2009.122;10.1109/tvcg.2014.2346260;10.1109/tvcg.2007.70582;10.1109/tvcg.2013.160;10.1109/tvcg.2017.2743859;10.1109/tvcg.2018.2864903;10.1109/tvcg.2006.166;10.1109/vast.2007.4389006;10.1109/tvcg.2015.2467813;10.1109/tvcg.2014.2346665;10.1109/infvis.2004.12;10.1109/tvcg.2013.167;10.1109/tvcg.2017.2744458;10.1109/infvis.2004.10",
                "AuthorKeywords": "Cross-view data relationship,multi-view visualization,bicluster,visual analytics",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 74,
                "DownloadsXplore": 688,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 324,
                "i": [
                    324
                ]
            }
        },
        {
            "name": "Nathalie Henry",
            "value": 301,
            "numPapers": 15,
            "cluster": "4",
            "visible": 1,
            "index": 996,
            "x": -292.1334733430858,
            "y": 119.61619352121421,
            "vy": 0,
            "vx": 0,
            "r": 1.34657455382844,
            "node": {
                "Conference": "InfoVis",
                "Year": 2007,
                "Title": "NodeTrix: a Hybrid Visualization of Social Networks",
                "DOI": "10.1109/tvcg.2007.70582",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70582",
                "FirstPage": 1302,
                "LastPage": 1309,
                "PaperType": "J",
                "Abstract": "The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.",
                "AuthorNamesDeduped": "Nathalie Henry;Jean-Daniel Fekete;Michael J. McGuffin",
                "AuthorNames": "Nathalie Henry;Jean-Daniel Fekete;Michael J. McGuffin",
                "AuthorAffiliation": "University of Sydney, Australia and INRIA Futurs, University of Paris-Sud 11, France;INRIA Futurs and Laboratory RI UMR CNRS 5800, France;Ontario Cancer Institute, University of Toronto, Canada",
                "InternalReferences": "0.1109/tvcg.2006.160;10.1109/vast.2006.261426;10.1109/infvis.2005.1532126;10.1109/infvis.2004.46;10.1109/tvcg.2006.193;10.1109/infvis.2005.1532129;10.1109/tvcg.2006.166;10.1109/tvcg.2006.147;10.1109/infvis.2004.64;10.1109/infvis.2003.1249011",
                "AuthorKeywords": "Network visualization, Matrix visualization, Hybrid visualization, Aggregation, Interaction",
                "AminerCitationCount": 675,
                "CitationCountCrossRef": 363,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 4039,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2087,
                "i": [
                    2087
                ]
            }
        },
        {
            "name": "Carsten Görg",
            "value": 366,
            "numPapers": 9,
            "cluster": "4",
            "visible": 1,
            "index": 997,
            "x": 134.67807846451677,
            "y": -285.6778170966473,
            "vy": 0,
            "vx": 0,
            "r": 1.4214162348877375,
            "node": {
                "Conference": "VAST",
                "Year": 2007,
                "Title": "Jigsaw: Supporting Investigative Analysis through Interactive Visualization",
                "DOI": "10.1109/vast.2007.4389006",
                "Link": "http://dx.doi.org/10.1109/VAST.2007.4389006",
                "FirstPage": 131,
                "LastPage": 138,
                "PaperType": "C",
                "Abstract": "Investigative analysts who work with collections of text documents connect embedded threads of evidence in order to formulate hypotheses about plans and activities of potential interest. As the number of documents and the corresponding number of concepts and entities within the documents grow larger, sense-making processes become more and more difficult for the analysts. We have developed a visual analytic system called Jigsaw that represents documents and their entities visually in order to help analysts examine reports more efficiently and develop theories about potential actions more quickly. Jigsaw provides multiple coordinated views of document entities with a special emphasis on visually illustrating connections between entities across the different documents.",
                "AuthorNamesDeduped": "John T. Stasko;Carsten Görg;Zhicheng Liu 0001;Kanupriya Singhal",
                "AuthorNames": "John Stasko;Carsten Gorg;Zhicheng Liu;Kanupriya Singhal",
                "AuthorAffiliation": "School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/infvis.2004.27;10.1109/vast.2006.261432",
                "AuthorKeywords": "Visual analytics, investigative analysis, intelligence analysis, information visualization, multiple views",
                "AminerCitationCount": 746,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 1130,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2112,
                "i": [
                    2112
                ]
            }
        },
        {
            "name": "Ravin Balakrishnan",
            "value": 257,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 998,
            "x": 93.7121073139225,
            "y": 301.7748182714785,
            "vy": 0,
            "vx": 0,
            "r": 1.2959124928036845,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Exploratory Analysis of Time-Series with ChronoLenses",
                "DOI": "10.1109/tvcg.2011.195",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.195",
                "FirstPage": 2422,
                "LastPage": 2431,
                "PaperType": "J",
                "Abstract": "Visual representations of time-series are useful for tasks such as identifying trends, patterns and anomalies in the data. Many techniques have been devised to make these visual representations more scalable, enabling the simultaneous display of multiple variables, as well as the multi-scale display of time-series of very high resolution or that span long time periods. There has been comparatively little research on how to support the more elaborate tasks associated with the exploratory visual analysis of timeseries, e.g., visualizing derived values, identifying correlations, or discovering anomalies beyond obvious outliers. Such tasks typically require deriving new time-series from the original data, trying different functions and parameters in an iterative manner. We introduce a novel visualization technique called ChronoLenses, aimed at supporting users in such exploratory tasks. ChronoLenses perform on-the-fly transformation of the data points in their focus area, tightly integrating visual analysis with user actions, and enabling the progressive construction of advanced visual analysis pipelines.",
                "AuthorNamesDeduped": "Jian Zhao 0010;Fanny Chevalier;Emmanuel Pietriga;Ravin Balakrishnan",
                "AuthorNames": "Jian Zhao;Fanny Chevalier;Emmanuel Pietriga;Ravin Balakrishnan",
                "AuthorAffiliation": "DGP, University of Toronto, Canada;OCAD University, Canada;INRIA, France;DGP, University of Toronto, Canada",
                "InternalReferences": "0.1109/tvcg.2010.162;10.1109/infvis.1999.801851;10.1109/vast.2007.4389007;10.1109/infvis.2001.963273;10.1109/infvis.2005.1532148;10.1109/tvcg.2007.70583;10.1109/tvcg.2010.193",
                "AuthorKeywords": "Time-series Data, Exploratory Visualization, Focus+Context, Lens, Interaction Techniques",
                "AminerCitationCount": 113,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1937,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1557,
                "i": [
                    1557
                ]
            }
        },
        {
            "name": "Samana Shrestha",
            "value": 41,
            "numPapers": 1,
            "cluster": "5",
            "visible": 1,
            "index": 999,
            "x": -273.0829962169902,
            "y": -159.2974487465236,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Imagining Replications: Graphical Prediction & Discrete Visualizations Improve Recall & Estimation of Effect Uncertainty",
                "DOI": "10.1109/tvcg.2017.2743898",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743898",
                "FirstPage": 446,
                "LastPage": 456,
                "PaperType": "J",
                "Abstract": "People often have erroneous intuitions about the results of uncertain processes, such as scientific experiments. Many uncertainty visualizations assume considerable statistical knowledge, but have been shown to prompt erroneous conclusions even when users possess this knowledge. Active learning approaches been shown to improve statistical reasoning, but are rarely applied in visualizing uncertainty in scientific reports. We present a controlled study to evaluate the impact of an interactive, graphical uncertainty prediction technique for communicating uncertainty in experiment results. Using our technique, users sketch their prediction of the uncertainty in experimental effects prior to viewing the true sampling distribution from an experiment. We find that having a user graphically predict the possible effects from experiment replications is an effective way to improve one's ability to make predictions about replications of new experiments. Additionally, visualizing uncertainty as a set of discrete outcomes, as opposed to a continuous probability distribution, can improve recall of a sampling distribution from a single experiment. Our work has implications for various applications where it is important to elicit peoples' estimates of probability distributions and to communicate uncertainty effectively.",
                "AuthorNamesDeduped": "Jessica Hullman;Matthew Kay 0001;Yea-Seul Kim;Samana Shrestha",
                "AuthorNames": "Jessica Hullman;Matthew Kay;Yea-Seul Kim;Samana Shrestha",
                "AuthorAffiliation": "University of Washington;University of Michigan;University of Washington;Vassar College",
                "InternalReferences": "0.1109/tvcg.2012.199;10.1109/tvcg.2014.2346298",
                "AuthorKeywords": "Graphical prediction,interactive uncertainty visualization,replication crisis,probability distribution",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 753,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 797,
                "i": [
                    797
                ]
            }
        },
        {
            "name": "David Lloyd 0002",
            "value": 100,
            "numPapers": 1,
            "cluster": "5",
            "visible": 1,
            "index": 1000,
            "x": 309.12130510338886,
            "y": -67.03744275535547,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Human-Centered Approaches in Geovisualization Design: Investigating Multiple Methods Through a Long-Term Case Study",
                "DOI": "10.1109/tvcg.2011.209",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.209",
                "FirstPage": 2498,
                "LastPage": 2507,
                "PaperType": "J",
                "Abstract": "Working with three domain specialists we investigate human-centered approaches to geovisualization following an ISO13407 taxonomy covering context of use, requirements and early stages of design. Our case study, undertaken over three years, draws attention to repeating trends: that generic approaches fail to elicit adequate requirements for geovis application design; that the use of real data is key to understanding needs and possibilities; that trust and knowledge must be built and developed with collaborators. These processes take time but modified human-centred approaches can be effective. A scenario developed through contextual inquiry but supplemented with domain data and graphics is useful to geovis designers. Wireframe, paper and digital prototypes enable successful communication between specialist and geovis domains when incorporating real and interesting data, prompting exploratory behaviour and eliciting previously unconsidered requirements. Paper prototypes are particularly successful at eliciting suggestions, especially for novel visualization. Enabling specialists to explore their data freely with a digital prototype is as effective as using a structured task protocol and is easier to administer. Autoethnography has potential for framing the design process. We conclude that a common understanding of context of use, domain data and visualization possibilities are essential to successful geovis design and develop as this progresses. HC approaches can make a significant contribution here. However, modified approaches, applied with flexibility, are most promising. We advise early, collaborative engagement with data - through simple, transient visual artefacts supported by data sketches and existing designs - before moving to successively more sophisticated data wireframes and data prototypes.",
                "AuthorNamesDeduped": "David Lloyd 0002;Jason Dykes",
                "AuthorNames": "David Lloyd;Jason Dykes",
                "AuthorAffiliation": "GiCentre, City University London, UK;GiCentre, City University London, UK",
                "InternalReferences": "0.1109/tvcg.2010.191;10.1109/tvcg.2009.174",
                "AuthorKeywords": "Evaluation, geovisualization, context of use, requirements, field study, prototypes, sketching, design",
                "AminerCitationCount": 138,
                "CitationCountCrossRef": 81,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 1988,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1552,
                "i": [
                    1552
                ]
            }
        },
        {
            "name": "Jimmy Moore",
            "value": 0,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 1001,
            "x": -182.74454620688113,
            "y": 258.36878842391377,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Exploring the Personal Informatics Analysis Gap: \"There's a Lot of Bacon\"",
                "DOI": "10.1109/tvcg.2021.3114798",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114798",
                "FirstPage": 96,
                "LastPage": 106,
                "PaperType": "J",
                "Abstract": "Personal informatics research helps people track personal data for the purposes of self-reflection and gaining self-knowledge. This field, however, has predominantly focused on the data collection and insight-generation elements of self-tracking, with less attention paid to flexible data analysis. As a result, this inattention has led to inflexible analytic pipelines that do not reflect or support the diverse ways people want to engage with their data. This paper contributes a review of personal informatics and visualization research literature to expose a gap in our knowledge for designing flexible tools that assist people engaging with and analyzing personal data in personal contexts, what we call the personal informatics analysis gap. We explore this gap through a multistage longitudinal study on how asthmatics engage with personal air quality data, and we report how participants: were motivated by broad and diverse goals; exhibited patterns in the way they explored their data; engaged with their data in playful ways; discovered new insights through serendipitous exploration; and were reluctant to use analysis tools on their own. These results present new opportunities for visual analysis research and suggest the need for fundamental shifts in how and what we design when supporting personal data analysis.",
                "AuthorNamesDeduped": "Jimmy Moore;Pascal Goffin;Jason Wiese;Miriah Meyer",
                "AuthorNames": "Jimmy Moore;Pascal Goffin;Jason Wiese;Miriah Meyer",
                "AuthorAffiliation": "School of Computing at the Univeristy of Utah, United States;Asvito Digital AG., Switzerland;School of Computing at the Univeristy of Utah, United States;Department of Science and Technology at Linköping University, School of Computing at the University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2018.2865040;10.1109/vast47406.2019.8986909;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/vast.2011.6102435;10.1109/tvcg.2010.164;10.1109/infvis.2005.1532126;10.1109/tvcg.2012.219;10.1109/tvcg.2018.2865241;10.1109/tvcg.2017.2743859;10.1109/tvcg.2018.2864526;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2352953;10.1109/tvcg.2015.2467831;10.1109/tvcg.2007.70577;10.1109/infvis.2005.1532122;10.1109/tvcg.2015.2467191;10.1109/vast.2007.4389011",
                "AuthorKeywords": "Personal visualization,Personal visual analytics,Personal informatics,Interview methods",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 110,
                "DownloadsXplore": 883,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 333,
                "i": [
                    333
                ]
            }
        },
        {
            "name": "Pascal Goffin",
            "value": 24,
            "numPapers": 29,
            "cluster": "5",
            "visible": 1,
            "index": 1002,
            "x": -39.79532090141863,
            "y": -314.1119743568416,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Exploring the Placement and Design of Word-Scale Visualizations",
                "DOI": "10.1109/tvcg.2014.2346435",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346435",
                "FirstPage": 2291,
                "LastPage": 2300,
                "PaperType": "J",
                "Abstract": "We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines-small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations.",
                "AuthorNamesDeduped": "Pascal Goffin;Wesley Willett;Jean-Daniel Fekete;Petra Isenberg",
                "AuthorNames": "Pascal Goffin;Wesley Willett;Jean-Daniel Fekete;Petra Isenberg",
                "AuthorAffiliation": "Inria;Inria;Inria;Inria",
                "InternalReferences": "0.1109/tvcg.2013.192;10.1109/tvcg.2006.163;10.1109/tvcg.2012.196;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70589;10.1109/tvcg.2011.183;10.1109/tvcg.2013.120;10.1109/tvcg.2010.194;10.1109/infvis.2005.1532144",
                "AuthorKeywords": "Information visualization, text visualization, sparklines, glyphs, design space, word-scale visualizations",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 1151,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1193,
                "i": [
                    1193
                ]
            }
        },
        {
            "name": "Jason Wiese",
            "value": 0,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 1003,
            "x": 241.64385164930977,
            "y": 204.83712788478167,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Exploring the Personal Informatics Analysis Gap: \"There's a Lot of Bacon\"",
                "DOI": "10.1109/tvcg.2021.3114798",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114798",
                "FirstPage": 96,
                "LastPage": 106,
                "PaperType": "J",
                "Abstract": "Personal informatics research helps people track personal data for the purposes of self-reflection and gaining self-knowledge. This field, however, has predominantly focused on the data collection and insight-generation elements of self-tracking, with less attention paid to flexible data analysis. As a result, this inattention has led to inflexible analytic pipelines that do not reflect or support the diverse ways people want to engage with their data. This paper contributes a review of personal informatics and visualization research literature to expose a gap in our knowledge for designing flexible tools that assist people engaging with and analyzing personal data in personal contexts, what we call the personal informatics analysis gap. We explore this gap through a multistage longitudinal study on how asthmatics engage with personal air quality data, and we report how participants: were motivated by broad and diverse goals; exhibited patterns in the way they explored their data; engaged with their data in playful ways; discovered new insights through serendipitous exploration; and were reluctant to use analysis tools on their own. These results present new opportunities for visual analysis research and suggest the need for fundamental shifts in how and what we design when supporting personal data analysis.",
                "AuthorNamesDeduped": "Jimmy Moore;Pascal Goffin;Jason Wiese;Miriah Meyer",
                "AuthorNames": "Jimmy Moore;Pascal Goffin;Jason Wiese;Miriah Meyer",
                "AuthorAffiliation": "School of Computing at the Univeristy of Utah, United States;Asvito Digital AG., Switzerland;School of Computing at the Univeristy of Utah, United States;Department of Science and Technology at Linköping University, School of Computing at the University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2018.2865040;10.1109/vast47406.2019.8986909;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/vast.2011.6102435;10.1109/tvcg.2010.164;10.1109/infvis.2005.1532126;10.1109/tvcg.2012.219;10.1109/tvcg.2018.2865241;10.1109/tvcg.2017.2743859;10.1109/tvcg.2018.2864526;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2352953;10.1109/tvcg.2015.2467831;10.1109/tvcg.2007.70577;10.1109/infvis.2005.1532122;10.1109/tvcg.2015.2467191;10.1109/vast.2007.4389011",
                "AuthorKeywords": "Personal visualization,Personal visual analytics,Personal informatics,Interview methods",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 110,
                "DownloadsXplore": 883,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 333,
                "i": [
                    333
                ]
            }
        },
        {
            "name": "Jessica Magallanes",
            "value": 14,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1004,
            "x": -316.7038293526922,
            "y": 12.193624290620422,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Sequen-C: A Multilevel Overview of Temporal Event Sequences",
                "DOI": "10.1109/tvcg.2021.3114868",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114868",
                "FirstPage": 901,
                "LastPage": 911,
                "PaperType": "J",
                "Abstract": "Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge - expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric - then users are able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented into a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain: CUREd and MIMIC-III; which demonstrate how the technique can aid users to obtain a summary of common and deviating pathways, and explore data attributes for selected patterns.",
                "AuthorNamesDeduped": "Jessica Magallanes;Tony Stone;Paul D. Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorNames": "Jessica Magallanes;Tony Stone;Paul D Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorAffiliation": "Department of Computer Science, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Sheffield Teaching Hospitals NHS Foundation Trust, UK;Department of Computer Science, University of Sheffield, UK",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2020.3030442;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934661;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2019.2934609;10.1109/tvcg.2009.117;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2014.7042487;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Temporal event sequence visualization,clustering,hierarchical aggregation,multiple sequence alignment",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 335,
                "i": [
                    335
                ]
            }
        },
        {
            "name": "Tony Stone",
            "value": 14,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1005,
            "x": 225.40298537961792,
            "y": -223.0324957981589,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Sequen-C: A Multilevel Overview of Temporal Event Sequences",
                "DOI": "10.1109/tvcg.2021.3114868",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114868",
                "FirstPage": 901,
                "LastPage": 911,
                "PaperType": "J",
                "Abstract": "Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge - expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric - then users are able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented into a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain: CUREd and MIMIC-III; which demonstrate how the technique can aid users to obtain a summary of common and deviating pathways, and explore data attributes for selected patterns.",
                "AuthorNamesDeduped": "Jessica Magallanes;Tony Stone;Paul D. Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorNames": "Jessica Magallanes;Tony Stone;Paul D Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorAffiliation": "Department of Computer Science, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Sheffield Teaching Hospitals NHS Foundation Trust, UK;Department of Computer Science, University of Sheffield, UK",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2020.3030442;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934661;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2019.2934609;10.1109/tvcg.2009.117;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2014.7042487;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Temporal event sequence visualization,clustering,hierarchical aggregation,multiple sequence alignment",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 335,
                "i": [
                    335
                ]
            }
        },
        {
            "name": "Paul D. Morris",
            "value": 14,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1006,
            "x": -15.556590211484075,
            "y": 316.87220215883866,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Sequen-C: A Multilevel Overview of Temporal Event Sequences",
                "DOI": "10.1109/tvcg.2021.3114868",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114868",
                "FirstPage": 901,
                "LastPage": 911,
                "PaperType": "J",
                "Abstract": "Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge - expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric - then users are able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented into a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain: CUREd and MIMIC-III; which demonstrate how the technique can aid users to obtain a summary of common and deviating pathways, and explore data attributes for selected patterns.",
                "AuthorNamesDeduped": "Jessica Magallanes;Tony Stone;Paul D. Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorNames": "Jessica Magallanes;Tony Stone;Paul D Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorAffiliation": "Department of Computer Science, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Sheffield Teaching Hospitals NHS Foundation Trust, UK;Department of Computer Science, University of Sheffield, UK",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2020.3030442;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934661;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2019.2934609;10.1109/tvcg.2009.117;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2014.7042487;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Temporal event sequence visualization,clustering,hierarchical aggregation,multiple sequence alignment",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 335,
                "i": [
                    335
                ]
            }
        },
        {
            "name": "Suzanne Mason",
            "value": 14,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1007,
            "x": -202.6737590901524,
            "y": -244.28128740504636,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Sequen-C: A Multilevel Overview of Temporal Event Sequences",
                "DOI": "10.1109/tvcg.2021.3114868",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114868",
                "FirstPage": 901,
                "LastPage": 911,
                "PaperType": "J",
                "Abstract": "Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge - expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric - then users are able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented into a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain: CUREd and MIMIC-III; which demonstrate how the technique can aid users to obtain a summary of common and deviating pathways, and explore data attributes for selected patterns.",
                "AuthorNamesDeduped": "Jessica Magallanes;Tony Stone;Paul D. Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorNames": "Jessica Magallanes;Tony Stone;Paul D Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorAffiliation": "Department of Computer Science, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Sheffield Teaching Hospitals NHS Foundation Trust, UK;Department of Computer Science, University of Sheffield, UK",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2020.3030442;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934661;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2019.2934609;10.1109/tvcg.2009.117;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2014.7042487;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Temporal event sequence visualization,clustering,hierarchical aggregation,multiple sequence alignment",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 335,
                "i": [
                    335
                ]
            }
        },
        {
            "name": "Steven Wood",
            "value": 14,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1008,
            "x": 314.6109794160506,
            "y": 43.242706100259234,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Sequen-C: A Multilevel Overview of Temporal Event Sequences",
                "DOI": "10.1109/tvcg.2021.3114868",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114868",
                "FirstPage": 901,
                "LastPage": 911,
                "PaperType": "J",
                "Abstract": "Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge - expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric - then users are able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented into a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain: CUREd and MIMIC-III; which demonstrate how the technique can aid users to obtain a summary of common and deviating pathways, and explore data attributes for selected patterns.",
                "AuthorNamesDeduped": "Jessica Magallanes;Tony Stone;Paul D. Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorNames": "Jessica Magallanes;Tony Stone;Paul D Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorAffiliation": "Department of Computer Science, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Sheffield Teaching Hospitals NHS Foundation Trust, UK;Department of Computer Science, University of Sheffield, UK",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2020.3030442;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934661;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2019.2934609;10.1109/tvcg.2009.117;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2014.7042487;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Temporal event sequence visualization,clustering,hierarchical aggregation,multiple sequence alignment",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 335,
                "i": [
                    335
                ]
            }
        },
        {
            "name": "Maria-Cruz Villa-Uriol",
            "value": 14,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1009,
            "x": -261.32383757274255,
            "y": 180.72036940050467,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Sequen-C: A Multilevel Overview of Temporal Event Sequences",
                "DOI": "10.1109/tvcg.2021.3114868",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114868",
                "FirstPage": 901,
                "LastPage": 911,
                "PaperType": "J",
                "Abstract": "Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge - expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric - then users are able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented into a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain: CUREd and MIMIC-III; which demonstrate how the technique can aid users to obtain a summary of common and deviating pathways, and explore data attributes for selected patterns.",
                "AuthorNamesDeduped": "Jessica Magallanes;Tony Stone;Paul D. Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorNames": "Jessica Magallanes;Tony Stone;Paul D Morris;Suzanne Mason;Steven Wood;Maria-Cruz Villa-Uriol",
                "AuthorAffiliation": "Department of Computer Science, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, UK;Centre for Urgent and Emergency Care Research, University of Sheffield, UK;Sheffield Teaching Hospitals NHS Foundation Trust, UK;Department of Computer Science, University of Sheffield, UK",
                "InternalReferences": "0.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2020.3030442;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934661;10.1109/tvcg.2018.2864885;10.1109/tvcg.2017.2745320;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2019.2934609;10.1109/tvcg.2009.117;10.1109/vast.2012.6400494;10.1109/tvcg.2012.225;10.1109/vast.2014.7042487;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Temporal event sequence visualization,clustering,hierarchical aggregation,multiple sequence alignment",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 335,
                "i": [
                    335
                ]
            }
        },
        {
            "name": "Devin Lange",
            "value": 6,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1010,
            "x": 70.65217709280823,
            "y": -309.9326860336716,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Loon: Using Exemplars to Visualize Large-Scale Microscopy Data",
                "DOI": "10.1109/tvcg.2021.3114766",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114766",
                "FirstPage": 248,
                "LastPage": 258,
                "PaperType": "J",
                "Abstract": "Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control for preprocessing steps such as segmentation, adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data such as growth rates and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.",
                "AuthorNamesDeduped": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorNames": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorAffiliation": "University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598587;10.1109/tvcg.2018.2865241;10.1109/tvcg.2019.2934547;10.1109/tvcg.2017.2745978;10.1109/tvcg.2010.137;10.1109/tvcg.2012.213;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Microscopy Visualization,Cancer Cell Lines,Exemplars,Design Study",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 628,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 336,
                "i": [
                    336
                ]
            }
        },
        {
            "name": "Eddie Polanco",
            "value": 6,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1011,
            "x": 157.3375983591336,
            "y": 276.3962375695081,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Loon: Using Exemplars to Visualize Large-Scale Microscopy Data",
                "DOI": "10.1109/tvcg.2021.3114766",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114766",
                "FirstPage": 248,
                "LastPage": 258,
                "PaperType": "J",
                "Abstract": "Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control for preprocessing steps such as segmentation, adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data such as growth rates and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.",
                "AuthorNamesDeduped": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorNames": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorAffiliation": "University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598587;10.1109/tvcg.2018.2865241;10.1109/tvcg.2019.2934547;10.1109/tvcg.2017.2745978;10.1109/tvcg.2010.137;10.1109/tvcg.2012.213;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Microscopy Visualization,Cancer Cell Lines,Exemplars,Design Study",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 628,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 336,
                "i": [
                    336
                ]
            }
        },
        {
            "name": "Robert Judson-Torres",
            "value": 6,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1012,
            "x": -302.86842583274114,
            "y": -97.57415966124113,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Loon: Using Exemplars to Visualize Large-Scale Microscopy Data",
                "DOI": "10.1109/tvcg.2021.3114766",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114766",
                "FirstPage": 248,
                "LastPage": 258,
                "PaperType": "J",
                "Abstract": "Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control for preprocessing steps such as segmentation, adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data such as growth rates and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.",
                "AuthorNamesDeduped": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorNames": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorAffiliation": "University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598587;10.1109/tvcg.2018.2865241;10.1109/tvcg.2019.2934547;10.1109/tvcg.2017.2745978;10.1109/tvcg.2010.137;10.1109/tvcg.2012.213;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Microscopy Visualization,Cancer Cell Lines,Exemplars,Design Study",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 628,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 336,
                "i": [
                    336
                ]
            }
        },
        {
            "name": "Thomas Zangle",
            "value": 6,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1013,
            "x": 289.37894659987177,
            "y": -132.70201680738901,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Loon: Using Exemplars to Visualize Large-Scale Microscopy Data",
                "DOI": "10.1109/tvcg.2021.3114766",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114766",
                "FirstPage": 248,
                "LastPage": 258,
                "PaperType": "J",
                "Abstract": "Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control for preprocessing steps such as segmentation, adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data such as growth rates and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.",
                "AuthorNamesDeduped": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorNames": "Devin Lange;Eddie Polanco;Robert Judson-Torres;Thomas Zangle;Alexander Lex",
                "AuthorAffiliation": "University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA;University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598587;10.1109/tvcg.2018.2865241;10.1109/tvcg.2019.2934547;10.1109/tvcg.2017.2745978;10.1109/tvcg.2010.137;10.1109/tvcg.2012.213;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Microscopy Visualization,Cancer Cell Lines,Exemplars,Design Study",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 628,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 336,
                "i": [
                    336
                ]
            }
        },
        {
            "name": "Douglas Markant",
            "value": 14,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 1014,
            "x": -123.80113569305948,
            "y": 293.4676793125755,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "A Bayesian cognition approach for belief updating of correlation judgement through uncertainty visualizations",
                "DOI": "10.1109/tvcg.2020.3029412",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3029412",
                "FirstPage": 978,
                "LastPage": 988,
                "PaperType": "J",
                "Abstract": "Understanding correlation judgement is important to designing effective visualizations of bivariate data. Prior work on correlation perception has not considered how factors including prior beliefs and uncertainty representation impact such judgements. The present work focuses on the impact of uncertainty communication when judging bivariate visualizations. Specifically, we model how users update their beliefs about variable relationships after seeing a scatterplot with and without uncertainty representation. To model and evaluate the belief updating, we present three studies. Study 1 focuses on a proposed “Line + Cone” visual elicitation method for capturing users' beliefs in an accurate and intuitive fashion. The findings reveal that our proposed method of belief solicitation reduces complexity and accurately captures the users' uncertainty about a range of bivariate relationships. Study 2 leverages the “Line + Cone” elicitation method to measure belief updating on the relationship between different sets of variables when seeing correlation visualization with and without uncertainty representation. We compare changes in users beliefs to the predictions of Bayesian cognitive models which provide normative benchmarks for how users should update their prior beliefs about a relationship in light of observed data. The findings from Study 2 revealed that one of the visualization conditions with uncertainty communication led to users being slightly more confident about their judgement compared to visualization without uncertainty information. Study 3 builds on findings from Study 2 and explores differences in belief update when the bivariate visualization is congruent or incongruent with users' prior belief. Our results highlight the effects of incorporating uncertainty representation, and the potential of measuring belief updating on correlation judgement with Bayesian cognitive models.",
                "AuthorNamesDeduped": "Alireza Karduni;Douglas Markant;Ryan Wesslen;Wenwen Dou",
                "AuthorNames": "Alireza Karduni;Douglas Markant;Ryan Wesslen;Wenwen Dou",
                "AuthorAffiliation": "University of North Carolina, Charlotte;University of North Carolina, Charlotte;University of North Carolina, Charlotte;University of North Carolina, Charlotte",
                "InternalReferences": "0.1109/tvcg.2014.2346979;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2743898;10.1109/tvcg.2018.2864889;10.1109/tvcg.2018.2864909;10.1109/tvcg.2015.2467671;10.1109/tvcg.2017.2745240;10.1109/tvcg.2010.177;10.1109/tvcg.2012.279;10.1109/tvcg.2012.199;10.1109/tvcg.2015.2467758;10.1109/tvcg.2013.153",
                "AuthorKeywords": "Information visualization,Bayesian modeling,uncertainty visualizations,correlations,belief elicitation",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 743,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 400,
                "i": [
                    400
                ]
            }
        },
        {
            "name": "Mohamed Ibrahim",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1015,
            "x": -107.00016098696683,
            "y": -300.1682287464201,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Probabilistic Occlusion Culling using Confidence Maps for High-Quality Rendering of Large Particle Data",
                "DOI": "10.1109/tvcg.2021.3114788",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114788",
                "FirstPage": 573,
                "LastPage": 582,
                "PaperType": "J",
                "Abstract": "Achieving high rendering quality in the visualization of large particle data, for example from large-scale molecular dynamics simulations, requires a significant amount of sub-pixel super-sampling, due to very high numbers of particles per pixel. Although it is impossible to super-sample all particles of large-scale data at interactive rates, efficient occlusion culling can decouple the overall data size from a high effective sampling rate of visible particles. However, while the latter is essential for domain scientists to be able to see important data features, performing occlusion culling by sampling or sorting the data is usually slow or error-prone due to visibility estimates of insufficient quality. We present a novel probabilistic culling architecture for super-sampled high-quality rendering of large particle data. Occlusion is dynamically determined at the sub-pixel level, without explicit visibility sorting or data simplification. We introduce confidence maps to probabilistically estimate confidence in the visibility data gathered so far. This enables progressive, confidence-based culling, helping to avoid wrong visibility decisions. In this way, we determine particle visibility with high accuracy, although only a small part of the data set is sampled. This enables extensive super-sampling of (partially) visible particles for high rendering quality, at a fraction of the cost of sampling all particles. For real-time performance with millions of particles, we exploit novel features of recent GPU architectures to group particles into two hierarchy levels, combining fine-grained culling with high frame rates.",
                "AuthorNamesDeduped": "Mohamed Ibrahim;Peter Rautek;Guido Reina;Marco Agus;Markus Hadwiger",
                "AuthorNames": "Mohamed Ibrahim;Peter Rautek;Guido Reina;Marco Agus;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia;Visualization Research Center (VISUS), University of Stuttgart, Germany;College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar;King Abdullah University of Science and Technology (KAUST), Visual Computing Center, Thuwal, Saudi Arabia",
                "InternalReferences": "0.1109/tvcg.2017.2743979;10.1109/tvcg.2016.2599041;10.1109/scivis.2015.7429492",
                "AuthorKeywords": "Large-scale particle data,sub-pixel occlusion culling,super-sampling,anti-aliasing,coverage,probabilistic methods",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 571,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 342,
                "i": [
                    342
                ]
            }
        },
        {
            "name": "Gregory P. Johnson",
            "value": 51,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1016,
            "x": 281.7979598161206,
            "y": 149.13051278484923,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "OSPRay - A CPU Ray Tracing Framework for Scientific Visualization",
                "DOI": "10.1109/tvcg.2016.2599041",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599041",
                "FirstPage": 931,
                "LastPage": 940,
                "PaperType": "J",
                "Abstract": "Scientific data is continually increasing in complexity, variety and size, making efficient visualization and specifically rendering an ongoing challenge. Traditional rasterization-based visualization approaches encounter performance and quality limitations, particularly in HPC environments without dedicated rendering hardware. In this paper, we present OSPRay, a turn-key CPU ray tracing framework oriented towards production-use scientific visualization which can utilize varying SIMD widths and multiple device backends found across diverse HPC resources. This framework provides a high-quality, efficient CPU-based solution for typical visualization workloads, which has already been integrated into several prevalent visualization packages. We show that this system delivers the performance, high-level API simplicity, and modular device support needed to provide a compelling new rendering framework for implementing efficient scientific visualization workflows.",
                "AuthorNamesDeduped": "Ingo Wald;Gregory P. Johnson;Jefferson Amstutz;Carson Brownlee;Aaron Knoll;Jim Jeffers;Johannes Günther 0001;Paul A. Navrátil",
                "AuthorNames": "I Wald;GP Johnson;J Amstutz;C Brownlee;A Knoll;J Jeffers;J Günther;P Navratil",
                "AuthorAffiliation": "Intel Corp;Intel Corp;Intel Corp;Texas Advanced Computing Center and Intel Corp;Argonne National Laboratory and SCI Insitute, University of Utah;Intel Corp;Intel Corp;Texas Advanced Computing Center",
                "InternalReferences": "0.1109/scivis.2015.7429492;10.1109/tvcg.2010.173;10.1109/tvcg.2015.2467963",
                "AuthorKeywords": null,
                "AminerCitationCount": 190,
                "CitationCountCrossRef": 114,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2017,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 925,
                "i": [
                    925
                ]
            }
        },
        {
            "name": "Gerald Penn",
            "value": 192,
            "numPapers": 6,
            "cluster": "5",
            "visible": 1,
            "index": 1017,
            "x": -308.67698074593096,
            "y": 80.42711954046466,
            "vy": 0,
            "vx": 0,
            "r": 1.221070811744387,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "Bubble Sets: Revealing Set Relations with Isocontours over Existing Visualizations",
                "DOI": "10.1109/tvcg.2009.122",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.122",
                "FirstPage": 1009,
                "LastPage": 1016,
                "PaperType": "J",
                "Abstract": "While many data sets contain multiple relationships, depicting more than one data relationship within a single visualization is challenging. We introduce Bubble Sets as a visualization technique for data that has both a primary data relation with a semantically significant spatial organization and a significant set membership relation in which members of the same set are not necessarily adjacent in the primary layout. In order to maintain the spatial rights of the primary data relation, we avoid layout adjustment techniques that improve set cluster continuity and density. Instead, we use a continuous, possibly concave, isocontour to delineate set membership, without disrupting the primary layout. Optimizations minimize cluster overlap and provide for calculation of the isocontours at interactive speeds. Case studies show how this technique can be used to indicate multiple sets on a variety of common visualizations.",
                "AuthorNamesDeduped": "Christopher Collins 0001;Gerald Penn;Sheelagh Carpendale",
                "AuthorNames": "Christopher Collins;Gerald Penn;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Toronto, Canada;University of Toronto, Canada;University of Calgary, Canada",
                "InternalReferences": "0.1109/tvcg.2006.122;10.1109/infvis.2005.1532150;10.1109/tvcg.2008.130;10.1109/tvcg.2008.144;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70521;10.1109/tvcg.2008.153",
                "AuthorKeywords": "clustering, spatial layout, graph visualization, tree visualization",
                "AminerCitationCount": 402,
                "CitationCountCrossRef": 223,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 2652,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1816,
                "i": [
                    1816
                ]
            }
        },
        {
            "name": "Joseph M. Hellerstein",
            "value": 94,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1018,
            "x": 173.36618982323597,
            "y": -267.944330460963,
            "vy": 0,
            "vx": 0,
            "r": 1.1082325849165227,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Enterprise Data Analysis and Visualization: An Interview Study",
                "DOI": "10.1109/tvcg.2012.219",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.219",
                "FirstPage": 2917,
                "LastPage": 2926,
                "PaperType": "J",
                "Abstract": "Organizations rely on data analysts to model customer engagement, streamline operations, improve production, inform business decisions, and combat fraud. Though numerous analysis and visualization tools have been built to improve the scale and efficiency at which analysts can work, there has been little research on how analysis takes place within the social and organizational context of companies. To better understand the enterprise analysts' ecosystem, we conducted semi-structured interviews with 35 data analysts from 25 organizations across a variety of sectors, including healthcare, retail, marketing and finance. Based on our interview data, we characterize the process of industrial data analysis and document how organizational features of an enterprise impact it. We describe recurring pain points, outstanding challenges, and barriers to adoption for visual analytic tools. Finally, we discuss design implications and opportunities for visual analysis research.",
                "AuthorNamesDeduped": "Sean Kandel;Andreas Paepcke;Joseph M. Hellerstein;Jeffrey Heer",
                "AuthorNames": "Sean Kandel;Andreas Paepcke;Joseph M. Hellerstein;Jeffrey Heer",
                "AuthorAffiliation": "University of Stanford, USA;University of Stanford, USA;University of California, Berkeley, USA;University of Stanford, USA",
                "InternalReferences": "0.1109/tvcg.2008.137;10.1109/vast.2008.4677365;10.1109/vast.2011.6102438;10.1109/infvis.2005.1532136;10.1109/vast.2010.5652880;10.1109/vast.2009.5333878;10.1109/vast.2007.4389011;10.1109/vast.2011.6102435",
                "AuthorKeywords": "Data, analysis, visualization, enterprise",
                "AminerCitationCount": 500,
                "CitationCountCrossRef": 266,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 6546,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1483,
                "i": [
                    1483
                ]
            }
        },
        {
            "name": "Eugene Wu 0002",
            "value": 15,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 1019,
            "x": 53.18505203165204,
            "y": 314.83543358457996,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "At a Glance: Pixel Approximate Entropy as a Measure of Line Chart Complexity",
                "DOI": "10.1109/tvcg.2018.2865264",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865264",
                "FirstPage": 872,
                "LastPage": 881,
                "PaperType": "J",
                "Abstract": "When inspecting information visualizations under time critical settings, such as emergency response or monitoring the heart rate in a surgery room, the user only has a small amount of time to view the visualization “at a glance”. In these settings, it is important to provide a quantitative measure of the visualization to understand whether or not the visualization is too “complex” to accurately judge at a glance. This paper proposes Pixel Approximate Entropy (PAE), which adapts the approximate entropy statistical measure commonly used to quantify regularity and unpredictability in time-series data, as a measure of visual complexity for line charts. We show that PAE is correlated with user-perceived chart complexity, and that increased chart PAE correlates with reduced judgement accuracy. `We also find that the correlation between PAE values and participants' judgment increases when the user has less time to examine the line charts.",
                "AuthorNamesDeduped": "Gabriel Ryan;Abigail Mosca;Remco Chang;Eugene Wu 0002",
                "AuthorNames": "Gabriel Ryan;Abigail Mosca;Remco Chang;Eugene Wu",
                "AuthorAffiliation": "Columbia University, New York, NY, US;Tufts University, Medford, MA, US;Tufts University, Medford, MA, US;Columbia University, New York, NY, US",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/tvcg.2013.133;10.1109/tvcg.2010.131;10.1109/tvcg.2010.184;10.1109/vast.2010.5653598;10.1109/tvcg.2007.70594;10.1109/infvis.2004.15;10.1109/vast.2006.261423;10.1109/tvcg.2008.140;10.1109/tvcg.2010.161;10.1109/infvis.2005.1532142;10.1109/tvcg.2014.2346979;10.1109/tvcg.2013.234;10.1109/tvcg.2010.132",
                "AuthorKeywords": "Visualization,Graphical Perception,Entropy,At-a-glance",
                "AminerCitationCount": 25,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 69,
                "DownloadsXplore": 776,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 683,
                "i": [
                    683
                ]
            }
        },
        {
            "name": "Michael Oppermann",
            "value": 25,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 1020,
            "x": -252.00878527835647,
            "y": -196.32007575010567,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "VizSnippets: Compressing Visualization Bundles Into Representative Previews for Browsing Visualization Collections",
                "DOI": "10.1109/tvcg.2021.3114841",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114841",
                "FirstPage": 747,
                "LastPage": 757,
                "PaperType": "J",
                "Abstract": "Visualization collections, accessed by platforms such as Tableau Online or Power Bl, are used by millions of people to share and access diverse analytical knowledge in the form of interactive visualization bundles. Result snippets, compact previews of these bundles, are presented to users to help them identify relevant content when browsing collections. Our engagement with Tableau product teams and review of existing snippet designs on five platforms showed us that current practices fail to help people judge the relevance of bundles because they include only the title and one image. Users frequently need to undertake the time-consuming endeavour of opening a bundle within its visualization system to examine its many views and dashboards. In response, we contribute the first systematic approach to visualization snippet design. We propose a framework for snippet design that addresses eight key challenges that we identify. We present a computational pipeline to compress the visual and textual content of bundles into representative previews that is adaptive to a provided pixel budget and provides high information density with multiple images and carefully chosen keywords. We also reflect on the method of visual inspection through random sampling to gain confidence in model and parameter choices.",
                "AuthorNamesDeduped": "Michael Oppermann;Tamara Munzner",
                "AuthorNames": "Michael Oppermann;Tamara Munzner",
                "AuthorAffiliation": "University of British Columbia, Canada;University of British Columbia, Canada",
                "InternalReferences": "0.1109/tvcg.2019.2934397;10.1109/tvcg.2008.137;10.1109/vast.2017.8585720;10.1109/tvcg.2019.2934267;10.1109/tvcg.2014.2346578;10.1109/tvcg.2020.3030387;10.1109/tvcg.2020.3030405;10.1109/tvcg.2018.2864903;10.1109/tvcg.2014.2346321;10.1109/tvcg.2017.2744158;10.1109/tvcg.2009.139;10.1109/tvcg.2019.2934619;10.1109/tvcg.2020.3030423;10.1109/tvcg.2018.2864499",
                "AuthorKeywords": "visualization collections,visualization bundles,result snippets,visual inspection",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 709,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 351,
                "i": [
                    351
                ]
            }
        },
        {
            "name": "Robert Kincaid",
            "value": 142,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1021,
            "x": 318.59172227318993,
            "y": -25.481650240959386,
            "vy": 0,
            "vx": 0,
            "r": 1.1635002878526195,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "SignalLens: Focus+Context Applied to Electronic Time Series",
                "DOI": "10.1109/tvcg.2010.193",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.193",
                "FirstPage": 900,
                "LastPage": 907,
                "PaperType": "J",
                "Abstract": "Electronic test and measurement systems are becoming increasingly sophisticated in order to match the increased complexity and ultra-high speed of the devices under test. A key feature in many such instruments is a vastly increased capacity for storage of digital signals. Storage of 109 time points or more is now possible. At the same time, the typical screens on such measurement devices are relatively small. Therefore, these instruments can only render an extremely small fraction of the complete signal at any time. SignalLens uses a Focus+Context approach to provide a means of navigating to and inspecting low-level signal details in the context of the entire signal trace. This approach provides a compact visualization suitable for embedding into the small displays typically provided by electronic measurement instruments. We further augment this display with computed tracks which display time-aligned computed properties of the signal. By combining and filtering these computed tracks it is possible to easily and quickly find computationally detected features in the data which are often obscured by the visual compression required to render the large data sets on a small screen. Further, these tracks can be viewed in the context of the entire signal trace as well as visible high-level signal features. Several examples using real-world electronic measurement data are presented, which demonstrate typical use cases and the effectiveness of the design.",
                "AuthorNamesDeduped": "Robert Kincaid",
                "AuthorNames": "Robert Kincaid",
                "AuthorAffiliation": "Agilent Laboratories, USA",
                "InternalReferences": "0.1109/vast.2009.5333895",
                "AuthorKeywords": "Focus+Context, Lens, Test and Measurement, Electronic Signal, Signal Processing ",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 1071,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1703,
                "i": [
                    1703
                ]
            }
        },
        {
            "name": "Aniketh Venkat",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1022,
            "x": -217.81354974574012,
            "y": 234.10949905366928,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards replacing physical testing of granular materials with a Topology-based Model",
                "DOI": "10.1109/tvcg.2021.3114819",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114819",
                "FirstPage": 76,
                "LastPage": 85,
                "PaperType": "J",
                "Abstract": "In the study of packed granular materials, the performance of a sample (e.g., the detonation of a high-energy explosive) often correlates to measurements of a fluid flowing through it. The “effective surface area,” the surface area accessible to the airflow, is typically measured using a permeametry apparatus that relates the flow conductance to the permeable surface area via the Carman-Kozeny equation. This equation allows calculating the flow rate of a fluid flowing through the granules packed in the sample for a given pressure drop. However, Carman-Kozeny makes inherent assumptions about tunnel shapes and flow paths that may not accurately hold in situations where the particles possess a wide distribution in shapes, sizes, and aspect ratios, as is true with many powdered systems of technological and commercial interest. To address this challenge, we replicate these measurements virtually on micro-CT images of the powdered material, introducing a new Pore Network Model based on the skeleton of the Morse-Smale complex. Pores are identified as basins of the complex, their incidence encodes adjacency, and the conductivity of the capillary between them is computed from the cross-section at their interface. We build and solve a resistive network to compute an approximate laminar fluid flow through the pore structure. We provide two means of estimating flow-permeable surface area: (i) by direct computation of conductivity, and (ii) by identifying dead-ends in the flow coupled with isosurface extraction and the application of the Carman-Kozeny equation, with the aim of establishing consistency over a range of particle shapes, sizes, porosity levels, and void distribution patterns.",
                "AuthorNamesDeduped": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorNames": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah, United States;SCI Institute, University of Utah, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;SCI Institute, University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.200;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Physical and Environmental Sciences,Computational Topology-based Techniques,Data Abstractions and Types,Scalar Field Data,Pore Network Model,Morse-Smale Complex",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 355,
                "i": [
                    355
                ]
            }
        },
        {
            "name": "Graham Kosiba",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1023,
            "x": 2.4714460464351617,
            "y": -319.91231916642346,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards replacing physical testing of granular materials with a Topology-based Model",
                "DOI": "10.1109/tvcg.2021.3114819",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114819",
                "FirstPage": 76,
                "LastPage": 85,
                "PaperType": "J",
                "Abstract": "In the study of packed granular materials, the performance of a sample (e.g., the detonation of a high-energy explosive) often correlates to measurements of a fluid flowing through it. The “effective surface area,” the surface area accessible to the airflow, is typically measured using a permeametry apparatus that relates the flow conductance to the permeable surface area via the Carman-Kozeny equation. This equation allows calculating the flow rate of a fluid flowing through the granules packed in the sample for a given pressure drop. However, Carman-Kozeny makes inherent assumptions about tunnel shapes and flow paths that may not accurately hold in situations where the particles possess a wide distribution in shapes, sizes, and aspect ratios, as is true with many powdered systems of technological and commercial interest. To address this challenge, we replicate these measurements virtually on micro-CT images of the powdered material, introducing a new Pore Network Model based on the skeleton of the Morse-Smale complex. Pores are identified as basins of the complex, their incidence encodes adjacency, and the conductivity of the capillary between them is computed from the cross-section at their interface. We build and solve a resistive network to compute an approximate laminar fluid flow through the pore structure. We provide two means of estimating flow-permeable surface area: (i) by direct computation of conductivity, and (ii) by identifying dead-ends in the flow coupled with isosurface extraction and the application of the Carman-Kozeny equation, with the aim of establishing consistency over a range of particle shapes, sizes, porosity levels, and void distribution patterns.",
                "AuthorNamesDeduped": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorNames": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah, United States;SCI Institute, University of Utah, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;SCI Institute, University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.200;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Physical and Environmental Sciences,Computational Topology-based Techniques,Data Abstractions and Types,Scalar Field Data,Pore Network Model,Morse-Smale Complex",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 355,
                "i": [
                    355
                ]
            }
        },
        {
            "name": "Amitesh Maiti",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1024,
            "x": 214.3799513801495,
            "y": 237.67885149134483,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards replacing physical testing of granular materials with a Topology-based Model",
                "DOI": "10.1109/tvcg.2021.3114819",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114819",
                "FirstPage": 76,
                "LastPage": 85,
                "PaperType": "J",
                "Abstract": "In the study of packed granular materials, the performance of a sample (e.g., the detonation of a high-energy explosive) often correlates to measurements of a fluid flowing through it. The “effective surface area,” the surface area accessible to the airflow, is typically measured using a permeametry apparatus that relates the flow conductance to the permeable surface area via the Carman-Kozeny equation. This equation allows calculating the flow rate of a fluid flowing through the granules packed in the sample for a given pressure drop. However, Carman-Kozeny makes inherent assumptions about tunnel shapes and flow paths that may not accurately hold in situations where the particles possess a wide distribution in shapes, sizes, and aspect ratios, as is true with many powdered systems of technological and commercial interest. To address this challenge, we replicate these measurements virtually on micro-CT images of the powdered material, introducing a new Pore Network Model based on the skeleton of the Morse-Smale complex. Pores are identified as basins of the complex, their incidence encodes adjacency, and the conductivity of the capillary between them is computed from the cross-section at their interface. We build and solve a resistive network to compute an approximate laminar fluid flow through the pore structure. We provide two means of estimating flow-permeable surface area: (i) by direct computation of conductivity, and (ii) by identifying dead-ends in the flow coupled with isosurface extraction and the application of the Carman-Kozeny equation, with the aim of establishing consistency over a range of particle shapes, sizes, porosity levels, and void distribution patterns.",
                "AuthorNamesDeduped": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorNames": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah, United States;SCI Institute, University of Utah, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;SCI Institute, University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.200;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Physical and Environmental Sciences,Computational Topology-based Techniques,Data Abstractions and Types,Scalar Field Data,Pore Network Model,Morse-Smale Complex",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 355,
                "i": [
                    355
                ]
            }
        },
        {
            "name": "Henry Reinstein",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1025,
            "x": -318.7823272303473,
            "y": -30.46026666993926,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards replacing physical testing of granular materials with a Topology-based Model",
                "DOI": "10.1109/tvcg.2021.3114819",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114819",
                "FirstPage": 76,
                "LastPage": 85,
                "PaperType": "J",
                "Abstract": "In the study of packed granular materials, the performance of a sample (e.g., the detonation of a high-energy explosive) often correlates to measurements of a fluid flowing through it. The “effective surface area,” the surface area accessible to the airflow, is typically measured using a permeametry apparatus that relates the flow conductance to the permeable surface area via the Carman-Kozeny equation. This equation allows calculating the flow rate of a fluid flowing through the granules packed in the sample for a given pressure drop. However, Carman-Kozeny makes inherent assumptions about tunnel shapes and flow paths that may not accurately hold in situations where the particles possess a wide distribution in shapes, sizes, and aspect ratios, as is true with many powdered systems of technological and commercial interest. To address this challenge, we replicate these measurements virtually on micro-CT images of the powdered material, introducing a new Pore Network Model based on the skeleton of the Morse-Smale complex. Pores are identified as basins of the complex, their incidence encodes adjacency, and the conductivity of the capillary between them is computed from the cross-section at their interface. We build and solve a resistive network to compute an approximate laminar fluid flow through the pore structure. We provide two means of estimating flow-permeable surface area: (i) by direct computation of conductivity, and (ii) by identifying dead-ends in the flow coupled with isosurface extraction and the application of the Carman-Kozeny equation, with the aim of establishing consistency over a range of particle shapes, sizes, porosity levels, and void distribution patterns.",
                "AuthorNamesDeduped": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorNames": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah, United States;SCI Institute, University of Utah, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;SCI Institute, University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.200;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Physical and Environmental Sciences,Computational Topology-based Techniques,Data Abstractions and Types,Scalar Field Data,Pore Network Model,Morse-Smale Complex",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 355,
                "i": [
                    355
                ]
            }
        },
        {
            "name": "Richard Gee",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1026,
            "x": 255.76039068834154,
            "y": -192.96793141593994,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "Towards replacing physical testing of granular materials with a Topology-based Model",
                "DOI": "10.1109/tvcg.2021.3114819",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114819",
                "FirstPage": 76,
                "LastPage": 85,
                "PaperType": "J",
                "Abstract": "In the study of packed granular materials, the performance of a sample (e.g., the detonation of a high-energy explosive) often correlates to measurements of a fluid flowing through it. The “effective surface area,” the surface area accessible to the airflow, is typically measured using a permeametry apparatus that relates the flow conductance to the permeable surface area via the Carman-Kozeny equation. This equation allows calculating the flow rate of a fluid flowing through the granules packed in the sample for a given pressure drop. However, Carman-Kozeny makes inherent assumptions about tunnel shapes and flow paths that may not accurately hold in situations where the particles possess a wide distribution in shapes, sizes, and aspect ratios, as is true with many powdered systems of technological and commercial interest. To address this challenge, we replicate these measurements virtually on micro-CT images of the powdered material, introducing a new Pore Network Model based on the skeleton of the Morse-Smale complex. Pores are identified as basins of the complex, their incidence encodes adjacency, and the conductivity of the capillary between them is computed from the cross-section at their interface. We build and solve a resistive network to compute an approximate laminar fluid flow through the pore structure. We provide two means of estimating flow-permeable surface area: (i) by direct computation of conductivity, and (ii) by identifying dead-ends in the flow coupled with isosurface extraction and the application of the Carman-Kozeny equation, with the aim of establishing consistency over a range of particle shapes, sizes, porosity levels, and void distribution patterns.",
                "AuthorNamesDeduped": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorNames": "Aniketh Venkat;Attila Gyulassy;Graham Kosiba;Amitesh Maiti;Henry Reinstein;Richard Gee;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah, United States;SCI Institute, University of Utah, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;Lawrence Livermore National Laboratory, United States;SCI Institute, University of Utah, United States",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.200;10.1109/tvcg.2017.2743938",
                "AuthorKeywords": "Physical and Environmental Sciences,Computational Topology-based Techniques,Data Abstractions and Types,Scalar Field Data,Pore Network Model,Morse-Smale Complex",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 392,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 355,
                "i": [
                    355
                ]
            }
        },
        {
            "name": "Brian Summa",
            "value": 39,
            "numPapers": 36,
            "cluster": "11",
            "visible": 1,
            "index": 1027,
            "x": -58.270149770029334,
            "y": 315.2056307329841,
            "vy": 0,
            "vx": 0,
            "r": 1.0449050086355787,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Efficient and Flexible Hierarchical Data Layouts for a Unified Encoding of Scalar Field Precision and Resolution",
                "DOI": "10.1109/tvcg.2020.3030381",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030381",
                "FirstPage": 603,
                "LastPage": 613,
                "PaperType": "J",
                "Abstract": "To address the problem of ever-growing scientific data sizes making data movement a major hindrance to analysis, we introduce a novel encoding for scalar fields: a unified tree of resolution and precision, specifically constructed so that valid cuts correspond to sensible approximations of the original field in the precision-resolution space. Furthermore, we introduce a highly flexible encoding of such trees that forms a parameterized family of data hierarchies. We discuss how different parameter choices lead to different trade-offs in practice, and show how specific choices result in known data representation schemes such as zfp [52], idx [58], and jpeg2000 [76]. Finally, we provide system-level details and empirical evidence on how such hierarchies facilitate common approximate queries with minimal data movement and time, using real-world data sets ranging from a few gigabytes to nearly a terabyte in size. Experiments suggest that our new strategy of combining reductions in resolution and precision is competitive with state-of-the-art compression techniques with respect to data quality, while being significantly more flexible and orders of magnitude faster, and requiring significantly reduced resources.",
                "AuthorNamesDeduped": "Duong Hoang;Brian Summa;Harsh Bhatia;Peter Lindstrom 0001;Pavol Klacansky;Will Usher 0001;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorNames": "Duong Hoang;Brian Summa;Harsh Bhatia;Peter Lindstrom;Pavol Klacansky;Will Usher;Peer-Timo Bremer;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;Tulane Universiy;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory, Center for Applied Scientific Computing;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/visual.2005.1532797;10.1109/visual.1997.663865;10.1109/tvcg.2007.70516;10.1109/visual.2002.1183757;10.1109/tvcg.2012.240;10.1109/tvcg.2018.2864853;10.1109/visual.1999.809908;10.1109/tvcg.2014.2346458;10.1109/tvcg.2006.143;10.1109/visual.2004.51;10.1109/visual.2003.1250385;10.1109/tvcg.2011.214;10.1109/tvcg.2012.274",
                "AuthorKeywords": "scalar field,large-scale data,data compression,multiresolution,wavelet transform,coarse approximation",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 616,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 432,
                "i": [
                    432
                ]
            }
        },
        {
            "name": "Paula Kayongo",
            "value": 26,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1028,
            "x": -170.03443076424207,
            "y": -271.91596561195183,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Bayesian-Assisted Inference from Visualized Data",
                "DOI": "10.1109/tvcg.2020.3028984",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028984",
                "FirstPage": 989,
                "LastPage": 999,
                "PaperType": "J",
                "Abstract": "A Bayesian view of data interpretation suggests that a visualization user should update their existing beliefs about a parameter's value in accordance with the amount of information about the parameter value captured by the new observations. Extending recent work applying Bayesian models to understand and evaluate belief updating from visualizations, we show how the predictions of Bayesian inference can be used to guide more rational belief updating. We design a Bayesian inference-assisted uncertainty analogy that numerically relates uncertainty in observed data to the user's subjective uncertainty, and a posterior visualization that prescribes how a user should update their beliefs given their prior beliefs and the observed data. In a pre-registered experiment on 4,800 people, we find that when a newly observed data sample is relatively small (N=158), both techniques reliably improve people's Bayesian updating on average compared to the current best practice of visualizing uncertainty in the observed data. For large data samples (N=5208), where people's updated beliefs tend to deviate more strongly from the prescriptions of a Bayesian model, we find evidence that the effectiveness of the two forms of Bayesian assistance may depend on people's proclivity toward trusting the source of the data. We discuss how our results provide insight into individual processes of belief updating and subjective uncertainty, and how understanding these aspects of interpretation paves the way for more sophisticated interactive visualizations for analysis and communication.",
                "AuthorNamesDeduped": "Yea-Seul Kim;Paula Kayongo;Madeleine Grunde-McLaughlin;Jessica Hullman",
                "AuthorNames": "Yea-Seul Kim;Paula Kayongo;Madeleine Grunde-McLaughlin;Jessica Hullman",
                "AuthorAffiliation": "University of Washington;Northwestern University;University of Pennsylvania;University of Washington",
                "InternalReferences": "0.1109/tvcg.2014.2346298;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934287;10.1109/tvcg.2017.2743898;10.1109/tvcg.2018.2864909;10.1109/tvcg.2018.2864913;10.1109/tvcg.2012.199;10.1109/tvcg.2015.2467758",
                "AuthorKeywords": "Bayesian cognition,Belief updating,Uncertainty visualization,Adaptive visualization",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 586,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 394,
                "i": [
                    394
                ]
            }
        },
        {
            "name": "Patrick Reipschläger",
            "value": 10,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 1029,
            "x": 309.2049019241267,
            "y": 85.68738895596694,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Personal Augmented Reality for Information Visualization on Large Interactive Displays",
                "DOI": "10.1109/tvcg.2020.3030460",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030460",
                "FirstPage": 1182,
                "LastPage": 1192,
                "PaperType": "J",
                "Abstract": "In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we have identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. In order to demonstrate their potential, we further present a use case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has a great potential for data analysis and sense-making.",
                "AuthorNamesDeduped": "Patrick Reipschläger;Tamara Flemisch;Raimund Dachselt",
                "AuthorNames": "Patrick Reipschlager;Tamara Flemisch;Raimund Dachselt",
                "AuthorAffiliation": "Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden;Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden;Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2012.251;10.1109/tvcg.2008.153;10.1109/tvcg.2019.2934415;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.197;10.1109/tvcg.2013.163;10.1109/tvcg.2013.166;10.1109/tvcg.2018.2865235;10.1109/tvcg.2012.204;10.1109/tvcg.2017.2744184;10.1109/tvcg.2009.162;10.1109/tvcg.2017.2745958;10.1109/tvcg.2012.275;10.1109/tvcg.2017.2745258;10.1109/tvcg.2016.2598608;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Augmented Reality,Information Visualization,InfoVis,Large Displays,Immersive Analytics,Physical Navigation,Multiple Coordinated Views",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 78,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 5904,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 361,
                "i": [
                    361
                ]
            }
        },
        {
            "name": "Tamara Flemisch",
            "value": 10,
            "numPapers": 24,
            "cluster": "5",
            "visible": 1,
            "index": 1030,
            "x": -286.01788115655035,
            "y": 145.75243277118037,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Personal Augmented Reality for Information Visualization on Large Interactive Displays",
                "DOI": "10.1109/tvcg.2020.3030460",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030460",
                "FirstPage": 1182,
                "LastPage": 1192,
                "PaperType": "J",
                "Abstract": "In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we have identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. In order to demonstrate their potential, we further present a use case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has a great potential for data analysis and sense-making.",
                "AuthorNamesDeduped": "Patrick Reipschläger;Tamara Flemisch;Raimund Dachselt",
                "AuthorNames": "Patrick Reipschlager;Tamara Flemisch;Raimund Dachselt",
                "AuthorAffiliation": "Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden;Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden;Interactive Media Lab, Technische Universitat Dresden, Germany and Centre for Tactile Internet (CeTi), Cluster of Excellence Physics of Life, Dresden, TU Dresden",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2017.2745941;10.1109/tvcg.2019.2934803;10.1109/tvcg.2012.251;10.1109/tvcg.2008.153;10.1109/tvcg.2019.2934415;10.1109/tvcg.2017.2744199;10.1109/tvcg.2013.197;10.1109/tvcg.2013.163;10.1109/tvcg.2013.166;10.1109/tvcg.2018.2865235;10.1109/tvcg.2012.204;10.1109/tvcg.2017.2744184;10.1109/tvcg.2009.162;10.1109/tvcg.2017.2745958;10.1109/tvcg.2012.275;10.1109/tvcg.2017.2745258;10.1109/tvcg.2016.2598608;10.1109/tvcg.2018.2865192",
                "AuthorKeywords": "Augmented Reality,Information Visualization,InfoVis,Large Displays,Immersive Analytics,Physical Navigation,Multiple Coordinated Views",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 78,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 5904,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 361,
                "i": [
                    361
                ]
            }
        },
        {
            "name": "Bireswar Laha",
            "value": 61,
            "numPapers": 12,
            "cluster": "2",
            "visible": 1,
            "index": 1031,
            "x": 112.5008763194408,
            "y": -300.8214633754677,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Immersive Collaborative Analysis of Network Connectivity: CAVE-style or Head-Mounted Display?",
                "DOI": "10.1109/tvcg.2016.2599107",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599107",
                "FirstPage": 441,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "High-quality immersive display technologies are becoming mainstream with the release of head-mounted displays (HMDs) such as the Oculus Rift. These devices potentially represent an affordable alternative to the more traditional, centralised CAVE-style immersive environments. One driver for the development of CAVE-style immersive environments has been collaborative sense-making. Despite this, there has been little research on the effectiveness of collaborative visualisation in CAVE-style facilities, especially with respect to abstract data visualisation tasks. Indeed, very few studies have focused on the use of these displays to explore and analyse abstract data such as networks and there have been no formal user studies investigating collaborative visualisation of abstract data in immersive environments. In this paper we present the results of the first such study. It explores the relative merits of HMD and CAVE-style immersive environments for collaborative analysis of network connectivity, a common and important task involving abstract data. We find significant differences between the two conditions in task completion time and the physical movements of the participants within the space: participants using the HMD were faster while the CAVE2 condition introduced an asymmetry in movement between collaborators. Otherwise, affordances for collaborative data analysis offered by the low-cost HMD condition were not found to be different for accuracy and communication with the CAVE2. These results are notable, given that the latest HMDs will soon be accessible (in terms of cost and potentially ubiquity) to a massive audience.",
                "AuthorNamesDeduped": "Maxime Cordeil;Tim Dwyer;Karsten Klein 0001;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorNames": "Maxime Cordeil;Tim Dwyer;Karsten Klein;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Stanford University, USA;Monash University;University of South Australia",
                "InternalReferences": "0.1109/visual.2001.964545;10.1109/tvcg.2014.2346573;10.1109/vast.2007.4389011;10.1109/tvcg.2006.156;10.1109/tvcg.2011.234;10.1109/tvcg.2016.2598446",
                "AuthorKeywords": "3D Network;Oculus Rift;CAVE;Immersive Analytics;Collaboration",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 132,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 3680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 890,
                "i": [
                    890
                ]
            }
        },
        {
            "name": "Bruce H. Thomas",
            "value": 85,
            "numPapers": 11,
            "cluster": "2",
            "visible": 1,
            "index": 1032,
            "x": 120.30560734842227,
            "y": 297.95395758493834,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Immersive Collaborative Analysis of Network Connectivity: CAVE-style or Head-Mounted Display?",
                "DOI": "10.1109/tvcg.2016.2599107",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599107",
                "FirstPage": 441,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "High-quality immersive display technologies are becoming mainstream with the release of head-mounted displays (HMDs) such as the Oculus Rift. These devices potentially represent an affordable alternative to the more traditional, centralised CAVE-style immersive environments. One driver for the development of CAVE-style immersive environments has been collaborative sense-making. Despite this, there has been little research on the effectiveness of collaborative visualisation in CAVE-style facilities, especially with respect to abstract data visualisation tasks. Indeed, very few studies have focused on the use of these displays to explore and analyse abstract data such as networks and there have been no formal user studies investigating collaborative visualisation of abstract data in immersive environments. In this paper we present the results of the first such study. It explores the relative merits of HMD and CAVE-style immersive environments for collaborative analysis of network connectivity, a common and important task involving abstract data. We find significant differences between the two conditions in task completion time and the physical movements of the participants within the space: participants using the HMD were faster while the CAVE2 condition introduced an asymmetry in movement between collaborators. Otherwise, affordances for collaborative data analysis offered by the low-cost HMD condition were not found to be different for accuracy and communication with the CAVE2. These results are notable, given that the latest HMDs will soon be accessible (in terms of cost and potentially ubiquity) to a massive audience.",
                "AuthorNamesDeduped": "Maxime Cordeil;Tim Dwyer;Karsten Klein 0001;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorNames": "Maxime Cordeil;Tim Dwyer;Karsten Klein;Bireswar Laha;Kim Marriott;Bruce H. Thomas",
                "AuthorAffiliation": "Monash University;Monash University;Monash University;Stanford University, USA;Monash University;University of South Australia",
                "InternalReferences": "0.1109/visual.2001.964545;10.1109/tvcg.2014.2346573;10.1109/vast.2007.4389011;10.1109/tvcg.2006.156;10.1109/tvcg.2011.234;10.1109/tvcg.2016.2598446",
                "AuthorKeywords": "3D Network;Oculus Rift;CAVE;Immersive Analytics;Collaboration",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 132,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 3680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 890,
                "i": [
                    890
                ]
            }
        },
        {
            "name": "Younghoon Kim",
            "value": 27,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1033,
            "x": -290.11500676150916,
            "y": -138.50372865655828,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Gemini: A Grammar and Recommender System for Animated Transitions in Statistical Graphics",
                "DOI": "10.1109/tvcg.2020.3030360",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030360",
                "FirstPage": 485,
                "LastPage": 494,
                "PaperType": "J",
                "Abstract": "Animated transitions help viewers follow changes between related visualizations. Specifying effective animations demands significant effort: authors must select the elements and properties to animate, provide transition parameters, and coordinate the timing of stages. To facilitate this process, we present Gemini, a declarative grammar and recommendation system for animated transitions between single-view statistical graphics. Gemini specifications define transition “steps” in terms of high-level visual components (marks, axes, legends) and composition rules to synchronize and concatenate steps. With this grammar, Gemini can recommend animation designs to augment and accelerate designers' work. Gemini enumerates staged animation designs for given start and end states, and ranks those designs using a cost function informed by prior perceptual studies. To evaluate Gemini, we conduct both a formative study on Mechanical Turk to assess and tune our ranking function, and a summative study in which 8 experienced visualization developers implement animations in D3 that we then compare to Gemini's suggestions. We find that most designs (9/11) are exactly replicable in Gemini, with many (8/11) achievable via edits to suggestions, and that Gemini suggestions avoid multiple participant errors.",
                "AuthorNamesDeduped": "Younghoon Kim;Jeffrey Heer",
                "AuthorNames": "Younghoon Kim;Jeffrey Heer",
                "AuthorAffiliation": "University of Washington;University of Washington",
                "InternalReferences": "0.1109/tvcg.2015.2467191;10.1109/infvis.2000.885086;10.1109/tvcg.2015.2467091;10.1109/tvcg.2016.2599030;10.1109/tvcg.2008.125;10.1109/tvcg.2018.2864884;10.1109/tvcg.2018.2865240;10.1109/tvcg.2007.70594;10.1109/tvcg.2018.2864909;10.1109/tvcg.2011.175;10.1109/tvcg.2007.70539;10.1109/tvcg.2014.2346424;10.1109/tvcg.2011.185;10.1109/tvcg.2009.174;10.1109/infvis.1999.801854;10.1109/tvcg.2016.2598647",
                "AuthorKeywords": "Animated transition,animation,transition,declarative grammar,automated design,charts",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 697,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 370,
                "i": [
                    370
                ]
            }
        },
        {
            "name": "Mai Elshehaly",
            "value": 22,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1034,
            "x": 307.62842201740847,
            "y": -93.88692117158412,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "QualDash: Adaptable Generation of Visualisation Dashboards for Healthcare Quality Improvement",
                "DOI": "10.1109/tvcg.2020.3030424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030424",
                "FirstPage": 689,
                "LastPage": 699,
                "PaperType": "J",
                "Abstract": "Adapting dashboard design to different contexts of use is an open question in visualisation research. Dashboard designers often seek to strike a balance between dashboard adaptability and ease-of-use, and in hospitals challenges arise from the vast diversity of key metrics, data models and users involved at different organizational levels. In this design study, we present QualDash, a dashboard generation engine that allows for the dynamic configuration and deployment of visualisation dashboards for healthcare quality improvement (QI). We present a rigorous task analysis based on interviews with healthcare professionals, a co-design workshop and a series of one-on-one meetings with front line analysts. From these activities we define a metric card metaphor as a unit of visual analysis in healthcare QI, using this concept as a building block for generating highly adaptable dashboards, and leading to the design of a Metric Specification Structure (MSS). Each MSS is a JSON structure which enables dashboard authors to concisely configure unit-specific variants of a metric card, while offloading common patterns that are shared across cards to be preset by the engine. We reflect on deploying and iterating the design of OualDash in cardiology wards and pediatric intensive care units of five NHS hospitals. Finally, we report evaluation results that demonstrate the adaptability, ease-of-use and usefulness of QualDash in a real-world scenario.",
                "AuthorNamesDeduped": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorNames": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorAffiliation": "University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;Tableau, Seattle, Washington, United States;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Leeds, UK;University of Leeds, UK",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934264;10.1109/tvcg.2011.209;10.1109/tvcg.2015.2467325;10.1109/tvcg.2007.70594;10.1109/tvcg.2013.200;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2013.120;10.1109/tvcg.2012.213;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Information visualisation,task analysis,co-design,dashboards,design study,healthcare",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1793,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 373,
                "i": [
                    373
                ]
            }
        },
        {
            "name": "Rebecca Randell",
            "value": 22,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1035,
            "x": -163.49488436558966,
            "y": 277.163169967228,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "QualDash: Adaptable Generation of Visualisation Dashboards for Healthcare Quality Improvement",
                "DOI": "10.1109/tvcg.2020.3030424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030424",
                "FirstPage": 689,
                "LastPage": 699,
                "PaperType": "J",
                "Abstract": "Adapting dashboard design to different contexts of use is an open question in visualisation research. Dashboard designers often seek to strike a balance between dashboard adaptability and ease-of-use, and in hospitals challenges arise from the vast diversity of key metrics, data models and users involved at different organizational levels. In this design study, we present QualDash, a dashboard generation engine that allows for the dynamic configuration and deployment of visualisation dashboards for healthcare quality improvement (QI). We present a rigorous task analysis based on interviews with healthcare professionals, a co-design workshop and a series of one-on-one meetings with front line analysts. From these activities we define a metric card metaphor as a unit of visual analysis in healthcare QI, using this concept as a building block for generating highly adaptable dashboards, and leading to the design of a Metric Specification Structure (MSS). Each MSS is a JSON structure which enables dashboard authors to concisely configure unit-specific variants of a metric card, while offloading common patterns that are shared across cards to be preset by the engine. We reflect on deploying and iterating the design of OualDash in cardiology wards and pediatric intensive care units of five NHS hospitals. Finally, we report evaluation results that demonstrate the adaptability, ease-of-use and usefulness of QualDash in a real-world scenario.",
                "AuthorNamesDeduped": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorNames": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorAffiliation": "University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;Tableau, Seattle, Washington, United States;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Leeds, UK;University of Leeds, UK",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934264;10.1109/tvcg.2011.209;10.1109/tvcg.2015.2467325;10.1109/tvcg.2007.70594;10.1109/tvcg.2013.200;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2013.120;10.1109/tvcg.2012.213;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Information visualisation,task analysis,co-design,dashboards,design study,healthcare",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1793,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 373,
                "i": [
                    373
                ]
            }
        },
        {
            "name": "Lynn McVey",
            "value": 22,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1036,
            "x": -66.69717377505175,
            "y": -314.96267558302924,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "QualDash: Adaptable Generation of Visualisation Dashboards for Healthcare Quality Improvement",
                "DOI": "10.1109/tvcg.2020.3030424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030424",
                "FirstPage": 689,
                "LastPage": 699,
                "PaperType": "J",
                "Abstract": "Adapting dashboard design to different contexts of use is an open question in visualisation research. Dashboard designers often seek to strike a balance between dashboard adaptability and ease-of-use, and in hospitals challenges arise from the vast diversity of key metrics, data models and users involved at different organizational levels. In this design study, we present QualDash, a dashboard generation engine that allows for the dynamic configuration and deployment of visualisation dashboards for healthcare quality improvement (QI). We present a rigorous task analysis based on interviews with healthcare professionals, a co-design workshop and a series of one-on-one meetings with front line analysts. From these activities we define a metric card metaphor as a unit of visual analysis in healthcare QI, using this concept as a building block for generating highly adaptable dashboards, and leading to the design of a Metric Specification Structure (MSS). Each MSS is a JSON structure which enables dashboard authors to concisely configure unit-specific variants of a metric card, while offloading common patterns that are shared across cards to be preset by the engine. We reflect on deploying and iterating the design of OualDash in cardiology wards and pediatric intensive care units of five NHS hospitals. Finally, we report evaluation results that demonstrate the adaptability, ease-of-use and usefulness of QualDash in a real-world scenario.",
                "AuthorNamesDeduped": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorNames": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorAffiliation": "University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;Tableau, Seattle, Washington, United States;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Leeds, UK;University of Leeds, UK",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934264;10.1109/tvcg.2011.209;10.1109/tvcg.2015.2467325;10.1109/tvcg.2007.70594;10.1109/tvcg.2013.200;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2013.120;10.1109/tvcg.2012.213;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Information visualisation,task analysis,co-design,dashboards,design study,healthcare",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1793,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 373,
                "i": [
                    373
                ]
            }
        },
        {
            "name": "Natasha Alvarado",
            "value": 22,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1037,
            "x": 262.06097550157807,
            "y": 187.28065868946874,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "QualDash: Adaptable Generation of Visualisation Dashboards for Healthcare Quality Improvement",
                "DOI": "10.1109/tvcg.2020.3030424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030424",
                "FirstPage": 689,
                "LastPage": 699,
                "PaperType": "J",
                "Abstract": "Adapting dashboard design to different contexts of use is an open question in visualisation research. Dashboard designers often seek to strike a balance between dashboard adaptability and ease-of-use, and in hospitals challenges arise from the vast diversity of key metrics, data models and users involved at different organizational levels. In this design study, we present QualDash, a dashboard generation engine that allows for the dynamic configuration and deployment of visualisation dashboards for healthcare quality improvement (QI). We present a rigorous task analysis based on interviews with healthcare professionals, a co-design workshop and a series of one-on-one meetings with front line analysts. From these activities we define a metric card metaphor as a unit of visual analysis in healthcare QI, using this concept as a building block for generating highly adaptable dashboards, and leading to the design of a Metric Specification Structure (MSS). Each MSS is a JSON structure which enables dashboard authors to concisely configure unit-specific variants of a metric card, while offloading common patterns that are shared across cards to be preset by the engine. We reflect on deploying and iterating the design of OualDash in cardiology wards and pediatric intensive care units of five NHS hospitals. Finally, we report evaluation results that demonstrate the adaptability, ease-of-use and usefulness of QualDash in a real-world scenario.",
                "AuthorNamesDeduped": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorNames": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorAffiliation": "University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;Tableau, Seattle, Washington, United States;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Leeds, UK;University of Leeds, UK",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934264;10.1109/tvcg.2011.209;10.1109/tvcg.2015.2467325;10.1109/tvcg.2007.70594;10.1109/tvcg.2013.200;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2013.120;10.1109/tvcg.2012.213;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Information visualisation,task analysis,co-design,dashboards,design study,healthcare",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1793,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 373,
                "i": [
                    373
                ]
            }
        },
        {
            "name": "Chris P. Gale",
            "value": 22,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1038,
            "x": -319.8959300982728,
            "y": 38.94347065376873,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "QualDash: Adaptable Generation of Visualisation Dashboards for Healthcare Quality Improvement",
                "DOI": "10.1109/tvcg.2020.3030424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030424",
                "FirstPage": 689,
                "LastPage": 699,
                "PaperType": "J",
                "Abstract": "Adapting dashboard design to different contexts of use is an open question in visualisation research. Dashboard designers often seek to strike a balance between dashboard adaptability and ease-of-use, and in hospitals challenges arise from the vast diversity of key metrics, data models and users involved at different organizational levels. In this design study, we present QualDash, a dashboard generation engine that allows for the dynamic configuration and deployment of visualisation dashboards for healthcare quality improvement (QI). We present a rigorous task analysis based on interviews with healthcare professionals, a co-design workshop and a series of one-on-one meetings with front line analysts. From these activities we define a metric card metaphor as a unit of visual analysis in healthcare QI, using this concept as a building block for generating highly adaptable dashboards, and leading to the design of a Metric Specification Structure (MSS). Each MSS is a JSON structure which enables dashboard authors to concisely configure unit-specific variants of a metric card, while offloading common patterns that are shared across cards to be preset by the engine. We reflect on deploying and iterating the design of OualDash in cardiology wards and pediatric intensive care units of five NHS hospitals. Finally, we report evaluation results that demonstrate the adaptability, ease-of-use and usefulness of QualDash in a real-world scenario.",
                "AuthorNamesDeduped": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorNames": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorAffiliation": "University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;Tableau, Seattle, Washington, United States;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Leeds, UK;University of Leeds, UK",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934264;10.1109/tvcg.2011.209;10.1109/tvcg.2015.2467325;10.1109/tvcg.2007.70594;10.1109/tvcg.2013.200;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2013.120;10.1109/tvcg.2012.213;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Information visualisation,task analysis,co-design,dashboards,design study,healthcare",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1793,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 373,
                "i": [
                    373
                ]
            }
        },
        {
            "name": "Roy A. Ruddle",
            "value": 64,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 1039,
            "x": 209.67624526890935,
            "y": -244.9201342681572,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "QualDash: Adaptable Generation of Visualisation Dashboards for Healthcare Quality Improvement",
                "DOI": "10.1109/tvcg.2020.3030424",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030424",
                "FirstPage": 689,
                "LastPage": 699,
                "PaperType": "J",
                "Abstract": "Adapting dashboard design to different contexts of use is an open question in visualisation research. Dashboard designers often seek to strike a balance between dashboard adaptability and ease-of-use, and in hospitals challenges arise from the vast diversity of key metrics, data models and users involved at different organizational levels. In this design study, we present QualDash, a dashboard generation engine that allows for the dynamic configuration and deployment of visualisation dashboards for healthcare quality improvement (QI). We present a rigorous task analysis based on interviews with healthcare professionals, a co-design workshop and a series of one-on-one meetings with front line analysts. From these activities we define a metric card metaphor as a unit of visual analysis in healthcare QI, using this concept as a building block for generating highly adaptable dashboards, and leading to the design of a Metric Specification Structure (MSS). Each MSS is a JSON structure which enables dashboard authors to concisely configure unit-specific variants of a metric card, while offloading common patterns that are shared across cards to be preset by the engine. We reflect on deploying and iterating the design of OualDash in cardiology wards and pediatric intensive care units of five NHS hospitals. Finally, we report evaluation results that demonstrate the adaptability, ease-of-use and usefulness of QualDash in a real-world scenario.",
                "AuthorNamesDeduped": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorNames": "Mai Elshehaly;Rebecca Randell;Matthew Brehmer;Lynn McVey;Natasha Alvarado;Chris P. Gale;Roy A. Ruddle",
                "AuthorAffiliation": "University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;Tableau, Seattle, Washington, United States;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Bradford and the Wolfson Centre for Applied Health Research, UK;University of Leeds, UK;University of Leeds, UK",
                "InternalReferences": "0.1109/tvcg.2013.124;10.1109/tvcg.2014.2346682;10.1109/tvcg.2019.2934264;10.1109/tvcg.2011.209;10.1109/tvcg.2015.2467325;10.1109/tvcg.2007.70594;10.1109/tvcg.2013.200;10.1109/tvcg.2018.2865240;10.1109/tvcg.2009.111;10.1109/tvcg.2017.2744198;10.1109/tvcg.2018.2864903;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2013.120;10.1109/tvcg.2012.213;10.1109/visual.1990.146375;10.1109/tvcg.2015.2467191;10.1109/tvcg.2018.2865076",
                "AuthorKeywords": "Information visualisation,task analysis,co-design,dashboards,design study,healthcare",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1793,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 373,
                "i": [
                    373
                ]
            }
        },
        {
            "name": "Yuhua Liu",
            "value": 20,
            "numPapers": 34,
            "cluster": "1",
            "visible": 1,
            "index": 1040,
            "x": 10.837644988103177,
            "y": 322.3857091297811,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Context-aware Sampling of Large Networks via Graph Representation Learning",
                "DOI": "10.1109/tvcg.2020.3030440",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030440",
                "FirstPage": 1709,
                "LastPage": 1719,
                "PaperType": "J",
                "Abstract": "Numerous sampling strategies have been proposed to simplify large-scale networks for highly readable visualizations. It is of great challenge to preserve contextual structures formed by nodes and edges with tight relationships in a sampled graph, because they are easily overlooked during the process of sampling due to their irregular distribution and immunity to scale. In this paper, a new graph sampling method is proposed oriented to the preservation of contextual structures. We first utilize a graph representation learning (GRL) model to transform nodes into vectors so that the contextual structures in a network can be effectively extracted and organized. Then, we propose a multi-objective blue noise sampling model to select a subset of nodes in the vectorized space to preserve contextual structures with the retention of relative data and cluster densities in addition to those features of significance, such as bridging nodes and graph connections. We also design a set of visual interfaces enabling users to interactively conduct context-aware sampling, visually compare results with various sampling strategies, and deeply explore large networks. Case studies and quantitative comparisons based on real-world datasets have demonstrated the effectiveness of our method in the abstraction and exploration of large networks.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Chen Shi;Xilong Shen;Lihong Cai;Haoxuan Wang;Yuhua Liu;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Chen Shi;Xilong Shen;Lihong Cai;Haoxuan Wang;Yuhua Liu;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;Central South University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2018.2865139;10.1109/tvcg.2008.135;10.1109/infvis.2004.1;10.1109/tvcg.2006.147;10.1109/tvcg.2008.151;10.1109/tvcg.2017.2743858;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2015.2468078;10.1109/tvcg.2015.2467691;10.1109/tvcg.2016.2598867;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503;10.1109/tvcg.2012.238",
                "AuthorKeywords": "Graph sampling,Graph representation learning,Blue noise sampling,Graph evaluation",
                "AminerCitationCount": 20,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 936,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 378,
                "i": [
                    378
                ]
            }
        },
        {
            "name": "Feng Luo",
            "value": 87,
            "numPapers": 22,
            "cluster": "3",
            "visible": 1,
            "index": 1041,
            "x": -225.86821978747778,
            "y": -230.5071523619943,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Evaluating Multi-Dimensional Visualizations for Understanding Fuzzy Clusters",
                "DOI": "10.1109/tvcg.2018.2865020",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865020",
                "FirstPage": 12,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "Fuzzy clustering assigns a probability of membership for a datum to a cluster, which veritably reflects real-world clustering scenarios but significantly increases the complexity of understanding fuzzy clusters. Many studies have demonstrated that visualization techniques for multi-dimensional data are beneficial to understand fuzzy clusters. However, no empirical evidence exists on the effectiveness and efficiency of these visualization techniques in solving analytical tasks featured by fuzzy clusters. In this paper, we conduct a controlled experiment to evaluate the ability of fuzzy clusters analysis to use four multi-dimensional visualization techniques, namely, parallel coordinate plot, scatterplot matrix, principal component analysis, and Radviz. First, we define the analytical tasks and their representative questions specific to fuzzy clusters analysis. Then, we design objective questionnaires to compare the accuracy, time, and satisfaction in using the four techniques to solve the questions. We also design subjective questionnaires to collect the experience of the volunteers with the four techniques in terms of ease of use, informativeness, and helpfulness. With a complete experiment process and a detailed result analysis, we test against four hypotheses that are formulated on the basis of our experience, and provide instructive guidance for analysts in selecting appropriate and efficient visualization techniques to analyze fuzzy clusters.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "Central South University;Central South University;Central South University;Central South University;Central South University;Central South University;Shandong University;Beijing Technology, Business University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2745138;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2009.153;10.1109/tvcg.2016.2598831;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744198;10.1109/tvcg.2015.2467324;10.1109/tvcg.2013.153;10.1109/tvcg.2008.173;10.1109/visual.1990.146375;10.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598479;10.1109/infvis.2003.1249015",
                "AuthorKeywords": "Evaluation,multi-dimensional visualization,fuzzy clustering,parallel coordinate plot,scatterplot matrix,principal component analysis,radviz",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1464,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 739,
                "i": [
                    739
                ]
            }
        },
        {
            "name": "Minghui Chen",
            "value": 80,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1042,
            "x": 322.4082093311869,
            "y": 17.405359974949125,
            "vy": 0,
            "vx": 0,
            "r": 1.092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Evaluating Multi-Dimensional Visualizations for Understanding Fuzzy Clusters",
                "DOI": "10.1109/tvcg.2018.2865020",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865020",
                "FirstPage": 12,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "Fuzzy clustering assigns a probability of membership for a datum to a cluster, which veritably reflects real-world clustering scenarios but significantly increases the complexity of understanding fuzzy clusters. Many studies have demonstrated that visualization techniques for multi-dimensional data are beneficial to understand fuzzy clusters. However, no empirical evidence exists on the effectiveness and efficiency of these visualization techniques in solving analytical tasks featured by fuzzy clusters. In this paper, we conduct a controlled experiment to evaluate the ability of fuzzy clusters analysis to use four multi-dimensional visualization techniques, namely, parallel coordinate plot, scatterplot matrix, principal component analysis, and Radviz. First, we define the analytical tasks and their representative questions specific to fuzzy clusters analysis. Then, we design objective questionnaires to compare the accuracy, time, and satisfaction in using the four techniques to solve the questions. We also design subjective questionnaires to collect the experience of the volunteers with the four techniques in terms of ease of use, informativeness, and helpfulness. With a complete experiment process and a detailed result analysis, we test against four hypotheses that are formulated on the basis of our experience, and provide instructive guidance for analysts in selecting appropriate and efficient visualization techniques to analyze fuzzy clusters.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "Central South University;Central South University;Central South University;Central South University;Central South University;Central South University;Shandong University;Beijing Technology, Business University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2745138;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2009.153;10.1109/tvcg.2016.2598831;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744198;10.1109/tvcg.2015.2467324;10.1109/tvcg.2013.153;10.1109/tvcg.2008.173;10.1109/visual.1990.146375;10.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598479;10.1109/infvis.2003.1249015",
                "AuthorKeywords": "Evaluation,multi-dimensional visualization,fuzzy clustering,parallel coordinate plot,scatterplot matrix,principal component analysis,radviz",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1464,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 739,
                "i": [
                    739
                ]
            }
        },
        {
            "name": "Yingchao Wang",
            "value": 80,
            "numPapers": 15,
            "cluster": "1",
            "visible": 1,
            "index": 1043,
            "x": -249.61056256641683,
            "y": 205.04771897116262,
            "vy": 0,
            "vx": 0,
            "r": 1.092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Evaluating Multi-Dimensional Visualizations for Understanding Fuzzy Clusters",
                "DOI": "10.1109/tvcg.2018.2865020",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865020",
                "FirstPage": 12,
                "LastPage": 21,
                "PaperType": "J",
                "Abstract": "Fuzzy clustering assigns a probability of membership for a datum to a cluster, which veritably reflects real-world clustering scenarios but significantly increases the complexity of understanding fuzzy clusters. Many studies have demonstrated that visualization techniques for multi-dimensional data are beneficial to understand fuzzy clusters. However, no empirical evidence exists on the effectiveness and efficiency of these visualization techniques in solving analytical tasks featured by fuzzy clusters. In this paper, we conduct a controlled experiment to evaluate the ability of fuzzy clusters analysis to use four multi-dimensional visualization techniques, namely, parallel coordinate plot, scatterplot matrix, principal component analysis, and Radviz. First, we define the analytical tasks and their representative questions specific to fuzzy clusters analysis. Then, we design objective questionnaires to compare the accuracy, time, and satisfaction in using the four techniques to solve the questions. We also design subjective questionnaires to collect the experience of the volunteers with the four techniques in terms of ease of use, informativeness, and helpfulness. With a complete experiment process and a detailed result analysis, we test against four hypotheses that are formulated on the basis of our experience, and provide instructive guidance for analysts in selecting appropriate and efficient visualization techniques to analyze fuzzy clusters.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Feng Luo;Minghui Chen;Yingchao Wang;Jiazhi Xia;Fangfang Zhou;Yunhai Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "Central South University;Central South University;Central South University;Central South University;Central South University;Central South University;Shandong University;Beijing Technology, Business University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2745138;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2009.153;10.1109/tvcg.2016.2598831;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744198;10.1109/tvcg.2015.2467324;10.1109/tvcg.2013.153;10.1109/tvcg.2008.173;10.1109/visual.1990.146375;10.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598479;10.1109/infvis.2003.1249015",
                "AuthorKeywords": "Evaluation,multi-dimensional visualization,fuzzy clustering,parallel coordinate plot,scatterplot matrix,principal component analysis,radviz",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1464,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 739,
                "i": [
                    739
                ]
            }
        },
        {
            "name": "Dhiraj Barnwal",
            "value": 57,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1044,
            "x": 45.569135525613106,
            "y": -319.95851901058717,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Lyra 2: Designing Interactive Visualizations by Demonstration",
                "DOI": "10.1109/tvcg.2020.3030367",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030367",
                "FirstPage": 304,
                "LastPage": 314,
                "PaperType": "J",
                "Abstract": "Recent graphical interfaces offer direct manipulation mechanisms for authoring visualizations, but are largely restricted to static output. To author interactive visualizations, users must instead turn to textual specification, but such approaches impose a higher technical burden. To bridge this gap, we introduce Lyra 2, a system that extends a prior visualization design environment with novel methods for authoring interaction techniques by demonstration. Users perform an interaction (e.g., button clicks, drags, or key presses) directly on the visualization they are editing. The system interprets this performance using a set of heuristics and enumerates suggestions of possible interaction designs. These heuristics account for the properties of the interaction (e.g., target and event type) as well as the visualization (e.g., mark and scale types, and multiple views). Interaction design suggestions are displayed as thumbnails; users can preview and test these suggestions, iteratively refine them through additional demonstrations, and finally apply and customize them via property inspectors. We evaluate our approach through a gallery of diverse examples, and evaluate its usability through a first-use study and via an analysis of its cognitive dimensions. We find that, in Lyra 2, interaction design by demonstration enables users to rapidly express a wide range of interactive visualizations.",
                "AuthorNamesDeduped": "Jonathan Zong;Dhiraj Barnwal;Rupayan Neogy;Arvind Satyanarayan",
                "AuthorNames": "Jonathan Zong;Dhiraj Barnwal;Rupayan Neogy;Arvind Satyanarayan",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Indian Institute of Technology Kharagpur;Massachusetts Institute of Technology;Massachusetts Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346250;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2598839;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/infvis.2004.12;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Direct manipulation,interactive visualization,interaction design by demonstration",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 379,
                "i": [
                    379
                ]
            }
        },
        {
            "name": "Rupayan Neogy",
            "value": 57,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1045,
            "x": 182.6149665092326,
            "y": 266.83660544766315,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Lyra 2: Designing Interactive Visualizations by Demonstration",
                "DOI": "10.1109/tvcg.2020.3030367",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030367",
                "FirstPage": 304,
                "LastPage": 314,
                "PaperType": "J",
                "Abstract": "Recent graphical interfaces offer direct manipulation mechanisms for authoring visualizations, but are largely restricted to static output. To author interactive visualizations, users must instead turn to textual specification, but such approaches impose a higher technical burden. To bridge this gap, we introduce Lyra 2, a system that extends a prior visualization design environment with novel methods for authoring interaction techniques by demonstration. Users perform an interaction (e.g., button clicks, drags, or key presses) directly on the visualization they are editing. The system interprets this performance using a set of heuristics and enumerates suggestions of possible interaction designs. These heuristics account for the properties of the interaction (e.g., target and event type) as well as the visualization (e.g., mark and scale types, and multiple views). Interaction design suggestions are displayed as thumbnails; users can preview and test these suggestions, iteratively refine them through additional demonstrations, and finally apply and customize them via property inspectors. We evaluate our approach through a gallery of diverse examples, and evaluate its usability through a first-use study and via an analysis of its cognitive dimensions. We find that, in Lyra 2, interaction design by demonstration enables users to rapidly express a wide range of interactive visualizations.",
                "AuthorNamesDeduped": "Jonathan Zong;Dhiraj Barnwal;Rupayan Neogy;Arvind Satyanarayan",
                "AuthorNames": "Jonathan Zong;Dhiraj Barnwal;Rupayan Neogy;Arvind Satyanarayan",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Indian Institute of Technology Kharagpur;Massachusetts Institute of Technology;Massachusetts Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2016.2598620;10.1109/tvcg.2014.2346250;10.1109/tvcg.2010.177;10.1109/tvcg.2018.2865240;10.1109/tvcg.2017.2744198;10.1109/tvcg.2014.2346291;10.1109/tvcg.2018.2865158;10.1109/tvcg.2016.2598839;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/infvis.2000.885086;10.1109/infvis.2004.12;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Direct manipulation,interactive visualization,interaction design by demonstration",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 379,
                "i": [
                    379
                ]
            }
        },
        {
            "name": "Norman Au",
            "value": 71,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 1046,
            "x": -315.05069197737464,
            "y": -73.43746649073222,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "OpinionSeer: Interactive Visualization of Hotel Customer Feedback",
                "DOI": "10.1109/tvcg.2010.183",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.183",
                "FirstPage": 1109,
                "LastPage": 1118,
                "PaperType": "J",
                "Abstract": "The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.",
                "AuthorNamesDeduped": "Yingcai Wu;Furu Wei;Shixia Liu;Norman Au;Weiwei Cui;Hong Zhou 0004;Huamin Qu",
                "AuthorNames": "Yingcai Wu;Furu Wei;Shixia Liu;Norman Au;Weiwei Cui;Hong Zhou;Huamin Qu",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;IBM China Research Laboratory, Beijing, China;IBM China Research Laboratory, Beijing, China;School of Hotel & Tourism Management, Hong Kong PolyTechnic University, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;Shenzhen University, Shenzhen, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/vast.2006.261431;10.1109/tvcg.2009.171;10.1109/vast.2009.5332611;10.1109/tvcg.2008.187;10.1109/vast.2009.5333919;10.1109/infvis.2002.1173151",
                "AuthorKeywords": "opinion visualization, radial visualization, uncertainty visualization",
                "AminerCitationCount": 233,
                "CitationCountCrossRef": 115,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 3227,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1690,
                "i": [
                    1690
                ]
            }
        },
        {
            "name": "Hong Zhou 0004",
            "value": 232,
            "numPapers": 54,
            "cluster": "1",
            "visible": 1,
            "index": 1047,
            "x": 282.04953312612355,
            "y": -158.7389708399792,
            "vy": 0,
            "vx": 0,
            "r": 1.2671272308578008,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Causality-Based Visual Analysis of Questionnaire Responses",
                "DOI": "10.1109/tvcg.2023.3327376",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327376",
                "FirstPage": 638,
                "LastPage": 648,
                "PaperType": "J",
                "Abstract": "As the final stage of questionnaire analysis, causal reasoning is the key to turning responses into valuable insights and actionable items for decision-makers. During the questionnaire analysis, classical statistical methods (e.g., Differences-in-Differences) have been widely exploited to evaluate causality between questions. However, due to the huge search space and complex causal structure in data, causal reasoning is still extremely challenging and time-consuming, and often conducted in a trial-and-error manner. On the other hand, existing visual methods of causal reasoning face the challenge of bringing scalability and expert knowledge together and can hardly be used in the questionnaire scenario. In this work, we present a systematic solution to help analysts effectively and efficiently explore questionnaire data and derive causality. Based on the association mining algorithm, we dig question combinations with potential inner causality and help analysts interactively explore the causal sub-graph of each question combination. Furthermore, leveraging the requirements collected from the experts, we built a visualization tool and conducted a comparative study with the state-of-the-art system to show the usability and efficiency of our system.",
                "AuthorNamesDeduped": "Renzhong Li;Weiwei Cui;Tianqi Song;Xiao Xie;Rui Ding 0001;Yun Wang 0012;Haidong Zhang;Hong Zhou 0004;Yingcai Wu",
                "AuthorNames": "Renzhong Li;Weiwei Cui;Tianqi Song;Xiao Xie;Rui Ding;Yun Wang;Haidong Zhang;Hong Zhou;Yingcai Wu",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University, China;Microsoft Research Asia, China;State Key Lab of CAD&CG, Zhejiang University, China;Department of Sports Science, Zhejiang University, China;Microsoft Research Asia, China;Microsoft Research Asia, China;Microsoft Research Asia, China;College of Computer Science and Software Engineering, Shenzhen University, China;State Key Lab of CAD&CG, Zhejiang University, China",
                "InternalReferences": "10.1109/tvcg.2021.3114875;10.1109/tvcg.2022.3209484;10.1109/tvcg.2020.3030465;10.1109/tvcg.2021.3114824;10.1109/tvcg.2014.2346248;10.1109/tvcg.2020.3030347;10.1109/tvcg.2009.108;10.1109/tvcg.2015.2467931;10.1109/vast.2017.8585647;10.1109/tvcg.2020.3028957;10.1109/tvcg.2019.2934399",
                "AuthorKeywords": "Causal analysis,Questionnaire,Design study",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 304,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 77,
                "i": [
                    77
                ]
            }
        },
        {
            "name": "Madison A. Elliott",
            "value": 10,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1048,
            "x": -100.79599190474333,
            "y": 307.7176758262985,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "A Design Space of Vision Science Methods for Visualization Research",
                "DOI": "10.1109/tvcg.2020.3029413",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3029413",
                "FirstPage": 1117,
                "LastPage": 1127,
                "PaperType": "J",
                "Abstract": "A growing number of efforts aim to understand what people see when using a visualization. These efforts provide scientific grounding to complement design intuitions, leading to more effective visualization practice. However, published visualization research currently reflects a limited set of available methods for understanding how people process visualized data. Alternative methods from vision science offer a rich suite of tools for understanding visualizations, but no curated collection of these methods exists in either perception or visualization research. We introduce a design space of experimental methods for empirically investigating the perceptual processes involved with viewing data visualizations to ultimately inform visualization design guidelines. This paper provides a shared lexicon for facilitating experimental visualization research. We discuss popular experimental paradigms, adjustment types, response types, and dependent measures used in vision science research, rooting each in visualization examples. We then discuss the advantages and limitations of each technique. Researchers can use this design space to create innovative studies and progress scientific understanding of design choices and evaluations in visualization. We highlight a history of collaborative success between visualization and vision science research and advocate for a deeper relationship between the two fields that can elaborate on and extend the methodological design space for understanding visualization and vision.",
                "AuthorNamesDeduped": "Madison A. Elliott;Christine Nothelfer;Cindy Xiong;Danielle Albers Szafir",
                "AuthorNames": "Madison A. Elliott;Christine Nothelfer;Cindy Xiong;Danielle Albers Szafir",
                "AuthorAffiliation": "The University of British Columbia;Northwestern University;University of Massachusetts Amherst;University of Colorado Boulder",
                "InternalReferences": "0.1109/tvcg.2015.2467732;10.1109/infvis.1997.636792;10.1109/tvcg.2013.183;10.1109/tvcg.2016.2598918;10.1109/tvcg.2012.233;10.1109/tvcg.2014.2346979;10.1109/tvcg.2018.2864909;10.1109/tvcg.2019.2934801;10.1109/tvcg.2018.2865147;10.1109/tvcg.2019.2934284;10.1109/tvcg.2017.2744359;10.1109/tvcg.2012.196;10.1109/tvcg.2019.2934400;10.1109/tvcg.2013.234",
                "AuthorKeywords": "Perception,human vision,empirical research,evaluation,HCI",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 822,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 392,
                "i": [
                    392
                ]
            }
        },
        {
            "name": "Stefan Jänicke",
            "value": 26,
            "numPapers": 44,
            "cluster": "1",
            "visible": 1,
            "index": 1049,
            "x": -133.60014052064128,
            "y": -295.1287896035643,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Interactive Visual Profiling of Musicians",
                "DOI": "10.1109/tvcg.2015.2467620",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467620",
                "FirstPage": 200,
                "LastPage": 209,
                "PaperType": "J",
                "Abstract": "Determining similar objects based upon the features of an object of interest is a common task for visual analytics systems. This process is called profiling, if the object of interest is a person with individual attributes. The profiling of musicians similar to a musician of interest with the aid of visual means became an interesting research question for musicologists working with the Bavarian Musicians Encyclopedia Online. This paper illustrates the development of a visual analytics profiling system that is used to address such research questions. Taking musicological knowledge into account, we outline various steps of our collaborative digital humanities project, priority (1) the definition of various measures to determine the similarity of musicians' attributes, and (2) the design of an interactive profiling system that supports musicologists in iteratively determining similar musicians. The utility of the profiling system is emphasized by various usage scenarios illustrating current research questions in musicology.",
                "AuthorNamesDeduped": "Stefan Jänicke;Josef Focht;Gerik Scheuermann",
                "AuthorNames": "Stefan Jänicke;Josef Focht;Gerik Scheuermann",
                "AuthorAffiliation": "Image and Signal Processing Group, Leipzig University, Germany;Museum of Musical Instruments, Leipzig University, Germany;Image and Signal Processing Group, Leipzig University, Germany",
                "InternalReferences": "0.1109/vast.2011.6102454;10.1109/tvcg.2010.159;10.1109/tvcg.2014.2346431;10.1109/tvcg.2007.70617;10.1109/vast.2009.5333443;10.1109/tvcg.2014.2346433;10.1109/tvcg.2008.175;10.1109/tvcg.2012.252;10.1109/vast.2012.6400485;10.1109/infvis.2005.1532126;10.1109/tvcg.2012.277;10.1109/vast.2012.6400491;10.1109/vast.2007.4389004;10.1109/tvcg.2014.2346677;10.1109/tvcg.2009.111;10.1109/tvcg.2006.122;10.1109/vast.2010.5652931;10.1109/vast.2009.5333023;10.1109/vast.2007.4389006;10.1109/vast.2009.5333248;10.1109/vast.2008.4677370;10.1109/vast.2010.5652520;10.1109/tvcg.2008.172;10.1109/tvcg.2008.166;10.1109/tvcg.2010.210",
                "AuthorKeywords": "visual analytics, profiling system, musicians database visualization, digital humanities, musicology",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1202,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1127,
                "i": [
                    1127
                ]
            }
        },
        {
            "name": "Louis Bavoil",
            "value": 136,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 1050,
            "x": 298.01109493435456,
            "y": 127.43385459141993,
            "vy": 0,
            "vx": 0,
            "r": 1.1565918249856073,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "VisTrails: enabling interactive multiple-view visualizations",
                "DOI": "10.1109/visual.2005.1532788",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532788",
                "FirstPage": 135,
                "LastPage": 142,
                "PaperType": "C",
                "Abstract": "VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.",
                "AuthorNamesDeduped": "Louis Bavoil;Steven P. Callahan;Carlos Eduardo Scheidegger;Huy T. Vo;Patricia Crossno;Cláudio T. Silva;Juliana Freire",
                "AuthorNames": "L. Bavoil;S.P. Callahan;P.J. Crossno;J. Freire;C.E. Scheidegger;C.T. Silva;H.T. Vo",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Sandia National Laboratories, USA;School of Computing, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1998.745299;10.1109/infvis.2004.2;10.1109/visual.2004.112;10.1109/visual.2002.1183791",
                "AuthorKeywords": "interrogative visualization, dataflow, caching, coordinated views",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1291,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2364,
                "i": [
                    2364
                ]
            }
        },
        {
            "name": "Steven P. Callahan",
            "value": 170,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1051,
            "x": -305.96996540911965,
            "y": 107.38892059957634,
            "vy": 0,
            "vx": 0,
            "r": 1.1957397812320092,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "VisTrails: enabling interactive multiple-view visualizations",
                "DOI": "10.1109/visual.2005.1532788",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532788",
                "FirstPage": 135,
                "LastPage": 142,
                "PaperType": "C",
                "Abstract": "VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.",
                "AuthorNamesDeduped": "Louis Bavoil;Steven P. Callahan;Carlos Eduardo Scheidegger;Huy T. Vo;Patricia Crossno;Cláudio T. Silva;Juliana Freire",
                "AuthorNames": "L. Bavoil;S.P. Callahan;P.J. Crossno;J. Freire;C.E. Scheidegger;C.T. Silva;H.T. Vo",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Sandia National Laboratories, USA;School of Computing, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1998.745299;10.1109/infvis.2004.2;10.1109/visual.2004.112;10.1109/visual.2002.1183791",
                "AuthorKeywords": "interrogative visualization, dataflow, caching, coordinated views",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1291,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2364,
                "i": [
                    2364
                ]
            }
        },
        {
            "name": "Patricia Crossno",
            "value": 165,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1052,
            "x": 153.1453269568369,
            "y": -286.00088956379756,
            "vy": 0,
            "vx": 0,
            "r": 1.1899827288428324,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "VisTrails: enabling interactive multiple-view visualizations",
                "DOI": "10.1109/visual.2005.1532788",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532788",
                "FirstPage": 135,
                "LastPage": 142,
                "PaperType": "C",
                "Abstract": "VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.",
                "AuthorNamesDeduped": "Louis Bavoil;Steven P. Callahan;Carlos Eduardo Scheidegger;Huy T. Vo;Patricia Crossno;Cláudio T. Silva;Juliana Freire",
                "AuthorNames": "L. Bavoil;S.P. Callahan;P.J. Crossno;J. Freire;C.E. Scheidegger;C.T. Silva;H.T. Vo",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Sandia National Laboratories, USA;School of Computing, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1998.745299;10.1109/infvis.2004.2;10.1109/visual.2004.112;10.1109/visual.2002.1183791",
                "AuthorKeywords": "interrogative visualization, dataflow, caching, coordinated views",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1291,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2364,
                "i": [
                    2364
                ]
            }
        },
        {
            "name": "Alan J. Dix",
            "value": 238,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 1053,
            "x": 80.304349300698,
            "y": 314.48563001096164,
            "vy": 0,
            "vx": 0,
            "r": 1.274035693724813,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "Enabling Automatic Clutter Reduction in Parallel Coordinate Plots",
                "DOI": "10.1109/tvcg.2006.138",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.138",
                "FirstPage": 717,
                "LastPage": 724,
                "PaperType": "J",
                "Abstract": "We have previously shown that random sampling is an effective clutter reduction technique and that a sampling lens can facilitate focus+context viewing of particular regions. This demands an efficient method of estimating the overlap or occlusion of large numbers of intersecting lines in order to automatically adjust the sampling rate within the lens. This paper proposes several ways for measuring occlusion in parallel coordinate plots. An empirical study into the accuracy and efficiency of the occlusion measures show that a probabilistic approach combined with a 'binning' technique is very fast and yet approaches the accuracy of the more expensive 'true' complete measurement",
                "AuthorNamesDeduped": "Geoffrey P. Ellis;Alan J. Dix",
                "AuthorNames": "Geoffrey Ellis;Alan Dix",
                "AuthorAffiliation": "Lancaster University, UK;Lancaster University, UK",
                "InternalReferences": "0.1109/visual.2004.5;10.1109/visual.2005.1532819;10.1109/infvis.2004.64;10.1109/visual.1999.809866;10.1109/infvis.2004.15;10.1109/infvis.2004.68",
                "AuthorKeywords": "Sampling, random sampling, lens, clutter, occlusion, density reduction, overplotting, information visualisation, parallel coordinates",
                "AminerCitationCount": 161,
                "CitationCountCrossRef": 83,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 772,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2222,
                "i": [
                    2222
                ]
            }
        },
        {
            "name": "Tom Horak",
            "value": 50,
            "numPapers": 27,
            "cluster": "5",
            "visible": 1,
            "index": 1054,
            "x": -271.7748135736392,
            "y": -177.7314004530816,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Responsive Matrix Cells: A Focus+Context Approach for Exploring and Editing Multivariate Graphs",
                "DOI": "10.1109/tvcg.2020.3030371",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030371",
                "FirstPage": 1644,
                "LastPage": 1654,
                "PaperType": "J",
                "Abstract": "Matrix visualizations are a useful tool to provide a general overview of a graph's structure. For multivariate graphs, a remaining challenge is to cope with the attributes that are associated with nodes and edges. Addressing this challenge, we propose responsive matrix cells as a focus+context approach for embedding additional interactive views into a matrix. Responsive matrix cells are local zoomable regions of interest that provide auxiliary data exploration and editing facilities for multivariate graphs. They behave responsively by adapting their visual contents to the cell location, the available display space, and the user task. Responsive matrix cells enable users to reveal details about the graph, compare node and edge attributes, and edit data values directly in a matrix without resorting to external views or tools. We report the general design considerations for responsive matrix cells covering the visual and interactive means necessary to support a seamless data exploration and editing. Responsive matrix cells have been implemented in a web-based prototype based on which we demonstrate the utility of our approach. We describe a walk-through for the use case of analyzing a graph of soccer players and report on insights from a preliminary user feedback session.",
                "AuthorNamesDeduped": "Tom Horak;Philip Berger;Heidrun Schumann;Raimund Dachselt;Christian Tominski",
                "AuthorNames": "Tom Horak;Philip Berger;Heidrun Schumann;Raimund Dachselt;Christian Tominski",
                "AuthorAffiliation": "Interactive Media Lab, Technische Universitat Dresden;Inst. for Visual & Analytic Computing, University of Rostock;Inst. for Visual & Analytic Computing, University of Rostock;Interactive Media Lab, Technische Universitat Dresden;Inst. for Visual & Analytic Computing, University of Rostock",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2017.2743990;10.1109/tvcg.2014.2346575;10.1109/tvcg.2011.213;10.1109/tvcg.2007.70582;10.1109/infvis.2004.2;10.1109/tvcg.2008.109;10.1109/tvcg.2018.2865151;10.1109/infvis.2002.1173149;10.1109/tvcg.2009.151;10.1109/tvcg.2018.2865149;10.1109/tvcg.2014.2346279;10.1109/tvcg.2017.2745219;10.1109/tvcg.2014.2346441;10.1109/tvcg.2015.2467202",
                "AuthorKeywords": "Multivariate graph visualization,matrix visualization,focus+context,embedded visualizations,responsive visualization,graph editing",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 83,
                "DownloadsXplore": 1186,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 399,
                "i": [
                    399
                ]
            }
        },
        {
            "name": "Ghulam Jilani Quadri",
            "value": 36,
            "numPapers": 37,
            "cluster": "5",
            "visible": 1,
            "index": 1055,
            "x": 320.6060353655287,
            "y": -52.552545963038895,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering",
                "DOI": "10.1109/tvcg.2023.3327201",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327201",
                "FirstPage": 770,
                "LastPage": 780,
                "PaperType": "J",
                "Abstract": "Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.",
                "AuthorNamesDeduped": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen 0001;Danielle Albers Szafir;Jinwook Seo",
                "AuthorNames": "Hyeon Jeon;Ghulam Jilani Quadri;Hyunwook Lee;Paul Rosen;Danielle Albers Szafir;Jinwook Seo",
                "AuthorAffiliation": "Seoul National University, South Korea;University of North Carolina, Chapel Hill, USA;UNIST, South Korea;University of Utah, USA;Seoul National University, South Korea;Seoul National University, South Korea",
                "InternalReferences": "10.1109/infvis.2005.1532136;10.1109/tvcg.2011.229;10.1109/tvcg.2013.124;10.1109/tvcg.2014.2346572;10.1109/tvcg.2021.3114833;10.1109/tvcg.2017.2744718;10.1109/tvcg.2019.2934811;10.1109/tvcg.2018.2865240;10.1109/tvcg.2020.3030365;10.1109/tvcg.2017.2744184;10.1109/tvcg.2018.2864912;10.1109/tvcg.2021.3114694",
                "AuthorKeywords": "Cluster,scatterplot,perception,cluster analysis,cluster ambiguity,visual quality measure",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 382,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 30,
                "i": [
                    30
                ]
            }
        },
        {
            "name": "Shuyue Zhou",
            "value": 61,
            "numPapers": 52,
            "cluster": "5",
            "visible": 1,
            "index": 1056,
            "x": -201.00132642807375,
            "y": 255.4377943338748,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Evaluating Perceptual Bias During Geometric Scaling of Scatterplots",
                "DOI": "10.1109/tvcg.2019.2934208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934208",
                "FirstPage": 321,
                "LastPage": 331,
                "PaperType": "J",
                "Abstract": "Scatterplots are frequently scaled to fit display areas in multi-view and multi-device data analysis environments. A common method used for scaling is to enlarge or shrink the entire scatterplot together with the inside points synchronously and proportionally. This process is called geometric scaling. However, geometric scaling of scatterplots may cause a perceptual bias, that is, the perceived and physical values of visual features may be dissociated with respect to geometric scaling. For example, if a scatterplot is projected from a laptop to a large projector screen, then observers may feel that the scatterplot shown on the projector has fewer points than that viewed on the laptop. This paper presents an evaluation study on the perceptual bias of visual features in scatterplots caused by geometric scaling. The study focuses on three fundamental visual features (i.e., numerosity, correlation, and cluster separation) and three hypotheses that are formulated on the basis of our experience. We carefully design three controlled experiments by using well-prepared synthetic data and recruit participants to complete the experiments on the basis of their subjective experience. With a detailed analysis of the experimental results, we obtain a set of instructive findings. First, geometric scaling causes a bias that has a linear relationship with the scale ratio. Second, no significant difference exists between the biases measured from normally and uniformly distributed scatterplots. Third, changing the point radius can correct the bias to a certain extent. These findings can be used to inspire the design decisions of scatterplots in various scenarios.",
                "AuthorNamesDeduped": "Yating Wei;Honghui Mei;Ying Zhao 0001;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen 0001",
                "AuthorNames": "Yating Wei;Honghui Mei;Ying Zhao;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen",
                "AuthorAffiliation": "The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/tvcg.2018.2865142;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.124;10.1109/vast.2010.5652460;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2006.163;10.1109/vast.2012.6400487;10.1109/tvcg.2015.2467671;10.1109/tvcg.2018.2864884;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.120;10.1109/tvcg.2013.153;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2007.70596;10.1109/tvcg.2017.2744138;10.1109/tvcg.2018.2864912;10.1109/tvcg.2018.2865266;10.1109/tvcg.2017.2744098;10.1109/tvcg.2006.184;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Evaluation,scatterplot,geometric scaling,bias,perceptual consistency",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 1007,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 623,
                "i": [
                    623
                ]
            }
        },
        {
            "name": "James Eagan",
            "value": 221,
            "numPapers": 6,
            "cluster": "5",
            "visible": 1,
            "index": 1057,
            "x": -24.345141727382888,
            "y": -324.2796849546293,
            "vy": 0,
            "vx": 0,
            "r": 1.254461715601612,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Low-level components of analytic activity in information visualization",
                "DOI": "10.1109/infvis.2005.1532136",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532136",
                "FirstPage": 111,
                "LastPage": 117,
                "PaperType": "C",
                "Abstract": "Existing system level taxonomies of visualization tasks are geared more towards the design of particular representations than the facilitation of user analytic activity. We present a set of ten low level analysis tasks that largely capture people's activities while employing information visualization tools for understanding data. To help develop these tasks, we collected nearly 200 sample questions from students about how they would analyze five particular data sets from different domains. The questions, while not being totally comprehensive, illustrated the sheer variety of analytic questions typically posed by users when employing information visualization systems. We hope that the presented set of tasks is useful for information visualization system designers as a kind of common substrate to discuss the relative analytic capabilities of the systems. Further, the tasks may provide a form of checklist for system designers.",
                "AuthorNamesDeduped": "Robert A. Amar;James Eagan;John T. Stasko",
                "AuthorNames": "R. Amar;J. Eagan;J. Stasko",
                "AuthorAffiliation": "Georgia Institute of Technology,College of Computing, GVU Center;Georgia Institute of Technology,College of Computing, GVU Center;Georgia Institute of Technology,College of Computing, GVU Center",
                "InternalReferences": "0.1109/visual.1990.146375;10.1109/infvis.1998.729560;10.1109/infvis.2000.885092;10.1109/infvis.2004.5;10.1109/infvis.2001.963289",
                "AuthorKeywords": "Analytic activity, taxonomy, knowledge discovery, design, evaluation",
                "AminerCitationCount": 844,
                "CitationCountCrossRef": 178,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 3816,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2330,
                "i": [
                    2330
                ]
            }
        },
        {
            "name": "Wouter Meulemans",
            "value": 62,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1058,
            "x": 237.11115950295303,
            "y": 222.77409642767077,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Small Multiples with Gaps",
                "DOI": "10.1109/tvcg.2016.2598542",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598542",
                "FirstPage": 381,
                "LastPage": 390,
                "PaperType": "J",
                "Abstract": "Small multiples enable comparison by providing different views of a single data set in a dense and aligned manner. A common frame defines each view, which varies based upon values of a conditioning variable. An increasingly popular use of this technique is to project two-dimensional locations into a gridded space (e.g. grid maps), using the underlying distribution both as the conditioning variable and to determine the grid layout. Using whitespace in this layout has the potential to carry information, especially in a geographic context. Yet, the effects of doing so on the spatial properties of the original units are not understood. We explore the design space offered by such small multiples with gaps. We do so by constructing a comprehensive suite of metrics that capture properties of the layout used to arrange the small multiples for comparison (e.g. compactness and alignment) and the preservation of the original data (e.g. distance, topology and shape). We study these metrics in geographic data sets with varying properties and numbers of gaps. We use simulated annealing to optimize for each metric and measure the effects on the others. To explore these effects systematically, we take a new approach, developing a system to visualize this design space using a set of interactive matrices. We find that adding small amounts of whitespace to small multiple arrays improves some of the characteristics of 2D layouts, such as shape, distance and direction. This comes at the cost of other metrics, such as the retention of topology. Effects vary according to the input maps, with degree of variation in size of input regions found to be a factor. Optima exist for particular metrics in many cases, but at different amounts of whitespace for different maps. We suggest multiple metrics be used in optimized layouts, finding topology to be a primary factor in existing manually-crafted solutions, followed by a trade-off between shape and displacement. But the rich range of possible optimized layouts leads us to challenge single-solution thinking; we suggest to consider alternative optimized layouts for small multiples with gaps. Key to our work is the systematic, quantified and visual approach to exploring design spaces when facing a trade-off between many competing criteria-an approach likely to be of value to the analysis of other design spaces.",
                "AuthorNamesDeduped": "Wouter Meulemans;Jason Dykes;Aidan Slingsby;Cagatay Turkay;Jo Wood",
                "AuthorNames": "Wouter Meulemans;Jason Dykes;Aidan Slingsby;Cagatay Turkay;Jo Wood",
                "AuthorAffiliation": "giCentre, City University, London;giCentre, City University, London;giCentre, City University, London;giCentre, City University, London;giCentre, City University, London",
                "InternalReferences": "0.1109/tvcg.2014.2346276;10.1109/tvcg.2011.174;10.1109/tvcg.2016.2598862;10.1109/tvcg.2008.165",
                "AuthorKeywords": "Geographic visualization;small multiples;whitespace;design space;metrics;optimization",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1021,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 913,
                "i": [
                    913
                ]
            }
        },
        {
            "name": "Jen Rogers",
            "value": 2,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1059,
            "x": -325.4737636844498,
            "y": -4.102335076384344,
            "vy": 0,
            "vx": 0,
            "r": 1.0023028209556706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Insights From Experiments With Rigor in an EvoBio Design Study",
                "DOI": "10.1109/tvcg.2020.3030405",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030405",
                "FirstPage": 1106,
                "LastPage": 1116,
                "PaperType": "J",
                "Abstract": "Design study is an established approach of conducting problem-driven visualization research. The academic visualization community has produced a large body of work for reporting on design studies, informed by a handful of theoretical frameworks, and applied to a broad range of application areas. The result is an abundance of reported insights into visualization design, with an emphasis on novel visualization techniques and systems as the primary contribution of these studies. In recent work we proposed a new, interpretivist perspective on design study and six companion criteria for rigor that highlight the opportunities for researchers to contribute knowledge that extends beyond visualization idioms and software. In this work we conducted a year-long collaboration with evolutionary biologists to develop an interactive tool for visual exploration of multivariate datasets and phylogenetic trees. During this design study we experimented with methods to support three of the rigor criteria: ABUNDANT, REFLEXIVE, and TRANSPARENT. As a result we contribute two novel visualization techniques for the analysis of multivariate phylogenetic datasets, three methodological recommendations for conducting design studies drawn from reflections over our process of experimentation, and two writing devices for reporting interpretivist design study. We offer this work as an example for implementing the rigor criteria to produce a diverse range of knowledge contributions.",
                "AuthorNamesDeduped": "Jen Rogers;Austin H. Patton;Luke Harmon;Alexander Lex;Miriah Meyer",
                "AuthorNames": "Jen Rogers;Austin H. Patton;Luke Harmon;Alexander Lex;Miriah Meyer",
                "AuthorAffiliation": "University of Utah;Washington State University;University of Idaho;University of Utah;University of Utah",
                "InternalReferences": "0.1109/tvcg.2012.272;10.1109/tvcg.2014.2346431;10.1109/tvcg.2019.2934790;10.1109/tvcg.2015.2467452;10.1109/tvcg.2018.2865241;10.1109/tvcg.2018.2864526;10.1109/tvcg.2018.2864913;10.1109/tvcg.2015.2467811;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/tvcg.2009.111;10.1109/tvcg.2018.2865149;10.1109/tvcg.2019.2934788;10.1109/tvcg.2019.2934281;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864836;10.1109/tvcg.2018.2865076;10.1109/tvcg.2013.231",
                "AuthorKeywords": "Methodologies,Application Motivated Visualization,Guidelines,Life Sciences Visualization,Health,Medicine,Biology,Bioinformatics,Genomics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 505,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 407,
                "i": [
                    407
                ]
            }
        },
        {
            "name": "Austin H. Patton",
            "value": 2,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1060,
            "x": 242.8798504595011,
            "y": -216.93173636139636,
            "vy": 0,
            "vx": 0,
            "r": 1.0023028209556706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Insights From Experiments With Rigor in an EvoBio Design Study",
                "DOI": "10.1109/tvcg.2020.3030405",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030405",
                "FirstPage": 1106,
                "LastPage": 1116,
                "PaperType": "J",
                "Abstract": "Design study is an established approach of conducting problem-driven visualization research. The academic visualization community has produced a large body of work for reporting on design studies, informed by a handful of theoretical frameworks, and applied to a broad range of application areas. The result is an abundance of reported insights into visualization design, with an emphasis on novel visualization techniques and systems as the primary contribution of these studies. In recent work we proposed a new, interpretivist perspective on design study and six companion criteria for rigor that highlight the opportunities for researchers to contribute knowledge that extends beyond visualization idioms and software. In this work we conducted a year-long collaboration with evolutionary biologists to develop an interactive tool for visual exploration of multivariate datasets and phylogenetic trees. During this design study we experimented with methods to support three of the rigor criteria: ABUNDANT, REFLEXIVE, and TRANSPARENT. As a result we contribute two novel visualization techniques for the analysis of multivariate phylogenetic datasets, three methodological recommendations for conducting design studies drawn from reflections over our process of experimentation, and two writing devices for reporting interpretivist design study. We offer this work as an example for implementing the rigor criteria to produce a diverse range of knowledge contributions.",
                "AuthorNamesDeduped": "Jen Rogers;Austin H. Patton;Luke Harmon;Alexander Lex;Miriah Meyer",
                "AuthorNames": "Jen Rogers;Austin H. Patton;Luke Harmon;Alexander Lex;Miriah Meyer",
                "AuthorAffiliation": "University of Utah;Washington State University;University of Idaho;University of Utah;University of Utah",
                "InternalReferences": "0.1109/tvcg.2012.272;10.1109/tvcg.2014.2346431;10.1109/tvcg.2019.2934790;10.1109/tvcg.2015.2467452;10.1109/tvcg.2018.2865241;10.1109/tvcg.2018.2864526;10.1109/tvcg.2018.2864913;10.1109/tvcg.2015.2467811;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/tvcg.2009.111;10.1109/tvcg.2018.2865149;10.1109/tvcg.2019.2934788;10.1109/tvcg.2019.2934281;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864836;10.1109/tvcg.2018.2865076;10.1109/tvcg.2013.231",
                "AuthorKeywords": "Methodologies,Application Motivated Visualization,Guidelines,Life Sciences Visualization,Health,Medicine,Biology,Bioinformatics,Genomics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 505,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 407,
                "i": [
                    407
                ]
            }
        },
        {
            "name": "Luke Harmon",
            "value": 2,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1061,
            "x": -32.572106516948054,
            "y": 324.1744250816967,
            "vy": 0,
            "vx": 0,
            "r": 1.0023028209556706,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Insights From Experiments With Rigor in an EvoBio Design Study",
                "DOI": "10.1109/tvcg.2020.3030405",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030405",
                "FirstPage": 1106,
                "LastPage": 1116,
                "PaperType": "J",
                "Abstract": "Design study is an established approach of conducting problem-driven visualization research. The academic visualization community has produced a large body of work for reporting on design studies, informed by a handful of theoretical frameworks, and applied to a broad range of application areas. The result is an abundance of reported insights into visualization design, with an emphasis on novel visualization techniques and systems as the primary contribution of these studies. In recent work we proposed a new, interpretivist perspective on design study and six companion criteria for rigor that highlight the opportunities for researchers to contribute knowledge that extends beyond visualization idioms and software. In this work we conducted a year-long collaboration with evolutionary biologists to develop an interactive tool for visual exploration of multivariate datasets and phylogenetic trees. During this design study we experimented with methods to support three of the rigor criteria: ABUNDANT, REFLEXIVE, and TRANSPARENT. As a result we contribute two novel visualization techniques for the analysis of multivariate phylogenetic datasets, three methodological recommendations for conducting design studies drawn from reflections over our process of experimentation, and two writing devices for reporting interpretivist design study. We offer this work as an example for implementing the rigor criteria to produce a diverse range of knowledge contributions.",
                "AuthorNamesDeduped": "Jen Rogers;Austin H. Patton;Luke Harmon;Alexander Lex;Miriah Meyer",
                "AuthorNames": "Jen Rogers;Austin H. Patton;Luke Harmon;Alexander Lex;Miriah Meyer",
                "AuthorAffiliation": "University of Utah;Washington State University;University of Idaho;University of Utah;University of Utah",
                "InternalReferences": "0.1109/tvcg.2012.272;10.1109/tvcg.2014.2346431;10.1109/tvcg.2019.2934790;10.1109/tvcg.2015.2467452;10.1109/tvcg.2018.2865241;10.1109/tvcg.2018.2864526;10.1109/tvcg.2018.2864913;10.1109/tvcg.2015.2467811;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/tvcg.2009.111;10.1109/tvcg.2018.2865149;10.1109/tvcg.2019.2934788;10.1109/tvcg.2019.2934281;10.1109/tvcg.2012.213;10.1109/tvcg.2018.2864836;10.1109/tvcg.2018.2865076;10.1109/tvcg.2013.231",
                "AuthorKeywords": "Methodologies,Application Motivated Visualization,Guidelines,Life Sciences Visualization,Health,Medicine,Biology,Bioinformatics,Genomics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 505,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 407,
                "i": [
                    407
                ]
            }
        },
        {
            "name": "Chuan Bu",
            "value": 0,
            "numPapers": 8,
            "cluster": "1",
            "visible": 1,
            "index": 1062,
            "x": -195.05083038500226,
            "y": -261.161202260445,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects",
                "DOI": "10.1109/tvcg.2020.3030404",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030404",
                "FirstPage": 1634,
                "LastPage": 1643,
                "PaperType": "J",
                "Abstract": "In this paper, we propose SineStream, a new variant of streamgraphs that improves their readability by minimizing sine illusion effects. Such effects reflect the tendency of humans to take the orthogonal rather than the vertical distance between two curves as their distance. In SineStream, we connect the readability of streamgraphs with minimizing sine illusions and by doing so provide a perceptual foundation for their design. As the geometry of a streamgraph is controlled by its baseline (the bottom-most curve) and the ordering of the layers, we re-interpret baseline computation and layer ordering algorithms in terms of reducing sine illusion effects. For baseline computation, we improve previous methods by introducing a Gaussian weight to penalize layers with large thickness changes. For layer ordering, three design requirements are proposed and implemented through a hierarchical clustering algorithm. Quantitative experiments and user studies demonstrate that SineStream improves the readability and aesthetics of streamgraphs compared to state-of-the-art methods.",
                "AuthorNamesDeduped": "Chuan Bu;Quanjie Zhang;Qianwen Wang;Jian Zhang 0070;Michael Sedlmair;Oliver Deussen;Yunhai Wang",
                "AuthorNames": "Chuan Bu;Quanjie Zhang;Qianwen Wang;Jian Zhang;Michael Sedlmair;Oliver Deussen;Yunhai Wang",
                "AuthorAffiliation": "Shandong University, Qingdao, China;Shandong University, Qingdao, China;HongKong University of Science and Technology, Hong Kong, China;CNIC, CAS;VISUS, University of Stuttgart, Germany;SIAT, Konstanz University, Germany and Shenzhen VisuCA Key Lab, China;Shandong University, Qingdao, China",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2013.162;10.1109/tvcg.2010.129;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70541;10.1109/tvcg.2014.2346919;10.1109/tvcg.2013.221",
                "AuthorKeywords": "Streamgraphs,Sine Illusion,Readability",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 497,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 408,
                "i": [
                    408
                ]
            }
        },
        {
            "name": "Quanjie Zhang",
            "value": 0,
            "numPapers": 8,
            "cluster": "1",
            "visible": 1,
            "index": 1063,
            "x": 320.38693331268735,
            "y": 60.845813023507574,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "SineStream: Improving the Readability of Streamgraphs by Minimizing Sine Illusion Effects",
                "DOI": "10.1109/tvcg.2020.3030404",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030404",
                "FirstPage": 1634,
                "LastPage": 1643,
                "PaperType": "J",
                "Abstract": "In this paper, we propose SineStream, a new variant of streamgraphs that improves their readability by minimizing sine illusion effects. Such effects reflect the tendency of humans to take the orthogonal rather than the vertical distance between two curves as their distance. In SineStream, we connect the readability of streamgraphs with minimizing sine illusions and by doing so provide a perceptual foundation for their design. As the geometry of a streamgraph is controlled by its baseline (the bottom-most curve) and the ordering of the layers, we re-interpret baseline computation and layer ordering algorithms in terms of reducing sine illusion effects. For baseline computation, we improve previous methods by introducing a Gaussian weight to penalize layers with large thickness changes. For layer ordering, three design requirements are proposed and implemented through a hierarchical clustering algorithm. Quantitative experiments and user studies demonstrate that SineStream improves the readability and aesthetics of streamgraphs compared to state-of-the-art methods.",
                "AuthorNamesDeduped": "Chuan Bu;Quanjie Zhang;Qianwen Wang;Jian Zhang 0070;Michael Sedlmair;Oliver Deussen;Yunhai Wang",
                "AuthorNames": "Chuan Bu;Quanjie Zhang;Qianwen Wang;Jian Zhang;Michael Sedlmair;Oliver Deussen;Yunhai Wang",
                "AuthorAffiliation": "Shandong University, Qingdao, China;Shandong University, Qingdao, China;HongKong University of Science and Technology, Hong Kong, China;CNIC, CAS;VISUS, University of Stuttgart, Germany;SIAT, Konstanz University, Germany and Shenzhen VisuCA Key Lab, China;Shandong University, Qingdao, China",
                "InternalReferences": "0.1109/tvcg.2008.166;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2013.162;10.1109/tvcg.2010.129;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70541;10.1109/tvcg.2014.2346919;10.1109/tvcg.2013.221",
                "AuthorKeywords": "Streamgraphs,Sine Illusion,Readability",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 497,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 408,
                "i": [
                    408
                ]
            }
        },
        {
            "name": "Yaniv Frishman",
            "value": 63,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1064,
            "x": -277.4744712274371,
            "y": 171.6330906819959,
            "vy": 0,
            "vx": 0,
            "r": 1.072538860103627,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Dynamic Drawing of Clustered Graphs",
                "DOI": "10.1109/infvis.2004.18",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.18",
                "FirstPage": 191,
                "LastPage": 198,
                "PaperType": "C",
                "Abstract": "This paper presents an algorithm for drawing a sequence of graphs that contain an inherent grouping of their vertex set into clusters. It differs from previous work on dynamic graph drawing in the emphasis that is put on maintaining the clustered structure of the graph during incremental layout. The algorithm works online and allows arbitrary modifications to the graph. It is generic and can be implemented using a wide range of static force-directed graph layout tools. The paper introduces several metrics for measuring layout quality of dynamic clustered graphs. The performance of our algorithm is analyzed using these metrics. The algorithm has been successfully applied to visualizing mobile object software",
                "AuthorNamesDeduped": "Yaniv Frishman;Ayellet Tal",
                "AuthorNames": "Y. Frishman;Ayellet Tal",
                "AuthorAffiliation": "Department of Computer Science, Technion-Israel Institute of Technology, Israel;Department of Computer Science, Technion-Israel Institute of Technology, Israel",
                "InternalReferences": "0.1109/infvis.1999.801859",
                "AuthorKeywords": "graph drawing, dynamic layout, mobile objects, software visualization",
                "AminerCitationCount": 168,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 604,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2460,
                "i": [
                    2460
                ]
            }
        },
        {
            "name": "Ayellet Tal",
            "value": 65,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1065,
            "x": 88.70618894115793,
            "y": -314.1356586628388,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Dynamic Drawing of Clustered Graphs",
                "DOI": "10.1109/infvis.2004.18",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.18",
                "FirstPage": 191,
                "LastPage": 198,
                "PaperType": "C",
                "Abstract": "This paper presents an algorithm for drawing a sequence of graphs that contain an inherent grouping of their vertex set into clusters. It differs from previous work on dynamic graph drawing in the emphasis that is put on maintaining the clustered structure of the graph during incremental layout. The algorithm works online and allows arbitrary modifications to the graph. It is generic and can be implemented using a wide range of static force-directed graph layout tools. The paper introduces several metrics for measuring layout quality of dynamic clustered graphs. The performance of our algorithm is analyzed using these metrics. The algorithm has been successfully applied to visualizing mobile object software",
                "AuthorNamesDeduped": "Yaniv Frishman;Ayellet Tal",
                "AuthorNames": "Y. Frishman;Ayellet Tal",
                "AuthorAffiliation": "Department of Computer Science, Technion-Israel Institute of Technology, Israel;Department of Computer Science, Technion-Israel Institute of Technology, Israel",
                "InternalReferences": "0.1109/infvis.1999.801859",
                "AuthorKeywords": "graph drawing, dynamic layout, mobile objects, software visualization",
                "AminerCitationCount": 168,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 604,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2460,
                "i": [
                    2460
                ]
            }
        },
        {
            "name": "Michael Stonebraker",
            "value": 14,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 1066,
            "x": 146.85527080253954,
            "y": 291.6908113696981,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Kyrix-S: Authoring Scalable Scatterplot Visualizations of Big Data",
                "DOI": "10.1109/tvcg.2020.3030372",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030372",
                "FirstPage": 401,
                "LastPage": 411,
                "PaperType": "J",
                "Abstract": "Static scatterplots often suffer from the overdraw problem on big datasets where object overlap causes undesirable visual clutter. The use of zooming in scatterplots can help alleviate this problem. With multiple zoom levels, more screen real estate is available, allowing objects to be placed in a less crowded way. We call this type of visualization scalable scatterplot visualizations, or SSV for short. Despite the potential of SSVs, existing systems and toolkits fall short in supporting the authoring of SSVs due to three limitations. First, many systems have limited scalability, assuming that data fits in the memory of one computer. Second, too much developer work, e.g., using custom code to generate mark layouts or render objects, is required. Third, many systems focus on only a small subset of the SSV design space (e.g. supporting a specific type of visual marks). To address these limitations, we have developed Kyrix-S, a system for easy authoring of SSVs at scale. Kyrix-S derives a declarative grammar that enables specification of a variety of SSVs in a few tens of lines of code, based on an existing survey of scatterplot tasks and designs. The declarative grammar is supported by a distributed layout algorithm which automatically places visual marks onto zoom levels. We store data in a multi-node database and use multi-node spatial indexes to achieve interactive browsing of large SSVs. Extensive experiments show that 1) Kyrix-S enables interactive browsing of SSVs of billions of objects, with response times under 500ms and 2) Kyrix-S achieves 4X-9X reduction in specification compared to a state-of-the-art authoring system.",
                "AuthorNamesDeduped": "Wenbo Tao;Xinli Hou;Adam Sah;Leilani Battle;Remco Chang;Michael Stonebraker",
                "AuthorNames": "Wenbo Tao;Xinli Hou;Adam Sah;Leilani Battle;Remco Chang;Michael Stonebraker",
                "AuthorAffiliation": "Massachusetts Institute of Technology;Zhejiang University;Zhejiang University;University of Maryland;Tufts University;Massachusetts Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2006.161;10.1109/infvis.2003.1249019;10.1109/tvcg.2007.70535;10.1109/infvis.2002.1173156;10.1109/tvcg.2018.2865141;10.1109/visual.1998.745301;10.1109/tvcg.2013.179;10.1109/tvcg.2019.2934434;10.1109/tvcg.2014.2346452;10.1109/tvcg.2016.2598624;10.1109/tvcg.2017.2744184;10.1109/tvcg.2016.2599030;10.1109/infvis.2003.1249018;10.1109/tvcg.2019.2934555",
                "AuthorKeywords": "pan/zoom visualization,declarative grammar,scalability,performance optimization",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 731,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 417,
                "i": [
                    417
                ]
            }
        },
        {
            "name": "Katy Williams",
            "value": 25,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 1067,
            "x": -305.46392623817275,
            "y": -115.93873281677772,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "Guidelines For Pursuing and Revealing Data Abstractions",
                "DOI": "10.1109/tvcg.2020.3030355",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030355",
                "FirstPage": 1503,
                "LastPage": 1513,
                "PaperType": "J",
                "Abstract": "Many data abstraction types, such as networks or set relationships, remain unfamiliar to data workers beyond the visualization research community. We conduct a survey and series of interviews about how people describe their data, either directly or indirectly. We refer to the latter as latent data abstractions. We conduct a Grounded Theory analysis that (1) interprets the extent to which latent data abstractions exist, (2) reveals the far-reaching effects that the interventionist pursuit of such abstractions can have on data workers, (3) describes why and when data workers may resist such explorations, and (4) suggests how to take advantage of opportunities and mitigate risks through transparency about visualization research perspectives and agendas. We then use the themes and codes discovered in the Grounded Theory analysis to develop guidelines for data abstraction in visualization projects. To continue the discussion, we make our dataset open along with a visual interface for further exploration.",
                "AuthorNamesDeduped": "Alex Bigelow;Katy Williams;Katherine E. Isaacs",
                "AuthorNames": "Alex Bigelow;Katy Williams;Katherine E. Isaacs",
                "AuthorAffiliation": "University of Arizona;University of Arizona;University of Arizona",
                "InternalReferences": "0.1109/vast47406.2019.8986909;10.1109/infvis.2000.885092;10.1109/tvcg.2013.145;10.1109/vast.2011.6102441;10.1109/tvcg.2018.2865241;10.1109/tvcg.2014.2346331;10.1109/tvcg.2019.2934539;10.1109/tvcg.2009.111;10.1109/tvcg.2009.116;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744843;10.1109/tvcg.2019.2934538;10.1109/tvcg.2019.2934285",
                "AuthorKeywords": "Data abstraction,Grounded theory,Survey design,Data wrangling",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 434,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 422,
                "i": [
                    422
                ]
            }
        },
        {
            "name": "Natascha Sauber",
            "value": 105,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1068,
            "x": 303.69722841439363,
            "y": -120.90489424922217,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Multifield-Graphs: An Approach to Visualizing Correlations in Multifield Scalar Data",
                "DOI": "10.1109/tvcg.2006.165",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.165",
                "FirstPage": 917,
                "LastPage": 924,
                "PaperType": "J",
                "Abstract": "We present an approach to visualizing correlations in 3D multifield scalar data. The core of our approach is the computation of correlation fields, which are scalar fields containing the local correlations of subsets of the multiple fields. While the visualization of the correlation fields can be done using standard 3D volume visualization techniques, their huge number makes selection and handling a challenge. We introduce the multifield-graph to give an overview of which multiple fields correlate and to show the strength of their correlation. This information guides the selection of informative correlation fields for visualization. We use our approach to visually analyze a number of real and synthetic multifield datasets",
                "AuthorNamesDeduped": "Natascha Sauber;Holger Theisel;Hans-Peter Seidel",
                "AuthorNames": "Natascha Sauber;Holger Theisel;Hans-peter Seidel",
                "AuthorAffiliation": "MPI Informatik Saarbrücken, Germany;MPI Informatik Saarbrücken, Germany;MPI Informatik Saarbrücken, Germany",
                "InternalReferences": "0.1109/visual.1999.809865;10.1109/visual.2004.68;10.1109/visual.2004.46;10.1109/visual.1999.809905;10.1109/visual.2003.1250362",
                "AuthorKeywords": "Visualization, multifield, correlation",
                "AminerCitationCount": 151,
                "CitationCountCrossRef": 89,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 855,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2273,
                "i": [
                    2273
                ]
            }
        },
        {
            "name": "David Kouril",
            "value": 30,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1069,
            "x": -142.33335956917873,
            "y": 294.4337187785239,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Visualization Multi-Pipeline for Communicating Biology",
                "DOI": "10.1109/tvcg.2017.2744518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744518",
                "FirstPage": 883,
                "LastPage": 892,
                "PaperType": "J",
                "Abstract": "We propose a system to facilitate biology communication by developing a pipeline to support the instructional visualization of heterogeneous biological data on heterogeneous user-devices. Discoveries and concepts in biology are typically summarized with illustrations assembled manually from the interpretation and application of heterogenous data. The creation of such illustrations is time consuming, which makes it incompatible with frequent updates to the measured data as new discoveries are made. Illustrations are typically non-interactive, and when an illustration is updated, it still has to reach the user. Our system is designed to overcome these three obstacles. It supports the integration of heterogeneous datasets, reflecting the knowledge that is gained from different data sources in biology. After pre-processing the datasets, the system transforms them into visual representations as inspired by scientific illustrations. As opposed to traditional scientific illustration these representations are generated in real-time - they are interactive. The code generating the visualizations can be embedded in various software environments. To demonstrate this, we implemented both a desktop application and a remote-rendering server in which the pipeline is embedded. The remote-rendering server supports multi-threaded rendering and it is able to handle multiple users simultaneously. This scalability to different hardware environments, including multi-GPU setups, makes our system useful for efficient public dissemination of biological discoveries.",
                "AuthorNamesDeduped": "Peter Mindek;David Kouril;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorNames": "Peter Mindek;David Kouřil;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorAffiliation": "TU Wien;TU Wien;TU Wien;Allen Institute for Cell Science;Allen Institute for Cell Science;Allen Institute for Cell Science;VRVis Research Center and TU Wien;TU Wien",
                "InternalReferences": "0.1109/visual.2005.1532856;10.1109/visual.2000.885729;10.1109/scivis.2015.7429514",
                "AuthorKeywords": "Biological visualization,remote rendering,public dissemination",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 739,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 828,
                "i": [
                    828
                ]
            }
        },
        {
            "name": "Johannes Sorger",
            "value": 59,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1070,
            "x": -93.97883476712362,
            "y": -313.3974770412386,
            "vy": 0,
            "vx": 0,
            "r": 1.0679332181922856,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "LiteVis: Integrated Visualization for Simulation-Based Decision Support in Lighting Design",
                "DOI": "10.1109/tvcg.2015.2468011",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2468011",
                "FirstPage": 290,
                "LastPage": 299,
                "PaperType": "J",
                "Abstract": "State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles. On the one side, they decouple analysis and simulation specification. On the other side, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback of lighting designers. This feedback indicates that LiteVis successfully supports lighting designers to achieve key tasks more efficiently and with greater certainty.",
                "AuthorNamesDeduped": "Johannes Sorger;Thomas Ortner;Christian Luksch;Michael Schwärzler;M. Eduard Gröller;Harald Piringer",
                "AuthorNames": "Johannes Sorger;Thomas Ortner;Christian Luksch;Michael Schwärzler;Eduard Gröller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;VRVis Research Center;VRVis Research Center;VRVis Research Center;TU Wien;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2014.2346626;10.1109/tvcg.2011.185;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/infvis.2003.1249032;10.1109/tvcg.2013.173;10.1109/tvcg.2009.110;10.1109/tvcg.2014.2346321;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Integrating Spatial and Non-Spatial Data Visualization, Visualization in Physical Sciences and Engineering, Coordinated and Multiple Views, Visual Knowledge Discovery",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 823,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1131,
                "i": [
                    1131
                ]
            }
        },
        {
            "name": "Daniel Toloudis",
            "value": 30,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 1071,
            "x": 281.1252355954468,
            "y": 167.686021812805,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Visualization Multi-Pipeline for Communicating Biology",
                "DOI": "10.1109/tvcg.2017.2744518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744518",
                "FirstPage": 883,
                "LastPage": 892,
                "PaperType": "J",
                "Abstract": "We propose a system to facilitate biology communication by developing a pipeline to support the instructional visualization of heterogeneous biological data on heterogeneous user-devices. Discoveries and concepts in biology are typically summarized with illustrations assembled manually from the interpretation and application of heterogenous data. The creation of such illustrations is time consuming, which makes it incompatible with frequent updates to the measured data as new discoveries are made. Illustrations are typically non-interactive, and when an illustration is updated, it still has to reach the user. Our system is designed to overcome these three obstacles. It supports the integration of heterogeneous datasets, reflecting the knowledge that is gained from different data sources in biology. After pre-processing the datasets, the system transforms them into visual representations as inspired by scientific illustrations. As opposed to traditional scientific illustration these representations are generated in real-time - they are interactive. The code generating the visualizations can be embedded in various software environments. To demonstrate this, we implemented both a desktop application and a remote-rendering server in which the pipeline is embedded. The remote-rendering server supports multi-threaded rendering and it is able to handle multiple users simultaneously. This scalability to different hardware environments, including multi-GPU setups, makes our system useful for efficient public dissemination of biological discoveries.",
                "AuthorNamesDeduped": "Peter Mindek;David Kouril;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorNames": "Peter Mindek;David Kouřil;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorAffiliation": "TU Wien;TU Wien;TU Wien;Allen Institute for Cell Science;Allen Institute for Cell Science;Allen Institute for Cell Science;VRVis Research Center and TU Wien;TU Wien",
                "InternalReferences": "0.1109/visual.2005.1532856;10.1109/visual.2000.885729;10.1109/scivis.2015.7429514",
                "AuthorKeywords": "Biological visualization,remote rendering,public dissemination",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 739,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 828,
                "i": [
                    828
                ]
            }
        },
        {
            "name": "Blair Lyons",
            "value": 30,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 1072,
            "x": -320.71283112836454,
            "y": 66.2818221658784,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Visualization Multi-Pipeline for Communicating Biology",
                "DOI": "10.1109/tvcg.2017.2744518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744518",
                "FirstPage": 883,
                "LastPage": 892,
                "PaperType": "J",
                "Abstract": "We propose a system to facilitate biology communication by developing a pipeline to support the instructional visualization of heterogeneous biological data on heterogeneous user-devices. Discoveries and concepts in biology are typically summarized with illustrations assembled manually from the interpretation and application of heterogenous data. The creation of such illustrations is time consuming, which makes it incompatible with frequent updates to the measured data as new discoveries are made. Illustrations are typically non-interactive, and when an illustration is updated, it still has to reach the user. Our system is designed to overcome these three obstacles. It supports the integration of heterogeneous datasets, reflecting the knowledge that is gained from different data sources in biology. After pre-processing the datasets, the system transforms them into visual representations as inspired by scientific illustrations. As opposed to traditional scientific illustration these representations are generated in real-time - they are interactive. The code generating the visualizations can be embedded in various software environments. To demonstrate this, we implemented both a desktop application and a remote-rendering server in which the pipeline is embedded. The remote-rendering server supports multi-threaded rendering and it is able to handle multiple users simultaneously. This scalability to different hardware environments, including multi-GPU setups, makes our system useful for efficient public dissemination of biological discoveries.",
                "AuthorNamesDeduped": "Peter Mindek;David Kouril;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorNames": "Peter Mindek;David Kouřil;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorAffiliation": "TU Wien;TU Wien;TU Wien;Allen Institute for Cell Science;Allen Institute for Cell Science;Allen Institute for Cell Science;VRVis Research Center and TU Wien;TU Wien",
                "InternalReferences": "0.1109/visual.2005.1532856;10.1109/visual.2000.885729;10.1109/scivis.2015.7429514",
                "AuthorKeywords": "Biological visualization,remote rendering,public dissemination",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 739,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 828,
                "i": [
                    828
                ]
            }
        },
        {
            "name": "Graham Johnson",
            "value": 30,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1073,
            "x": 191.80028781793035,
            "y": -265.63631075769564,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Visualization Multi-Pipeline for Communicating Biology",
                "DOI": "10.1109/tvcg.2017.2744518",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744518",
                "FirstPage": 883,
                "LastPage": 892,
                "PaperType": "J",
                "Abstract": "We propose a system to facilitate biology communication by developing a pipeline to support the instructional visualization of heterogeneous biological data on heterogeneous user-devices. Discoveries and concepts in biology are typically summarized with illustrations assembled manually from the interpretation and application of heterogenous data. The creation of such illustrations is time consuming, which makes it incompatible with frequent updates to the measured data as new discoveries are made. Illustrations are typically non-interactive, and when an illustration is updated, it still has to reach the user. Our system is designed to overcome these three obstacles. It supports the integration of heterogeneous datasets, reflecting the knowledge that is gained from different data sources in biology. After pre-processing the datasets, the system transforms them into visual representations as inspired by scientific illustrations. As opposed to traditional scientific illustration these representations are generated in real-time - they are interactive. The code generating the visualizations can be embedded in various software environments. To demonstrate this, we implemented both a desktop application and a remote-rendering server in which the pipeline is embedded. The remote-rendering server supports multi-threaded rendering and it is able to handle multiple users simultaneously. This scalability to different hardware environments, including multi-GPU setups, makes our system useful for efficient public dissemination of biological discoveries.",
                "AuthorNamesDeduped": "Peter Mindek;David Kouril;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorNames": "Peter Mindek;David Kouřil;Johannes Sorger;Daniel Toloudis;Blair Lyons;Graham Johnson;M. Eduard Gröller;Ivan Viola",
                "AuthorAffiliation": "TU Wien;TU Wien;TU Wien;Allen Institute for Cell Science;Allen Institute for Cell Science;Allen Institute for Cell Science;VRVis Research Center and TU Wien;TU Wien",
                "InternalReferences": "0.1109/visual.2005.1532856;10.1109/visual.2000.885729;10.1109/scivis.2015.7429514",
                "AuthorKeywords": "Biological visualization,remote rendering,public dissemination",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 739,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 828,
                "i": [
                    828
                ]
            }
        },
        {
            "name": "Chris R. Johnson 0001",
            "value": 35,
            "numPapers": 41,
            "cluster": "11",
            "visible": 1,
            "index": 1074,
            "x": 38.024885018524785,
            "y": 325.5827208549741,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Direct Volume Rendering with Nonparametric Models of Uncertainty",
                "DOI": "10.1109/tvcg.2020.3030394",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030394",
                "FirstPage": 1797,
                "LastPage": 1807,
                "PaperType": "J",
                "Abstract": "We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending the DVR framework for nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art rendering parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Bo Ma 0002;Elham Sakhaee;Chris R. Johnson 0001;Alireza Entezari",
                "AuthorNames": "Tushar M. Athawale;Bo Ma;Elham Sakhaee;Chris R. Johnson;Alireza Entezari",
                "AuthorAffiliation": "University of Utah, Scientific Computing & Imaging (SCI) Institute, Salt Lake City;Department of CISE, Gainesville, University of Florida;Department of CISE, Gainesville, University of Florida;University of Utah, Scientific Computing & Imaging (SCI) Institute, Salt Lake City;Department of CISE, Gainesville, University of Florida",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2018.2864505;10.1109/tvcg.2015.2467958;10.1109/vast.2009.5332611;10.1109/tvcg.2012.227;10.1109/tvcg.2018.2864432;10.1109/tvcg.2012.227;10.1109/visual.2001.964519;10.1109/visual.2005.1532807;10.1109/tvcg.2007.70518;10.1109/tvcg.2014.2346455;10.1109/visual.1997.663848;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Volumes,uncertainty,nonparametric,2D transfer function",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 581,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 442,
                "i": [
                    442
                ]
            }
        },
        {
            "name": "Duong Hoang",
            "value": 12,
            "numPapers": 23,
            "cluster": "11",
            "visible": 1,
            "index": 1075,
            "x": -248.08169473504978,
            "y": -214.48886390063606,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "A Study of the Trade-off Between Reducing Precision and Reducing Resolution for Data Analysis and Visualization",
                "DOI": "10.1109/tvcg.2018.2864853",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864853",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "There currently exist two dominant strategies to reduce data sizes in analysis and visualization: reducing the precision of the data, e.g., through quantization, or reducing its resolution, e.g., by subsampling. Both have advantages and disadvantages and both face fundamental limits at which the reduced information ceases to be useful. The paper explores the additional gains that could be achieved by combining both strategies. In particular, we present a common framework that allows us to study the trade-off in reducing precision and/or resolution in a principled manner. We represent data reduction schemes as progressive streams of bits and study how various bit orderings such as by resolution, by precision, etc., impact the resulting approximation error across a variety of data sets as well as analysis tasks. Furthermore, we compute streams that are optimized for different tasks to serve as lower bounds on the achievable error. Scientific data management systems can use the results presented in this paper as guidance on how to store and stream data to make efficient use of the limited storage and bandwidth in practice.",
                "AuthorNamesDeduped": "Duong Hoang;Pavol Klacansky;Harsh Bhatia;Peer-Timo Bremer;Peter Lindstrom 0001;Valerio Pascucci",
                "AuthorNames": "Duong Hoang;Pavol Klacansky;Harsh Bhatia;Peer-Timo Bremer;Peter Lindstrom;Valerio Pascucci",
                "AuthorAffiliation": "University of Utah, Salt Lake City, UT, US;University of Utah, Salt Lake City, UT, US;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;University of Utah, Salt Lake City, UT, US",
                "InternalReferences": "0.1109/tvcg.2009.194;10.1109/tvcg.2007.70516;10.1109/visual.2002.1183757;10.1109/tvcg.2012.240;10.1109/visual.1999.809908;10.1109/tvcg.2014.2346458;10.1109/tvcg.2006.143;10.1109/visual.2004.51;10.1109/visual.2003.1250385;10.1109/tvcg.2011.214;10.1109/tvcg.2012.274;10.1109/tvcg.2015.2467412",
                "AuthorKeywords": "data compression,bit ordering,multi-resolution,data analysis",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 758,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 710,
                "i": [
                    710
                ]
            }
        },
        {
            "name": "Harsh Bhatia",
            "value": 12,
            "numPapers": 37,
            "cluster": "11",
            "visible": 1,
            "index": 1076,
            "x": 327.96523151861203,
            "y": -9.423742088111256,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "A Study of the Trade-off Between Reducing Precision and Reducing Resolution for Data Analysis and Visualization",
                "DOI": "10.1109/tvcg.2018.2864853",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864853",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "There currently exist two dominant strategies to reduce data sizes in analysis and visualization: reducing the precision of the data, e.g., through quantization, or reducing its resolution, e.g., by subsampling. Both have advantages and disadvantages and both face fundamental limits at which the reduced information ceases to be useful. The paper explores the additional gains that could be achieved by combining both strategies. In particular, we present a common framework that allows us to study the trade-off in reducing precision and/or resolution in a principled manner. We represent data reduction schemes as progressive streams of bits and study how various bit orderings such as by resolution, by precision, etc., impact the resulting approximation error across a variety of data sets as well as analysis tasks. Furthermore, we compute streams that are optimized for different tasks to serve as lower bounds on the achievable error. Scientific data management systems can use the results presented in this paper as guidance on how to store and stream data to make efficient use of the limited storage and bandwidth in practice.",
                "AuthorNamesDeduped": "Duong Hoang;Pavol Klacansky;Harsh Bhatia;Peer-Timo Bremer;Peter Lindstrom 0001;Valerio Pascucci",
                "AuthorNames": "Duong Hoang;Pavol Klacansky;Harsh Bhatia;Peer-Timo Bremer;Peter Lindstrom;Valerio Pascucci",
                "AuthorAffiliation": "University of Utah, Salt Lake City, UT, US;University of Utah, Salt Lake City, UT, US;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;University of Utah, Salt Lake City, UT, US",
                "InternalReferences": "0.1109/tvcg.2009.194;10.1109/tvcg.2007.70516;10.1109/visual.2002.1183757;10.1109/tvcg.2012.240;10.1109/visual.1999.809908;10.1109/tvcg.2014.2346458;10.1109/tvcg.2006.143;10.1109/visual.2004.51;10.1109/visual.2003.1250385;10.1109/tvcg.2011.214;10.1109/tvcg.2012.274;10.1109/tvcg.2015.2467412",
                "AuthorKeywords": "data compression,bit ordering,multi-resolution,data analysis",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 758,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 710,
                "i": [
                    710
                ]
            }
        },
        {
            "name": "Pavol Klacansky",
            "value": 46,
            "numPapers": 26,
            "cluster": "11",
            "visible": 1,
            "index": 1077,
            "x": -235.57504944063592,
            "y": 228.5922047687583,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "A Study of the Trade-off Between Reducing Precision and Reducing Resolution for Data Analysis and Visualization",
                "DOI": "10.1109/tvcg.2018.2864853",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864853",
                "FirstPage": 1193,
                "LastPage": 1203,
                "PaperType": "J",
                "Abstract": "There currently exist two dominant strategies to reduce data sizes in analysis and visualization: reducing the precision of the data, e.g., through quantization, or reducing its resolution, e.g., by subsampling. Both have advantages and disadvantages and both face fundamental limits at which the reduced information ceases to be useful. The paper explores the additional gains that could be achieved by combining both strategies. In particular, we present a common framework that allows us to study the trade-off in reducing precision and/or resolution in a principled manner. We represent data reduction schemes as progressive streams of bits and study how various bit orderings such as by resolution, by precision, etc., impact the resulting approximation error across a variety of data sets as well as analysis tasks. Furthermore, we compute streams that are optimized for different tasks to serve as lower bounds on the achievable error. Scientific data management systems can use the results presented in this paper as guidance on how to store and stream data to make efficient use of the limited storage and bandwidth in practice.",
                "AuthorNamesDeduped": "Duong Hoang;Pavol Klacansky;Harsh Bhatia;Peer-Timo Bremer;Peter Lindstrom 0001;Valerio Pascucci",
                "AuthorNames": "Duong Hoang;Pavol Klacansky;Harsh Bhatia;Peer-Timo Bremer;Peter Lindstrom;Valerio Pascucci",
                "AuthorAffiliation": "University of Utah, Salt Lake City, UT, US;University of Utah, Salt Lake City, UT, US;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA;University of Utah, Salt Lake City, UT, US",
                "InternalReferences": "0.1109/tvcg.2009.194;10.1109/tvcg.2007.70516;10.1109/visual.2002.1183757;10.1109/tvcg.2012.240;10.1109/visual.1999.809908;10.1109/tvcg.2014.2346458;10.1109/tvcg.2006.143;10.1109/visual.2004.51;10.1109/visual.2003.1250385;10.1109/tvcg.2011.214;10.1109/tvcg.2012.274;10.1109/tvcg.2015.2467412",
                "AuthorKeywords": "data compression,bit ordering,multi-resolution,data analysis",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 70,
                "DownloadsXplore": 758,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 710,
                "i": [
                    710
                ]
            }
        },
        {
            "name": "Kenneth I. Joy",
            "value": 429,
            "numPapers": 100,
            "cluster": "11",
            "visible": 1,
            "index": 1078,
            "x": 19.302845251005913,
            "y": -327.83745997859324,
            "vy": 0,
            "vx": 0,
            "r": 1.4939550949913645,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Comparative Visual Analysis of Lagrangian Transport in CFD Ensembles",
                "DOI": "10.1109/tvcg.2013.141",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.141",
                "FirstPage": 2743,
                "LastPage": 2752,
                "PaperType": "J",
                "Abstract": "Sets of simulation runs based on parameter and model variation, so-called ensembles, are increasingly used to model physical behaviors whose parameter space is too large or complex to be explored automatically. Visualization plays a key role in conveying important properties in ensembles, such as the degree to which members of the ensemble agree or disagree in their behavior. For ensembles of time-varying vector fields, there are numerous challenges for providing an expressive comparative visualization, among which is the requirement to relate the effect of individual flow divergence to joint transport characteristics of the ensemble. Yet, techniques developed for scalar ensembles are of little use in this context, as the notion of transport induced by a vector field cannot be modeled using such tools. We develop a Lagrangian framework for the comparison of flow fields in an ensemble. Our techniques evaluate individual and joint transport variance and introduce a classification space that facilitates incorporation of these properties into a common ensemble visualization. Variances of Lagrangian neighborhoods are computed using pathline integration and Principal Components Analysis. This allows for an inclusion of uncertainty measurements into the visualization and analysis approach. Our results demonstrate the usefulness and expressiveness of the presented method on several practical examples.",
                "AuthorNamesDeduped": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorNames": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorAffiliation": "University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA;University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA",
                "InternalReferences": "0.1109/tvcg.2011.203;10.1109/visual.1996.568116;10.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2007.70551",
                "AuthorKeywords": "Ensemble, flow field, time-varying, comparison, visualization, Lagrangian, variance, principal components analysis",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 793,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1338,
                "i": [
                    1338
                ]
            }
        },
        {
            "name": "Renato Pajarola",
            "value": 120,
            "numPapers": 32,
            "cluster": "11",
            "visible": 1,
            "index": 1079,
            "x": 207.31375025578785,
            "y": 254.8941132213155,
            "vy": 0,
            "vx": 0,
            "r": 1.1381692573402418,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Implant sprays: compression of progressive tetrahedral mesh connectivity",
                "DOI": "10.1109/visual.1999.809901",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809901",
                "FirstPage": 299,
                "LastPage": 305,
                "PaperType": "C",
                "Abstract": "Irregular tetrahedral meshes, which are popular in many engineering and scientific applications, often contain a large number of vertices. A mesh of V vertices and T tetrahedra requires 48 V bits or less to store the vertex coordinates, 4/spl middot/T/spl middot/log/sub 2/(V) bits to store the tetrahedra-vertex incidence relations, also called connectivity information, and kV bits to store the k-bit value samples associated with the vertices. Given that T is 5 to 7 times larger than V and that V often exceeds 32/sup 3/, the storage space required for the connectivity is larger than 300 V bits and thus dominates the overall storage cost. Our \"implants spray\" compression approach introduced in the paper reduces this cost to about 30 V bits or less-a 10:1 compression ratio. Furthermore, implant spray supports the progressive refinement of a crude model through a series of vertex-splits operations.",
                "AuthorNamesDeduped": "Renato Pajarola;Jarek Rossignac;Andrzej Szymczak",
                "AuthorNames": "R. Pajarola;J. Rossignac;A. Szymczak",
                "AuthorAffiliation": "GVU Center, Georgia Institute of Technology, College of Computing Georgia Institute of Technology, Atlanta, GA, USA;Graphics, Visualization & Usability Center, Georgia Institute of Technology, USA;Graphics, Visualization & Usability Center, Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/visual.1998.745315;10.1109/visual.1998.745329",
                "AuthorKeywords": "tetrahedral meshes, compression, multiresolution models, progressive incremental reconstruction",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 37,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3087,
                "i": [
                    3087
                ]
            }
        },
        {
            "name": "Theodoros Damoulas",
            "value": 29,
            "numPapers": 11,
            "cluster": "3",
            "visible": 1,
            "index": 1080,
            "x": -325.1957257242038,
            "y": -47.93474700786928,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Using Topological Analysis to Support Event-Guided Exploration in Urban Data",
                "DOI": "10.1109/tvcg.2014.2346449",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346449",
                "FirstPage": 2634,
                "LastPage": 2643,
                "PaperType": "J",
                "Abstract": "The explosion in the volume of data about urban environments has opened up opportunities to inform both policy and administration and thereby help governments improve the lives of their citizens, increase the efficiency of public services, and reduce the environmental harms of development. However, cities are complex systems and exploring the data they generate is challenging. The interaction between the various components in a city creates complex dynamics where interesting facts occur at multiple scales, requiring users to inspect a large number of data slices over time and space. Manual exploration of these slices is ineffective, time consuming, and in many cases impractical. In this paper, we propose a technique that supports event-guided exploration of large, spatio-temporal urban data. We model the data as time-varying scalar functions and use computational topology to automatically identify events in different data slices. To handle a potentially large number of events, we develop an algorithm to group and index them, thus allowing users to interactively explore and query event patterns on the fly. A visual exploration interface helps guide users towards data slices that display interesting events and trends. We demonstrate the effectiveness of our technique on two different data sets from New York City (NYC): data about taxi trips and subway service. We also report on the feedback we received from analysts at different NYC agencies.",
                "AuthorNamesDeduped": "Harish Doraiswamy;Nivan Ferreira;Theodoros Damoulas;Juliana Freire;Cláudio T. Silva",
                "AuthorNames": "Harish Doraiswamy;Nivan Ferreira;Theodoros Damoulas;Juliana Freire;Cláudio T. Silva",
                "AuthorAffiliation": "New York University;New York University;New York University;New York University;New York University",
                "InternalReferences": "0.1109/tvcg.2013.130;10.1109/tvcg.2007.70574;10.1109/vast.2008.4677356;10.1109/visual.2004.96;10.1109/tvcg.2013.179;10.1109/tvcg.2006.186;10.1109/vast.2008.4677354;10.1109/tvcg.2013.226;10.1109/tvcg.2013.228;10.1109/vast.2012.6400557;10.1109/vast.2011.6102454;10.1109/tvcg.2013.131",
                "AuthorKeywords": "Computational topology, event detection, spatio-temporal index, urban data, visual exploration",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 1202,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1213,
                "i": [
                    1213
                ]
            }
        },
        {
            "name": "Paulo J. S. Silva",
            "value": 5,
            "numPapers": 13,
            "cluster": "11",
            "visible": 1,
            "index": 1081,
            "x": 272.2945803705093,
            "y": -184.40624040646844,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "TopoMap: A 0-dimensional Homology Preserving Projection of High-Dimensional Data",
                "DOI": "10.1109/tvcg.2020.3030441",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030441",
                "FirstPage": 561,
                "LastPage": 571,
                "PaperType": "J",
                "Abstract": "Multidimensional Projection is a fundamental tool for high-dimensional data analytics and visualization. With very few exceptions, projection techniques are designed to map data from a high-dimensional space to a visual space so as to preserve some dissimilarity (similarity) measure, such as the Euclidean distance for example. In fact, although adopting distinct mathematical formulations designed to favor different aspects of the data, most multidimensional projection methods strive to preserve dissimilarity measures that encapsulate geometric properties such as distances or the proximity relation between data objects. However, geometric relations are not the only interesting property to be preserved in a projection. For instance, the analysis of particular structures such as clusters and outliers could be more reliably performed if the mapping process gives some guarantee as to topological invariants such as connected components and loops. This paper introduces TopoMap, a novel projection technique which provides topological guarantees during the mapping process. In particular, the proposed method performs the mapping from a high-dimensional space to a visual space, while preserving the 0-dimensional persistence diagram of the Rips filtration of the high-dimensional data, ensuring that the filtrations generate the same connected components when applied to the original as well as projected data. The presented case studies show that the topological guarantee provided by TopoMap not only brings confidence to the visual analytic process but also can be used to assist in the assessment of other projection methods.",
                "AuthorNamesDeduped": "Harish Doraiswamy;Julien Tierny;Paulo J. S. Silva;Luis Gustavo Nonato;Cláudio T. Silva",
                "AuthorNames": "Harish Doraiswamy;Julien Tierny;Paulo J. S. Silva;Luis Gustavo Nonato;Claudio Silva",
                "AuthorAffiliation": "New York University;CNRS and Sorbonne Université;University of Campinas;University of Sao Paulo, Sao Carlos;New York University",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/visual.2004.96;10.1109/tvcg.2014.2346449;10.1109/tvcg.2010.213;10.1109/tvcg.2014.2346403;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2011.220;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/vast.2010.5652940;10.1109/tvcg.2016.2598495;10.1109/tvcg.2017.2743938;10.1109/tvcg.2007.70601",
                "AuthorKeywords": "Topological data analysis,computational topology,high-dimensional data,projection",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 676,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 433,
                "i": [
                    433
                ]
            }
        },
        {
            "name": "Jonathas Costa",
            "value": 6,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 1082,
            "x": -76.2521521701948,
            "y": 320.0556346784313,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Interactive Visualization of Atmospheric Effects for Celestial Bodies",
                "DOI": "10.1109/tvcg.2020.3030333",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030333",
                "FirstPage": 785,
                "LastPage": 795,
                "PaperType": "J",
                "Abstract": "We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system is progressing with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining increasing interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and to provide contextual information for planetary data. Our model correctly accounts for the non-linear path of the light inside the atmosphere (in Earth's case), the light absorption effects by molecules and dust particles, such as the ozone layer and the Martian dust, and a wavelength-dependent phase function for Mie scattering. The mode focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls make it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples. However, it can be readily adapted for the exploration of other atmospheres found on, for example, of exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data, feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets.",
                "AuthorNamesDeduped": "Jonathas Costa;Alexander Bock 0002;Carter Emmart;Charles D. Hansen;Anders Ynnerman;Cláudio T. Silva",
                "AuthorNames": "Jonathas Costa;Alexander Bock;Carter Emmart;Charles Hansen;Anders Ynnerman;Cláudio Silva",
                "AuthorAffiliation": "New York University;Linköping University, University of Utah;American Museum of Natural History;University of Utah;Linköping University, University of Utah;New York University",
                "InternalReferences": "0.1109/tvcg.2017.2743958;10.1109/tvcg.2017.2743958;10.1109/tvcg.2019.2934259;10.1109/tvcg.2018.2864508",
                "AuthorKeywords": "Physical & Environmental Sciences,Engineering,Mathematics,Computer Graphics Techniques",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 592,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 435,
                "i": [
                    435
                ]
            }
        },
        {
            "name": "Carter Emmart",
            "value": 28,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1083,
            "x": -160.0423823534809,
            "y": -287.63941984822293,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization",
                "DOI": "10.1109/tvcg.2017.2743958",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743958",
                "FirstPage": 802,
                "LastPage": 811,
                "PaperType": "J",
                "Abstract": "Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case is showing dynamic processes, such as concurrent weather conditions on Earth that require temporal datasets. As a final example we use data from the New Horizons spacecraft which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.",
                "AuthorNamesDeduped": "Karl Bladin;Emil Axelsson;Erik Broberg;Carter Emmart;Patric Ljung;Alexander Bock 0002;Anders Ynnerman",
                "AuthorNames": "Karl Bladin;Emil Axelsson;Erik Broberg;Carter Emmart;Patric Ljung;Alexander Bock;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University;Linköping University;Linköping University;American Museum of Natural History;Linköping University;New York University, Linköping University;Linköping University",
                "InternalReferences": "0.1109/scivis.2015.7429503;10.1109/visual.2003.1250366;10.1109/visual.1997.663860",
                "AuthorKeywords": "Astronomical visualization,globe rendering,public dissemination,science communication,space mission visualization",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 1559,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 825,
                "i": [
                    825
                ]
            }
        },
        {
            "name": "Rodolfo Ostilla Monico",
            "value": 6,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 1084,
            "x": 312.4519949766758,
            "y": 104.03725695680068,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Visual Analysis of Spatio-temporal Relations of Pairwise Attributes in Unsteady Flow",
                "DOI": "10.1109/tvcg.2018.2864817",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864817",
                "FirstPage": 1246,
                "LastPage": 1256,
                "PaperType": "J",
                "Abstract": "Despite significant advances in the analysis and visualization of unsteady flow, the interpretation of it's behavior still remains a challenge. In this work, we focus on the linear correlation and non-linear dependency of different physical attributes of unsteady flows to aid their study from a new perspective. Specifically, we extend the existing spatial correlation quantification, i.e. the Local Correlation Coefficient (LCC), to the spatio-temporal domain to study the correlation of attribute-pairs from both the Eulerian and Lagrangian views. To study the dependency among attributes, which need not be linear, we extend and compute the mutual information (MI) among attributes over time. To help visualize and interpret the derived correlation and dependency among attributes associated with a particle, we encode the correlation and dependency values on individual pathlines. Finally, to utilize the correlation and MI computation results to identify regions with interesting flow behavior, we propose a segmentation strategy of the flow domain based on the ranking of the strength of the attributes relations. We have applied our correlation and dependency metrics to a number of 2D and 3D unsteady flows with varying spatio-temporal kernel sizes to demonstrate and assess their effectiveness.",
                "AuthorNamesDeduped": "Marzieh Berenjkoub;Rodolfo Ostilla Monico;Robert S. Laramee;Guoning Chen",
                "AuthorNames": "Marzieh Berenjkoub;Rodolfo Ostilla Monico;Robert S. Laramee;Guoning Chen",
                "AuthorAffiliation": "University of Houston, Houston, TX, US;University of Houston, Houston, TX, US;Swansea University, Swansea, West Glamorgan, GB;University of Houston, Houston, TX, US",
                "InternalReferences": "0.1109/tvcg.2010.131;10.1109/visual.2004.99;10.1109/tvcg.2010.198;10.1109/tvcg.2015.2467200;10.1109/tvcg.2009.200;10.1109/tvcg.2010.131;10.1109/tvcg.2013.133;10.1109/tvcg.2011.249;10.1109/tvcg.2010.132;10.1109/tvcg.2006.165",
                "AuthorKeywords": "Unsteady flow,correlation study,mutual information",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": null,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 722,
                "i": [
                    722
                ]
            }
        },
        {
            "name": "Teng-Yok Lee",
            "value": 127,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 1085,
            "x": -300.80712317372365,
            "y": 134.40637874724644,
            "vy": 0,
            "vx": 0,
            "r": 1.1462291306850891,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "An Information-Theoretic Framework for Flow Visualization",
                "DOI": "10.1109/tvcg.2010.131",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.131",
                "FirstPage": 1216,
                "LastPage": 1224,
                "PaperType": "J",
                "Abstract": "The process of visualization can be seen as a visual communication channel where the input to the channel is the raw data, and the output is the result of a visualization algorithm. From this point of view, we can evaluate the effectiveness of visualization by measuring how much information in the original data is being communicated through the visual communication channel. In this paper, we present an information-theoretic framework for flow visualization with a special focus on streamline generation. In our framework, a vector field is modeled as a distribution of directions from which Shannon's entropy is used to measure the information content in the field. The effectiveness of the streamlines displayed in visualization can be measured by first constructing a new distribution of vectors derived from the existing streamlines, and then comparing this distribution with that of the original data set using the conditional entropy. The conditional entropy between these two distributions indicates how much information in the original data remains hidden after the selected streamlines are displayed. The quality of the visualization can be improved by progressively introducing new streamlines until the conditional entropy converges to a small value. We describe the key components of our framework with detailed analysis, and show that the framework can effectively visualize 2D and 3D flow data.",
                "AuthorNamesDeduped": "Lijie Xu;Teng-Yok Lee;Han-Wei Shen",
                "AuthorNames": "Lijie Xu;Teng-Yok Lee;Han-Wei Shen",
                "AuthorAffiliation": "Ohio State Uinversity, USA;Ohio State Uinversity, USA;Ohio State Uinversity, USA",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/tvcg.2007.70595;10.1109/tvcg.2007.70615;10.1109/tvcg.2006.152;10.1109/tvcg.2006.116;10.1109/visual.2005.1532832;10.1109/visual.2005.1532831;10.1109/tvcg.2008.140;10.1109/visual.2000.885690;10.1109/visual.2005.1532833;10.1109/tvcg.2007.70579;10.1109/visual.2002.1183785",
                "AuthorKeywords": "Flow field visualization, information theory, streamline generation",
                "AminerCitationCount": 180,
                "CitationCountCrossRef": 106,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 1677,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1767,
                "i": [
                    1767
                ]
            }
        },
        {
            "name": "Alireza Entezari",
            "value": 81,
            "numPapers": 63,
            "cluster": "6",
            "visible": 1,
            "index": 1086,
            "x": 131.07594075511426,
            "y": -302.43858509648163,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Direct Volume Rendering with Nonparametric Models of Uncertainty",
                "DOI": "10.1109/tvcg.2020.3030394",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030394",
                "FirstPage": 1797,
                "LastPage": 1807,
                "PaperType": "J",
                "Abstract": "We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending the DVR framework for nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art rendering parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Bo Ma 0002;Elham Sakhaee;Chris R. Johnson 0001;Alireza Entezari",
                "AuthorNames": "Tushar M. Athawale;Bo Ma;Elham Sakhaee;Chris R. Johnson;Alireza Entezari",
                "AuthorAffiliation": "University of Utah, Scientific Computing & Imaging (SCI) Institute, Salt Lake City;Department of CISE, Gainesville, University of Florida;Department of CISE, Gainesville, University of Florida;University of Utah, Scientific Computing & Imaging (SCI) Institute, Salt Lake City;Department of CISE, Gainesville, University of Florida",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2018.2864505;10.1109/tvcg.2015.2467958;10.1109/vast.2009.5332611;10.1109/tvcg.2012.227;10.1109/tvcg.2018.2864432;10.1109/tvcg.2012.227;10.1109/visual.2001.964519;10.1109/visual.2005.1532807;10.1109/tvcg.2007.70518;10.1109/tvcg.2014.2346455;10.1109/visual.1997.663848;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Volumes,uncertainty,nonparametric,2D transfer function",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 581,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 442,
                "i": [
                    442
                ]
            }
        },
        {
            "name": "Raghu Machiraju",
            "value": 138,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 1087,
            "x": 107.69253466196287,
            "y": 311.69266590358194,
            "vy": 0,
            "vx": 0,
            "r": 1.1588946459412781,
            "node": {
                "Conference": "VAST",
                "Year": 2006,
                "Title": "Visual Exploration of Spatio-temporal Relationships for Scientific Data",
                "DOI": "10.1109/vast.2006.261451",
                "Link": "http://dx.doi.org/10.1109/VAST.2006.261451",
                "FirstPage": 11,
                "LastPage": 18,
                "PaperType": "C",
                "Abstract": "Spatio-temporal relationships among features extracted from temporally-varying scientific datasets can provide useful information about the evolution of an individual feature and its interactions with other features. However, extracting such useful relationships without user guidance is cumbersome and often an error prone process. In this paper, we present a visual analysis system that interactively discovers such relationships from the trajectories of derived features. We describe analysis algorithms to derive various spatial and spatio-temporal relationships. A visual interface is presented using which the user can interactively select spatial and temporal extents to guide the knowledge discovery process. We show the usefulness of our proposed algorithms on datasets originating from computational fluid dynamics. We also demonstrate how the derived relationships can help in explaining the occurrence of critical events like merging and bifurcation of the vortices",
                "AuthorNamesDeduped": "Bryan Mehta;Srinivasan Parthasarathy 0001;Raghu Machiraju",
                "AuthorNames": "Sameep Mehta;Srinivasan Parthasarathy;Raghu Machiraju",
                "AuthorAffiliation": "Computer Science & Engineering, Ohio State Uinversity, USA;Computer Science & Engineering, Ohio State Uinversity, USA;Computer Science & Engineering, Ohio State Uinversity, USA",
                "InternalReferences": "0.1109/visual.2002.1183789",
                "AuthorKeywords": "Knowledge Discovery, Scientific Analytics, Trajectory Analysis, Feature Extraction, Spatio-temporal Predicates, Visual Analytics",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 269,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2265,
                "i": [
                    2265
                ]
            }
        },
        {
            "name": "Roni Yagel",
            "value": 148,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 1088,
            "x": -290.0877758500895,
            "y": -157.1594168427341,
            "vy": 0,
            "vx": 0,
            "r": 1.1704087507196315,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "A comparison of normal estimation schemes",
                "DOI": "10.1109/visual.1997.663848",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663848",
                "FirstPage": 19,
                "LastPage": 26,
                "PaperType": "C",
                "Abstract": "The task of reconstructing the derivative of a discrete function is essential for its shading and rendering as well as being widely used in image processing and analysis. We survey the possible methods for normal estimation in volume rendering and divide them into two classes based on the delivered numerical accuracy. The three members of the first class determine the normal in two steps by employing both interpolation and derivative filters. Among these is a new method which has never been realized. The members of the first class are all equally accurate. The second class has only one member and employs a continuous derivative filter obtained through the analytic derivation of an interpolation filter. We use the new method to analytically compare the accuracy of the first class with that of the second. As a result of our analysis we show that even inexpensive schemes can in fact be more accurate than high order methods. We describe the theoretical computational cost of applying the schemes in a volume rendering application and provide guidelines for helping one choose a scheme for estimating derivatives. In particular we find that the new method can be very inexpensive and can compete with the normal estimations which pre-shade and pre-classify the volume (M. Levoy, 1988).",
                "AuthorNamesDeduped": "Torsten Möller;Raghu Machiraju;Klaus Mueller 0001;Roni Yagel",
                "AuthorNames": "T. Moller;R. Machiraju;K. Mueller;R. Yagel",
                "AuthorAffiliation": "The Ohio State University, Columbus, Ohio;Mississippi State University, Mississippi;The Ohio State University, Columbus, Ohio;The Ohio State University, Columbus, Ohio",
                "InternalReferences": "0.1109/visual.1994.346331",
                "AuthorKeywords": "interpolation filters, derivative filters, filter design, normal estimation, Taylor series expansion, efficient volume rendering",
                "AminerCitationCount": 129,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 185,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3244,
                "i": [
                    3244
                ]
            }
        },
        {
            "name": "Chandrajit L. Bajaj",
            "value": 168,
            "numPapers": 18,
            "cluster": "11",
            "visible": 1,
            "index": 1089,
            "x": 320.20834425816315,
            "y": -80.10378435158783,
            "vy": 0,
            "vx": 0,
            "r": 1.1934369602763386,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Case study: Interactive rendering of adaptive mesh refinement data",
                "DOI": "10.1109/visual.2002.1183820",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183820",
                "FirstPage": 521,
                "LastPage": 524,
                "PaperType": "C",
                "Abstract": "Adaptive mesh refinement (AMR) is a popular computational simulation technique used in various scientific and engineering fields. Although AMR data is organized in a hierarchical multi-resolution data structure, the traditional volume visualization algorithms such as ray-casting and splatting cannot handle the form without converting it to a sophisticated data structure. In this paper, we present a hierarchical multi-resolution splatting technique using k-d trees and octrees for AMR data that is suitable for implementation on the latest consumer PC graphics hardware. We describe a graphical user interface to set transfer function and viewing/rendering parameters interactively. Experimental results obtained on a general purpose PC equipped with NVIDIA GeForce card are presented to demonstrate that the technique can interactively render AMR data (over 20 frames per second). Our scheme can easily be applied to parallel rendering of time-varying AMR data.",
                "AuthorNamesDeduped": "Sanghun Park;Chandrajit L. Bajaj;Vinay Siddavanahalli",
                "AuthorNames": "Sanghun Park;C.L. Bajaj;V. Siddavanahalli",
                "AuthorAffiliation": "CCV TICAM, University of Technology, Austin, USA;CCV TICAM, Department of Computer Sciences, University of Technology, Austin, USA;CCV TICAM, Department of Computer Sciences, University of Technology, Austin, USA",
                "InternalReferences": "0.1109/visual.1993.398877",
                "AuthorKeywords": "AMR, K-d trees, Octree, Hierarchical splatting, Texture mapping",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 10,
                "DownloadsXplore": 119,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2830,
                "i": [
                    2830
                ]
            }
        },
        {
            "name": "Khaled A. Al-Thelaya",
            "value": 2,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1090,
            "x": -182.08584516242308,
            "y": 275.4900088777923,
            "vy": 0,
            "vx": 0,
            "r": 1.0023028209556706,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "The Mixture Graph-A Data Structure for Compressing, Rendering, and Querying Segmentation Histograms",
                "DOI": "10.1109/tvcg.2020.3030451",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030451",
                "FirstPage": 645,
                "LastPage": 655,
                "PaperType": "J",
                "Abstract": "In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leafs) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178 x speed-up over naive parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions.",
                "AuthorNamesDeduped": "Khaled A. Al-Thelaya;Marco Agus;Jens Schneider 0002",
                "AuthorNames": "Khaled Ai- Thelaya;Marco Agus;Jens Schneider",
                "AuthorAffiliation": "Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar;Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar;Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2014.2346312;10.1109/tvcg.2013.142;10.1109/tvcg.2018.2864847;10.1109/tvcg.2007.70516;10.1109/tvcg.2017.2744238;10.1109/visual.2003.1250386;10.1109/tvcg.2014.2346371;10.1109/tvcg.2009.178;10.1109/tvcg.2010.168;10.1109/tvcg.2012.240;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Segmented Volumes,Data Structures,Sparse Data",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 438,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 444,
                "i": [
                    444
                ]
            }
        },
        {
            "name": "Marco Agus",
            "value": 39,
            "numPapers": 47,
            "cluster": "6",
            "visible": 1,
            "index": 1091,
            "x": -51.85014903036296,
            "y": -326.2844802400646,
            "vy": 0,
            "vx": 0,
            "r": 1.0449050086355787,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "The Mixture Graph-A Data Structure for Compressing, Rendering, and Querying Segmentation Histograms",
                "DOI": "10.1109/tvcg.2020.3030451",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030451",
                "FirstPage": 645,
                "LastPage": 655,
                "PaperType": "J",
                "Abstract": "In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leafs) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178 x speed-up over naive parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions.",
                "AuthorNamesDeduped": "Khaled A. Al-Thelaya;Marco Agus;Jens Schneider 0002",
                "AuthorNames": "Khaled Ai- Thelaya;Marco Agus;Jens Schneider",
                "AuthorAffiliation": "Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar;Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar;Hamad Bin Khalifa University (HBKU), College of Science and Engineering (CSE), Education City, Doha, Qatar",
                "InternalReferences": "0.1109/tvcg.2015.2467441;10.1109/tvcg.2014.2346312;10.1109/tvcg.2013.142;10.1109/tvcg.2018.2864847;10.1109/tvcg.2007.70516;10.1109/tvcg.2017.2744238;10.1109/visual.2003.1250386;10.1109/tvcg.2014.2346371;10.1109/tvcg.2009.178;10.1109/tvcg.2010.168;10.1109/tvcg.2012.240;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Segmented Volumes,Data Structures,Sparse Data",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 438,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 444,
                "i": [
                    444
                ]
            }
        },
        {
            "name": "Torin McDonald",
            "value": 5,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 1092,
            "x": 258.7531353880101,
            "y": 205.66189468852534,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Steve Petruzza",
            "value": 21,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 1093,
            "x": -329.86998958687946,
            "y": 23.14713740297114,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Frederick Federer",
            "value": 32,
            "numPapers": 13,
            "cluster": "11",
            "visible": 1,
            "index": 1094,
            "x": 227.70424328773586,
            "y": -240.00161997111522,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Alessandra Angelucci",
            "value": 32,
            "numPapers": 13,
            "cluster": "11",
            "visible": 1,
            "index": 1095,
            "x": -5.785898869261685,
            "y": 330.9328079448677,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements",
                "DOI": "10.1109/tvcg.2020.3030363",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030363",
                "FirstPage": 744,
                "LastPage": 754,
                "PaperType": "J",
                "Abstract": "Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.",
                "AuthorNamesDeduped": "Torin McDonald;Will Usher 0001;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorNames": "Torin McDonald;Will Usher;Nate Morrical;Attila Gyulassy;Steve Petruzza;Frederick Federer;Alessandra Angelucci;Valerio Pascucci",
                "AuthorAffiliation": "SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah, Utah State University;Moran Eye Institute, University of Utah;Moran Eye Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2017.2743980;10.1109/tvcg.2018.2864848;10.1109/tvcg.2007.70603;10.1109/tvcg.2015.2467432;10.1109/tvcg.2009.178;10.1109/tvcg.2006.186;10.1109/tvcg.2019.2934620;10.1109/tvcg.2017.2744321;10.1109/tvcg.2012.213;10.1109/tvcg.2017.2744079;10.1109/tvcg.2018.2865152;10.1109/tvcg.2017.2743938;10.1109/tvcg.2018.2864852",
                "AuthorKeywords": "Virtual Reality,Morse-Smale Complex,Semi-automatic Neuron Tracing",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 71,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 446,
                "i": [
                    446
                ]
            }
        },
        {
            "name": "Tobias Rapp",
            "value": 12,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1096,
            "x": -219.3756153599819,
            "y": -248.04100343578136,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Visual Analysis of Large Multivariate Scattered Data using Clustering and Probabilistic Summaries",
                "DOI": "10.1109/tvcg.2020.3030379",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030379",
                "FirstPage": 1580,
                "LastPage": 1590,
                "PaperType": "J",
                "Abstract": "Rapidly growing data sizes of scientific simulations pose significant challenges for interactive visualization and analysis techniques. In this work, we propose a compact probabilistic representation to interactively visualize large scattered datasets. In contrast to previous approaches that represent blocks of volumetric data using probability distributions, we model clusters of arbitrarily structured multivariate data. In detail, we discuss how to efficiently represent and store a high-dimensional distribution for each cluster. We observe that it suffices to consider low-dimensional marginal distributions for two or three data dimensions at a time to employ common visual analysis techniques. Based on this observation, we represent high-dimensional distributions by combinations of low-dimensional Gaussian mixture models. We discuss the application of common interactive visual analysis techniques to this representation. In particular, we investigate several frequency-based views, such as density plots in 1D and 2D, density-based parallel coordinates, and a time histogram. We visualize the uncertainty introduced by the representation, discuss a level-of-detail mechanism, and explicitly visualize outliers. Furthermore, we propose a spatial visualization by splatting anisotropic 3D Gaussians for which we derive a closed-form solution. Lastly, we describe the application of brushing and linking to this clustered representation. Our evaluation on several large, real-world datasets demonstrates the scaling of our approach.",
                "AuthorNamesDeduped": "Tobias Rapp;Christoph Peters 0002;Carsten Dachsbacher",
                "AuthorNames": "Tobias Rapp;Christoph Peters;Carsten Dachsbacher",
                "AuthorAffiliation": "Karlsruhe Institute of Technology;Karlsruhe Institute of Technology;Karlsruhe Institute of Technology",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2008.119;10.1109/tvcg.2008.131;10.1109/visual.2003.1250389;10.1109/tvcg.2016.2598604;10.1109/tvcg.2010.176;10.1109/tvcg.2017.2744099;10.1109/tvcg.2018.2864801;10.1109/tvcg.2009.131;10.1109/infvis.2005.1532138;10.1109/tvcg.2006.170;10.1109/tvcg.2019.2934335;10.1109/tvcg.2014.2346324;10.1109/visual.2001.964490",
                "AuthorKeywords": "interactive visual analysis,probabilistic data summaries,multivariate data,scattered data,Gaussian mixture models,Gaussian rendering",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 496,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 447,
                "i": [
                    447
                ]
            }
        },
        {
            "name": "Christoph Peters 0002",
            "value": 12,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1097,
            "x": 329.460171731671,
            "y": 34.72744221128052,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Visual Analysis of Large Multivariate Scattered Data using Clustering and Probabilistic Summaries",
                "DOI": "10.1109/tvcg.2020.3030379",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030379",
                "FirstPage": 1580,
                "LastPage": 1590,
                "PaperType": "J",
                "Abstract": "Rapidly growing data sizes of scientific simulations pose significant challenges for interactive visualization and analysis techniques. In this work, we propose a compact probabilistic representation to interactively visualize large scattered datasets. In contrast to previous approaches that represent blocks of volumetric data using probability distributions, we model clusters of arbitrarily structured multivariate data. In detail, we discuss how to efficiently represent and store a high-dimensional distribution for each cluster. We observe that it suffices to consider low-dimensional marginal distributions for two or three data dimensions at a time to employ common visual analysis techniques. Based on this observation, we represent high-dimensional distributions by combinations of low-dimensional Gaussian mixture models. We discuss the application of common interactive visual analysis techniques to this representation. In particular, we investigate several frequency-based views, such as density plots in 1D and 2D, density-based parallel coordinates, and a time histogram. We visualize the uncertainty introduced by the representation, discuss a level-of-detail mechanism, and explicitly visualize outliers. Furthermore, we propose a spatial visualization by splatting anisotropic 3D Gaussians for which we derive a closed-form solution. Lastly, we describe the application of brushing and linking to this clustered representation. Our evaluation on several large, real-world datasets demonstrates the scaling of our approach.",
                "AuthorNamesDeduped": "Tobias Rapp;Christoph Peters 0002;Carsten Dachsbacher",
                "AuthorNames": "Tobias Rapp;Christoph Peters;Carsten Dachsbacher",
                "AuthorAffiliation": "Karlsruhe Institute of Technology;Karlsruhe Institute of Technology;Karlsruhe Institute of Technology",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2008.119;10.1109/tvcg.2008.131;10.1109/visual.2003.1250389;10.1109/tvcg.2016.2598604;10.1109/tvcg.2010.176;10.1109/tvcg.2017.2744099;10.1109/tvcg.2018.2864801;10.1109/tvcg.2009.131;10.1109/infvis.2005.1532138;10.1109/tvcg.2006.170;10.1109/tvcg.2019.2934335;10.1109/tvcg.2014.2346324;10.1109/visual.2001.964490",
                "AuthorKeywords": "interactive visual analysis,probabilistic data summaries,multivariate data,scattered data,Gaussian mixture models,Gaussian rendering",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 496,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 447,
                "i": [
                    447
                ]
            }
        },
        {
            "name": "Christian Dick",
            "value": 71,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 1098,
            "x": -266.5130627005245,
            "y": 197.0299150128891,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Multi-Charts for Comparative 3D Ensemble Visualization",
                "DOI": "10.1109/tvcg.2014.2346448",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346448",
                "FirstPage": 2694,
                "LastPage": 2703,
                "PaperType": "J",
                "Abstract": "A comparative visualization of multiple volume data sets is challenging due to the inherent occlusion effects, yet it is important to effectively reveal uncertainties, correlations and reliable trends in 3D ensemble fields. In this paper we present bidirectional linking of multi-charts and volume visualization as a means to analyze visually 3D scalar ensemble fields at the data level. Multi-charts are an extension of conventional bar and line charts: They linearize the 3D data points along a space-filling curve and draw them as multiple charts in the same plot area. The bar charts encode statistical information on ensemble members, such as histograms and probability densities, and line charts are overlayed to allow comparing members against the ensemble. Alternative linearizations based on histogram similarities or ensemble variation allow clustering of spatial locations depending on data distribution. Multi-charts organize the data at multiple scales to quickly provide overviews and enable users to select regions exhibiting interesting behavior interactively. They are further put into a spatial context by allowing the user to brush or query value intervals and specific distributions, and to simultaneously visualize the corresponding spatial points via volume rendering. By providing a picking mechanism in 3D and instantly highlighting the corresponding data points in the chart, the user can go back and forth between the abstract and the 3D view to focus the analysis.",
                "AuthorNamesDeduped": "Ismail Demir;Christian Dick;Rüdiger Westermann",
                "AuthorNames": "Ismail Demir;Christian Dick;Rüdiger Westermann",
                "AuthorAffiliation": "Computer Graphics and Visualization Group, Technische Universität München Informatik 15, Garching, Germany;Computer Graphics and Visualization Group, Technische Universität München Informatik 15, Garching, Germany;Computer Graphics and Visualization Group, Technische Universität München Informatik 15, Garching, Germany",
                "InternalReferences": "0.1109/tvcg.2013.143;10.1109/visual.2000.885739;10.1109/tvcg.2006.159;10.1109/tvcg.2008.139;10.1109/tvcg.2007.70518;10.1109/tvcg.2010.181;10.1109/tvcg.2009.198;10.1109/infvis.2002.1173157;10.1109/visual.1999.809921",
                "AuthorKeywords": "Ensemble visualization, brushing and linking, statistical analysis",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1382,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1212,
                "i": [
                    1212
                ]
            }
        },
        {
            "name": "Rainer Burgkart",
            "value": 36,
            "numPapers": 15,
            "cluster": "6",
            "visible": 1,
            "index": 1099,
            "x": 63.4555058589515,
            "y": -325.45875126686116,
            "vy": 0,
            "vx": 0,
            "r": 1.0414507772020725,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Distance Visualization for Interactive 3D Implant Planning",
                "DOI": "10.1109/tvcg.2011.189",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.189",
                "FirstPage": 2173,
                "LastPage": 2182,
                "PaperType": "J",
                "Abstract": "An instant and quantitative assessment of spatial distances between two objects plays an important role in interactive applications such as virtual model assembly, medical operation planning, or computational steering. While some research has been done on the development of distance-based measures between two objects, only very few attempts have been reported to visualize such measures in interactive scenarios. In this paper we present two different approaches for this purpose, and we investigate the effectiveness of these approaches for intuitive 3D implant positioning in a medical operation planning system. The first approach uses cylindrical glyphs to depict distances, which smoothly adapt their shape and color to changing distances when the objects are moved. This approach computes distances directly on the polygonal object representations by means of ray/triangle mesh intersection. The second approach introduces a set of slices as additional geometric structures, and uses color coding on surfaces to indicate distances. This approach obtains distances from a precomputed distance field of each object. The major findings of the performed user study indicate that a visualization that can facilitate an instant and quantitative analysis of distances between two objects in interactive 3D scenarios is demanding, yet can be achieved by including additional monocular cues into the visualization.",
                "AuthorNamesDeduped": "Christian Dick;Rainer Burgkart;Rüdiger Westermann",
                "AuthorNames": "Christian Dick;Rainer Burgkart;Rudiger Westermann",
                "AuthorAffiliation": "Computer Graphics and Visualization Group, Technische Universität München, Germany;Klinik für Orthopädie u. Unfallchirurgie am Klinikum Rechts der Isar, Technische Universität München, Germany;Computer Graphics and Visualization Group, Technische Universität München, Germany",
                "InternalReferences": "0.1109/visual.2002.1183752;10.1109/tvcg.2009.184",
                "AuthorKeywords": "Distance visualization, biomedical visualization, implant planning, glyphs, distance fields",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 600,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1667,
                "i": [
                    1667
                ]
            }
        },
        {
            "name": "Pepe Eulzer",
            "value": 4,
            "numPapers": 30,
            "cluster": "6",
            "visible": 1,
            "index": 1100,
            "x": 173.13279134597937,
            "y": 282.97532853722345,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "GRay: Ray Casting for Visualization and Interactive Data Exploration of Gaussian Mixture Models",
                "DOI": "10.1109/tvcg.2022.3209374",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209374",
                "FirstPage": 526,
                "LastPage": 536,
                "PaperType": "J",
                "Abstract": "The Gaussian mixture model (GMM) describes the distribution of random variables from several different populations. GMMs have widespread applications in probability theory, statistics, machine learning for unsupervised cluster analysis and topic modeling, as well as in deep learning pipelines. So far, few efforts have been made to explore the underlying point distribution in combination with the GMMs, in particular when the data becomes high-dimensional and when the GMMs are composed of many Gaussians. We present an analysis tool comprising various GPU-based visualization techniques to explore such complex GMMs. To facilitate the exploration of high-dimensional data, we provide a novel navigation system to analyze the underlying data. Instead of projecting the data to 2D, we utilize interactive 3D views to better support users in understanding the spatial arrangements of the Gaussian distributions. The interactive system is composed of two parts: (1) raycasting-based views that visualize cluster memberships, spatial arrangements, and support the discovery of new modes. (2) overview visualizations that enable the comparison of Gaussians with each other, as well as small multiples of different choices of basis vectors. Users are supported in their exploration with customization tools and smooth camera navigations. Our tool was developed and assessed by five domain experts, and its usefulness was evaluated with 23 participants. To demonstrate the effectiveness, we identify interesting features in several data sets.",
                "AuthorNamesDeduped": "Kai Lawonn;Monique Meuschke;Pepe Eulzer;Matthias Mitterreiter;Joachim Giesen;Tobias Günther",
                "AuthorNames": "Kai Lawonn;Monique Meuschke;Pepe Eulzer;Matthias Mitterreiter;Joachim Giesen;Tobias Günther",
                "AuthorAffiliation": "Friedrich Schiller University of Jena, Germany;Otto von Guericke University of Magdeburg, Germany;Friedrich Schiller University of Jena, Germany;Friedrich Schiller University of Jena, Germany;Friedrich Schiller University of Jena, Germany;Friedrich-Alexander-Universitä t Erlangen-Nürnberg, Germany",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.229;10.1109/tvcg.2011.201;10.1109/tvcg.2008.153;10.1109/infvis.2005.1532141;10.1109/vast.2010.5652484;10.1109/tvcg.2013.160;10.1109/vast.2010.5652398;10.1109/tvcg.2020.3030379;10.1109/visual.2000.885740;10.1109/infvis.2004.3;10.1109/vast.2009.5332628;10.1109/tvcg.2007.70589;10.1109/tvcg.2009.179",
                "AuthorKeywords": "Scientific visualization,Gaussian mixture models,ray casting,volume visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 493,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 244,
                "i": [
                    244
                ]
            }
        },
        {
            "name": "Sabine Bauer 0001",
            "value": 0,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1101,
            "x": -318.95463488037655,
            "y": -91.74933726368647,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Visualization of Human Spine Biomechanics for Spinal Surgery",
                "DOI": "10.1109/tvcg.2020.3030388",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030388",
                "FirstPage": 700,
                "LastPage": 710,
                "PaperType": "J",
                "Abstract": "We propose a visualization application, designed for the exploration of human spine simulation data. Our goal is to support research in biomechanical spine simulation and advance efforts to implement simulation-backed analysis in surgical applications. Biomechanical simulation is a state-of-the-art technique for analyzing load distributions of spinal structures. Through the inclusion of patient-specific data, such simulations may facilitate personalized treatment and customized surgical interventions. Difficulties in spine modelling and simulation can be partly attributed to poor result representation, which may also be a hindrance when introducing such techniques into a clinical environment. Comparisons of measurements across multiple similar anatomical structures and the integration of temporal data make commonly available diagrams and charts insufficient for an intuitive and systematic display of results. Therefore, we facilitate methods such as multiple coordinated views, abstraction and focus and context to display simulation outcomes in a dedicated tool. $\\mathrm{By}$ linking the result data with patient-specific anatomy, we make relevant parameters tangible for clinicians. Furthermore, we introduce new concepts to show the directions of impact force vectors, which were not accessible before. We integrated our toolset into a spine segmentation and simulation pipeline and evaluated our methods with both surgeons and biomechanical researchers. When comparing our methods against standard representations that are currently in use, we found increases in accuracy and speed in data exploration tasks. $\\mathrm{in}$ a qualitative review, domain experts deemed the tool highly useful when dealing with simulation result data, which typically combines time-dependent patient movement and the resulting force distributions on spinal structures.",
                "AuthorNamesDeduped": "Pepe Eulzer;Sabine Bauer 0001;Francis Kilian;Kai Lawonn",
                "AuthorNames": "Pepe Eulzer;Sabine Bauer;Francis Kilian;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Koblenz-Landau, Germany;Department of Spine Surgery, Cath. Clinic Koblenz-Montabaur, Germany;University of Jena, Germany",
                "InternalReferences": "0.1109/tvcg.2014.2346448;10.1109/tvcg.2011.189;10.1109/tvcg.2009.184;10.1109/tvcg.2019.2934337;10.1109/tvcg.2015.2467198;10.1109/tvcg.2014.2346591;10.1109/tvcg.2015.2467961;10.1109/tvcg.2016.2598795;10.1109/tvcg.2008.155;10.1109/tvcg.2018.2864510;10.1109/tvcg.2015.2467435",
                "AuthorKeywords": "Medical visualization,bioinformatics,coordinated views,focus and context,biomechanical simulation",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 671,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 452,
                "i": [
                    452
                ]
            }
        },
        {
            "name": "Francis Kilian",
            "value": 0,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1102,
            "x": 297.29786765843033,
            "y": -147.86472833556502,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Visualization of Human Spine Biomechanics for Spinal Surgery",
                "DOI": "10.1109/tvcg.2020.3030388",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030388",
                "FirstPage": 700,
                "LastPage": 710,
                "PaperType": "J",
                "Abstract": "We propose a visualization application, designed for the exploration of human spine simulation data. Our goal is to support research in biomechanical spine simulation and advance efforts to implement simulation-backed analysis in surgical applications. Biomechanical simulation is a state-of-the-art technique for analyzing load distributions of spinal structures. Through the inclusion of patient-specific data, such simulations may facilitate personalized treatment and customized surgical interventions. Difficulties in spine modelling and simulation can be partly attributed to poor result representation, which may also be a hindrance when introducing such techniques into a clinical environment. Comparisons of measurements across multiple similar anatomical structures and the integration of temporal data make commonly available diagrams and charts insufficient for an intuitive and systematic display of results. Therefore, we facilitate methods such as multiple coordinated views, abstraction and focus and context to display simulation outcomes in a dedicated tool. $\\mathrm{By}$ linking the result data with patient-specific anatomy, we make relevant parameters tangible for clinicians. Furthermore, we introduce new concepts to show the directions of impact force vectors, which were not accessible before. We integrated our toolset into a spine segmentation and simulation pipeline and evaluated our methods with both surgeons and biomechanical researchers. When comparing our methods against standard representations that are currently in use, we found increases in accuracy and speed in data exploration tasks. $\\mathrm{in}$ a qualitative review, domain experts deemed the tool highly useful when dealing with simulation result data, which typically combines time-dependent patient movement and the resulting force distributions on spinal structures.",
                "AuthorNamesDeduped": "Pepe Eulzer;Sabine Bauer 0001;Francis Kilian;Kai Lawonn",
                "AuthorNames": "Pepe Eulzer;Sabine Bauer;Francis Kilian;Kai Lawonn",
                "AuthorAffiliation": "University of Jena, Germany;University of Koblenz-Landau, Germany;Department of Spine Surgery, Cath. Clinic Koblenz-Montabaur, Germany;University of Jena, Germany",
                "InternalReferences": "0.1109/tvcg.2014.2346448;10.1109/tvcg.2011.189;10.1109/tvcg.2009.184;10.1109/tvcg.2019.2934337;10.1109/tvcg.2015.2467198;10.1109/tvcg.2014.2346591;10.1109/tvcg.2015.2467961;10.1109/tvcg.2016.2598795;10.1109/tvcg.2008.155;10.1109/tvcg.2018.2864510;10.1109/tvcg.2015.2467435",
                "AuthorKeywords": "Medical visualization,bioinformatics,coordinated views,focus and context,biomechanical simulation",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 671,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 452,
                "i": [
                    452
                ]
            }
        },
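Each author entry above embeds a full copy of one representative paper, so the same publication recurs across co-authors (for example, the 2020 spine-surgery paper is carried by both the Sabine Bauer 0001 and Francis Kilian entries, with identical DOI and seqId). A minimal Node.js sketch for collapsing those duplicates into one record per DOI; it assumes the author entries shown here sit under a top-level `nodes` array in `citationsNetwork.json` (the key name and file path are assumptions, not confirmed by the listing):

```js
// dedupePapers.js — collect one record per distinct DOI from the author nodes.
const fs = require("fs");

const data = JSON.parse(fs.readFileSync("citationsNetwork.json", "utf8"));
const authorNodes = data.nodes ?? []; // assumption: author entries live under "nodes"

const papersByDoi = new Map();
for (const author of authorNodes) {
  const paper = author.node; // embedded representative paper record
  if (!paper || !paper.DOI) continue;
  if (!papersByDoi.has(paper.DOI)) {
    papersByDoi.set(paper.DOI, { ...paper, authors: [] });
  }
  papersByDoi.get(paper.DOI).authors.push(author.name);
}

console.log(`distinct papers: ${papersByDoi.size}`);
// e.g. 10.1109/tvcg.2020.3030388 should now list both Sabine Bauer 0001 and Francis Kilian.
```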
        {
            "name": "Sylvia Glaßer",
            "value": 43,
            "numPapers": 15,
            "cluster": "6",
            "visible": 1,
            "index": 1103,
            "x": -119.39111510057104,
            "y": 309.99316385211176,
            "vy": 0,
            "vx": 0,
            "r": 1.04951065054692,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Occlusion-free Blood Flow Animation with Wall Thickness Visualization",
                "DOI": "10.1109/tvcg.2015.2467961",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467961",
                "FirstPage": 728,
                "LastPage": 737,
                "PaperType": "J",
                "Abstract": "We present the first visualization tool that combines pathlines from blood flow and wall thickness information. Our method uses illustrative techniques to provide occlusion-free visualization of the flow. We thus offer medical researchers an effective visual analysis tool for aneurysm treatment risk assessment. Such aneurysms bear a high risk of rupture and significant treatment-related risks. Therefore, to get a fully informed decision it is essential to both investigate the vessel morphology and the hemodynamic data. Ongoing research emphasizes the importance of analyzing the wall thickness in risk assessment. Our combination of blood flow visualization and wall thickness representation is a significant improvement for the exploration and analysis of aneurysms. As all presented information is spatially intertwined, occlusion problems occur. We solve these occlusion problems by dynamic cutaway surfaces. We combine this approach with a glyph-based blood flow representation and a visual mapping of wall thickness onto the vessel surface. We developed a GPU-based implementation of our visualizations which facilitates wall thickness analysis through real-time rendering and flexible interactive data exploration mechanisms. We designed our techniques in collaboration with domain experts, and we provide details about the evaluation of the technique and tool.",
                "AuthorNamesDeduped": "Kai Lawonn;Sylvia Glaßer;Anna Vilanova;Bernhard Preim;Tobias Isenberg 0001",
                "AuthorNames": "Kai Lawonn;Sylvia Glaßer;Anna Vilanova;Bernhard Preim;Tobias Isenberg",
                "AuthorAffiliation": "University of Magdeburg, Germany and Research Campus STIMULATE and Inria, France and TU Delft, Netherlands;Research Campus STIMULATE and University of Magdeburg, Germany;TU Delft, Netherlands;Research Campus STIMULATE and University of Magdeburg, Germany;Inria, France",
                "InternalReferences": "0.1109/tvcg.2009.138;10.1109/tvcg.2011.243;10.1109/tvcg.2014.2346406;10.1109/tvcg.2010.153;10.1109/tvcg.2011.215;10.1109/visual.2004.48",
                "AuthorKeywords": "Medical visualization, aneurysms, blood flow, wall thickness, illustrative visualization",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 784,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1054,
                "i": [
                    1054
                ]
            }
        },
        {
            "name": "Helmut Doleisch",
            "value": 230,
            "numPapers": 50,
            "cluster": "6",
            "visible": 1,
            "index": 1104,
            "x": -121.41705803819468,
            "y": -309.36693103392554,
            "vy": 0,
            "vx": 0,
            "r": 1.2648244099021302,
            "node": {
                "Conference": "InfoVis",
                "Year": 2002,
                "Title": "Angular brushing of extended parallel coordinates",
                "DOI": "10.1109/infvis.2002.1173157",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2002.1173157",
                "FirstPage": 127,
                "LastPage": 130,
                "PaperType": "C",
                "Abstract": "In this paper we present angular brushing for parallel coordinates (PC) as a new approach to highlighting rational data-properties, i.e., features which - in a non-separable way - depend on two data dimensions. We also demonstrate smooth brushing as an intuitive tool for specifying nonbinary degree-of-interest functions (for focus+context visualization). We also briefly describe our implementation as well as its application to the visualization of CFD data.",
                "AuthorNamesDeduped": "Helwig Hauser;Florian Ledermann;Helmut Doleisch",
                "AuthorNames": "H. Hauser;F. Ledermann;H. Doleisch",
                "AuthorAffiliation": "Research Center, VRVis Research Center, Austria;Research Center, VRVis Research Center, Austria;Research Center, VRVis Research Center, Austria",
                "InternalReferences": "0.1109/infvis.1996.559216;10.1109/visual.2000.885739;10.1109/visual.1994.346302;10.1109/visual.1995.485139;10.1109/visual.1990.146402",
                "AuthorKeywords": "information visualization, parallel coordinates, brushing, linear correlations, focus+context visualization",
                "AminerCitationCount": 355,
                "CitationCountCrossRef": 96,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 906,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2738,
                "i": [
                    2738
                ]
            }
        },
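`AuthorNamesDeduped`, `AuthorNames`, and `AuthorAffiliation` are parallel, semicolon-delimited strings, so per-author affiliations have to be recovered by position. A small helper sketch, using only the field names visible in the records above and assuming the lists are positionally aligned:

```js
// Split the parallel semicolon-delimited fields of one embedded paper record
// into per-author objects. Assumes the lists are aligned by position.
function splitAuthors(paper) {
  const names = (paper.AuthorNamesDeduped || "").split(";");
  const affiliations = (paper.AuthorAffiliation || "").split(";");
  return names.map((name, i) => ({
    name: name.trim(),
    affiliation: (affiliations[i] || "").trim(), // empty if the lists differ in length
  }));
}

// With the "Angular brushing of extended parallel coordinates" record this yields, e.g.:
// { name: "Helwig Hauser", affiliation: "Research Center, VRVis Research Center, Austria" }
```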
        {
            "name": "Mario Jelovic",
            "value": 145,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 1105,
            "x": 298.6386191822954,
            "y": 146.16762682924013,
            "vy": 0,
            "vx": 0,
            "r": 1.1669545192861255,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Interactive Visual Analysis of Multiple Simulation Runs Using the Simulation Model View: Understanding and Tuning of an Electronic Unit Injector",
                "DOI": "10.1109/tvcg.2010.171",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.171",
                "FirstPage": 1449,
                "LastPage": 1457,
                "PaperType": "J",
                "Abstract": "Multiple simulation runs using the same simulation model with different values of control parameters generate a large data set that captures the behavior of the modeled phenomenon. However, there is a conceptual and visual gap between the simulation model behavior and the data set that makes data analysis more difficult. We propose a simulation model view that helps to bridge that gap by visually combining the simulation model description and the generated data. The simulation model view provides a visual outline of the simulation process and the corresponding simulation model. The view is integrated in a Coordinated Multiple Views; (CMV) system. As the simulation model view provides a limited display space, we use three levels of details. We explored the use of the simulation model view, in close collaboration with a domain expert, to understand and tune an electronic unit injector (EUI). We also developed analysis procedures based on the view. The EUI is mostly used in heavy duty Diesel engines. We were mainly interested in understanding the model and how to tune it for three different operation modes: low emission, low consumption, and high power. Very positive feedback from the domain expert shows that the use of the simulation model view and the corresponding ;analysis procedures within a CMV system represents an effective technique for interactive visual analysis of multiple simulation runs.",
                "AuthorNamesDeduped": "Kresimir Matkovic;Denis Gracanin;Mario Jelovic;Andreas Ammer;Alan Lez;Helwig Hauser",
                "AuthorNames": "Kresimir Matkovic;Denis Gracanin;Mario Jelovic;Andreas Ammer;Alan Lez;Helwig Hauser",
                "AuthorAffiliation": "VRVis Research Center Vienna, Austria;Virginia Technology, USA;AVL AST d.o.o., Zagreb, Croatia;VRVis Research Center Vienna, Austria;VRVis Research Center Vienna, Austria;University of Bergen, Norway",
                "InternalReferences": "0.1109/tvcg.2009.155;10.1109/infvis.2002.1173149;10.1109/infvis.1995.528685;10.1109/infvis.2002.1173157",
                "AuthorKeywords": "Visualization in physical sciences and engineering, time series data, coordinated multiple views",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 774,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1788,
                "i": [
                    1788
                ]
            }
        },
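The embedded records also carry several impact measures (`AminerCitationCount`, `CitationCountCrossRef`, `DownloadsXplore`). Combined with the deduplication sketch above, a quick ranking over the distinct papers might look like this (a sketch only; which count to prefer is a judgment call, not something the data prescribes):

```js
// Rank an array of distinct paper records by Aminer citation count,
// falling back to the CrossRef count when the Aminer figure is missing.
function rankPapers(papers) {
  const impact = (p) => p.AminerCitationCount ?? p.CitationCountCrossRef ?? 0;
  return [...papers].sort((a, b) => impact(b) - impact(a));
}

// Usage with the papersByDoi map from the earlier sketch:
// rankPapers([...papersByDoi.values()])
//   .slice(0, 10)
//   .forEach((p) => console.log(p.Year, p.AminerCitationCount, p.Title));
// The 2002 "Angular brushing" paper (AminerCitationCount: 355) ranks near the top
// of the records shown in this excerpt.
```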
        {
            "name": "Haojin Jiang",
            "value": 29,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 1106,
            "x": -319.0858564391977,
            "y": 93.99051133206862,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Preserving Minority Structures in Graph Sampling",
                "DOI": "10.1109/tvcg.2020.3030428",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030428",
                "FirstPage": 1698,
                "LastPage": 1708,
                "PaperType": "J",
                "Abstract": "Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but are very important in graph analysis. In this work, we initially conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate the performance of existing graph sampling algorithms regarding minority structure preservation. Results confirm our assumption and suggest key points for designing a new graph sampling approach named mino-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance the preservation between minority and majority structures and suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorNames": "Ying Zhao;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Software, Tsinghua University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Information, Zhejiang University of Finance and Economics China",
                "InternalReferences": "0.1109/tvcg.2018.2865139;10.1109/tvcg.2008.130;10.1109/tvcg.2011.233;10.1109/tvcg.2013.223;10.1109/tvcg.2016.2598831;10.1109/visual.2005.1532819;10.1109/tvcg.2019.2934208;10.1109/tvcg.2016.2598867;10.1109/tvcg.2017.2744098;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Graph sampling,graph visualization,node-link diagram",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 57,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1182,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 459,
                "i": [
                    459
                ]
            }
        },
        {
            "name": "Qi'an Chen",
            "value": 29,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 1107,
            "x": 171.87191388286575,
            "y": -284.9737623330274,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Preserving Minority Structures in Graph Sampling",
                "DOI": "10.1109/tvcg.2020.3030428",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030428",
                "FirstPage": 1698,
                "LastPage": 1708,
                "PaperType": "J",
                "Abstract": "Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but are very important in graph analysis. In this work, we initially conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate the performance of existing graph sampling algorithms regarding minority structure preservation. Results confirm our assumption and suggest key points for designing a new graph sampling approach named mino-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance the preservation between minority and majority structures and suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorNames": "Ying Zhao;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Software, Tsinghua University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Information, Zhejiang University of Finance and Economics China",
                "InternalReferences": "0.1109/tvcg.2018.2865139;10.1109/tvcg.2008.130;10.1109/tvcg.2011.233;10.1109/tvcg.2013.223;10.1109/tvcg.2016.2598831;10.1109/visual.2005.1532819;10.1109/tvcg.2019.2934208;10.1109/tvcg.2016.2598867;10.1109/tvcg.2017.2744098;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Graph sampling,graph visualization,node-link diagram",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 57,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1182,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 459,
                "i": [
                    459
                ]
            }
        },
        {
            "name": "Yaqi Qin",
            "value": 29,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 1108,
            "x": 65.79369386117915,
            "y": 326.3758413977687,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Preserving Minority Structures in Graph Sampling",
                "DOI": "10.1109/tvcg.2020.3030428",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030428",
                "FirstPage": 1698,
                "LastPage": 1708,
                "PaperType": "J",
                "Abstract": "Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but are very important in graph analysis. In this work, we initially conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate the performance of existing graph sampling algorithms regarding minority structure preservation. Results confirm our assumption and suggest key points for designing a new graph sampling approach named mino-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance the preservation between minority and majority structures and suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorNames": "Ying Zhao;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Software, Tsinghua University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Information, Zhejiang University of Finance and Economics China",
                "InternalReferences": "0.1109/tvcg.2018.2865139;10.1109/tvcg.2008.130;10.1109/tvcg.2011.233;10.1109/tvcg.2013.223;10.1109/tvcg.2016.2598831;10.1109/visual.2005.1532819;10.1109/tvcg.2019.2934208;10.1109/tvcg.2016.2598867;10.1109/tvcg.2017.2744098;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Graph sampling,graph visualization,node-link diagram",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 57,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1182,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 459,
                "i": [
                    459
                ]
            }
        },
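In the records above, the first entry of every `InternalReferences` string appears to have lost the leading "1" of its "10.1109/..." prefix (e.g., "0.1109/tvcg.2018.2865139"). If that pattern holds across the file, a defensive normalization pass could repair it before the DOIs are used as join keys; this is a sketch based on that observation, not a documented fix from the project:

```js
// Normalize a semicolon-delimited InternalReferences string into an array of DOIs,
// restoring the apparently truncated "10.1109/" prefix on entries starting with "0.1109/".
function parseInternalReferences(refString) {
  return (refString || "")
    .split(";")
    .map((doi) => doi.trim().toLowerCase())
    .filter(Boolean)
    .map((doi) => (doi.startsWith("0.1109/") ? "1" + doi : doi));
}

// parseInternalReferences("0.1109/tvcg.2018.2865139;10.1109/tvcg.2008.130")
//   -> ["10.1109/tvcg.2018.2865139", "10.1109/tvcg.2008.130"]
```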
        {
            "name": "Yitao Wu",
            "value": 29,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 1109,
            "x": -269.09923320764625,
            "y": -196.30487178635383,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Preserving Minority Structures in Graph Sampling",
                "DOI": "10.1109/tvcg.2020.3030428",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030428",
                "FirstPage": 1698,
                "LastPage": 1708,
                "PaperType": "J",
                "Abstract": "Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but are very important in graph analysis. In this work, we initially conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate the performance of existing graph sampling algorithms regarding minority structure preservation. Results confirm our assumption and suggest key points for designing a new graph sampling approach named mino-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance the preservation between minority and majority structures and suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorNames": "Ying Zhao;Haojin Jiang;Qi'an Chen;Yaqi Qin;Huixuan Xie;Yitao Wu;Shixia Liu;Zhiguang Zhou;Jiazhi Xia;Fangfang Zhou",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Software, Tsinghua University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Information, Zhejiang University of Finance and Economics China",
                "InternalReferences": "0.1109/tvcg.2018.2865139;10.1109/tvcg.2008.130;10.1109/tvcg.2011.233;10.1109/tvcg.2013.223;10.1109/tvcg.2016.2598831;10.1109/visual.2005.1532819;10.1109/tvcg.2019.2934208;10.1109/tvcg.2016.2598867;10.1109/tvcg.2017.2744098;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Graph sampling,graph visualization,node-link diagram",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 57,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1182,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 459,
                "i": [
                    459
                ]
            }
        },
        {
            "name": "Fenjin Ye",
            "value": 92,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 1110,
            "x": 331.1765803461255,
            "y": -37.04149875809976,
            "vy": 0,
            "vx": 0,
            "r": 1.105929763960852,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "LDSScanner: Exploratory Analysis of Low-Dimensional Structures in High-Dimensional Datasets",
                "DOI": "10.1109/tvcg.2017.2744098",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744098",
                "FirstPage": 236,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "Many approaches for analyzing a high-dimensional dataset assume that the dataset contains specific structures, e.g., clusters in linear subspaces or non-linear manifolds. This yields a trial-and-error process to verify the appropriate model and parameters. This paper contributes an exploratory interface that supports visual identification of low-dimensional structures in a high-dimensional dataset, and facilitates the optimized selection of data models and configurations. Our key idea is to abstract a set of global and local feature descriptors from the neighborhood graph-based representation of the latent low-dimensional structure, such as pairwise geodesic distance (GD) among points and pairwise local tangent space divergence (LTSD) among pointwise local tangent spaces (LTS). We propose a new LTSD-GD view, which is constructed by mapping LTSD and GD to the<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-1-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-2-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis using 1D multidimensional scaling, respectively. Unlike traditional dimensionality reduction methods that preserve various kinds of distances among points, the LTSD-GD view presents the distribution of pointwise LTS (<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-3-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis) and the variation of LTS in structures (the combination of<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-4-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-5-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis). We design and implement a suite of visual tools for navigating and reasoning about intrinsic structures of a high-dimensional dataset. Three case studies verify the effectiveness of our approach.",
                "AuthorNamesDeduped": "Jiazhi Xia;Fenjin Ye;Wei Chen 0001;Yusi Wang;Weifeng Chen 0002;Yuxin Ma;Anthony K. H. Tung",
                "AuthorNames": "Jiazhi Xia;Fenjin Ye;Wei Chen;Yusi Wang;Weifeng Chen;Yuxin Ma;Anthony K.H. Tung",
                "AuthorAffiliation": "Central South University;Central South University;Zhejiang University;Central South University;Zhejiang University of Finance & Economics;Zhejiang University;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2013.160;10.1109/vast.2010.5652392;10.1109/visual.1990.146402;10.1109/infvis.2003.1249013;10.1109/tvcg.2015.2467324;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598466;10.1109/infvis.2004.3;10.1109/tvcg.2015.2467717;10.1109/vast.2009.5332628;10.1109/vast.2012.6400488;10.1109/tvcg.2015.2467191;10.1109/vast.2016.7883514;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,low-dimensional structure,subspace,manifold,visual exploration",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 1524,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 857,
                "i": [
                    857
                ]
            }
        },
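The `Abstract` strings carry residue from the IEEE Xplore export: stray `$\mathrm{...}$` wrappers around ordinary words (as in the spine-surgery abstract earlier) and `<inline-formula>...</inline-formula>` markup with `inline-graphic` references (as in the LDSScanner abstract just above). A cleanup sketch that keeps only the readable text, with regexes written against the patterns visible in these records (other export artifacts may exist and are not handled here):

```js
// Strip Xplore export artifacts from an abstract string:
//  - $\mathrm{Word}$                         -> Word
//  - <inline-formula>...$x$...</inline-formula> -> the inner $...$ math, if present
//  - any remaining XML tags                  -> removed
function cleanAbstract(abstract) {
  return (abstract || "")
    .replace(/\$\\mathrm\{([^}]*)\}\$/g, "$1")
    .replace(/<inline-formula>.*?<tex-math[^>]*>(.*?)<\/tex-math>.*?<\/inline-formula>/gs, " $1 ")
    .replace(/<[^>]+>/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

// cleanAbstract("... tool. $\\mathrm{By}$ linking the result data ...")
//   -> "... tool. By linking the result data ..."
```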
        {
            "name": "Yusi Wang",
            "value": 92,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 1111,
            "x": -219.27679282863045,
            "y": 251.13280973777577,
            "vy": 0,
            "vx": 0,
            "r": 1.105929763960852,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "LDSScanner: Exploratory Analysis of Low-Dimensional Structures in High-Dimensional Datasets",
                "DOI": "10.1109/tvcg.2017.2744098",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744098",
                "FirstPage": 236,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "Many approaches for analyzing a high-dimensional dataset assume that the dataset contains specific structures, e.g., clusters in linear subspaces or non-linear manifolds. This yields a trial-and-error process to verify the appropriate model and parameters. This paper contributes an exploratory interface that supports visual identification of low-dimensional structures in a high-dimensional dataset, and facilitates the optimized selection of data models and configurations. Our key idea is to abstract a set of global and local feature descriptors from the neighborhood graph-based representation of the latent low-dimensional structure, such as pairwise geodesic distance (GD) among points and pairwise local tangent space divergence (LTSD) among pointwise local tangent spaces (LTS). We propose a new LTSD-GD view, which is constructed by mapping LTSD and GD to the<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-1-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-2-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis using 1D multidimensional scaling, respectively. Unlike traditional dimensionality reduction methods that preserve various kinds of distances among points, the LTSD-GD view presents the distribution of pointwise LTS (<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-3-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis) and the variation of LTS in structures (the combination of<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-4-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-5-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis). We design and implement a suite of visual tools for navigating and reasoning about intrinsic structures of a high-dimensional dataset. Three case studies verify the effectiveness of our approach.",
                "AuthorNamesDeduped": "Jiazhi Xia;Fenjin Ye;Wei Chen 0001;Yusi Wang;Weifeng Chen 0002;Yuxin Ma;Anthony K. H. Tung",
                "AuthorNames": "Jiazhi Xia;Fenjin Ye;Wei Chen;Yusi Wang;Weifeng Chen;Yuxin Ma;Anthony K.H. Tung",
                "AuthorAffiliation": "Central South University;Central South University;Zhejiang University;Central South University;Zhejiang University of Finance & Economics;Zhejiang University;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2013.160;10.1109/vast.2010.5652392;10.1109/visual.1990.146402;10.1109/infvis.2003.1249013;10.1109/tvcg.2015.2467324;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598466;10.1109/infvis.2004.3;10.1109/tvcg.2015.2467717;10.1109/vast.2009.5332628;10.1109/vast.2012.6400488;10.1109/tvcg.2015.2467191;10.1109/vast.2016.7883514;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,low-dimensional structure,subspace,manifold,visual exploration",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 1524,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 857,
                "i": [
                    857
                ]
            }
        },
        {
            "name": "Weifeng Chen 0002",
            "value": 239,
            "numPapers": 39,
            "cluster": "1",
            "visible": 1,
            "index": 1112,
            "x": -7.9534682279357325,
            "y": -333.4467608826741,
            "vy": 0,
            "vx": 0,
            "r": 1.2751871042026481,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "LDSScanner: Exploratory Analysis of Low-Dimensional Structures in High-Dimensional Datasets",
                "DOI": "10.1109/tvcg.2017.2744098",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744098",
                "FirstPage": 236,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "Many approaches for analyzing a high-dimensional dataset assume that the dataset contains specific structures, e.g., clusters in linear subspaces or non-linear manifolds. This yields a trial-and-error process to verify the appropriate model and parameters. This paper contributes an exploratory interface that supports visual identification of low-dimensional structures in a high-dimensional dataset, and facilitates the optimized selection of data models and configurations. Our key idea is to abstract a set of global and local feature descriptors from the neighborhood graph-based representation of the latent low-dimensional structure, such as pairwise geodesic distance (GD) among points and pairwise local tangent space divergence (LTSD) among pointwise local tangent spaces (LTS). We propose a new LTSD-GD view, which is constructed by mapping LTSD and GD to the<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-1-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-2-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis using 1D multidimensional scaling, respectively. Unlike traditional dimensionality reduction methods that preserve various kinds of distances among points, the LTSD-GD view presents the distribution of pointwise LTS (<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-3-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis) and the variation of LTS in structures (the combination of<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-4-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-5-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis). We design and implement a suite of visual tools for navigating and reasoning about intrinsic structures of a high-dimensional dataset. Three case studies verify the effectiveness of our approach.",
                "AuthorNamesDeduped": "Jiazhi Xia;Fenjin Ye;Wei Chen 0001;Yusi Wang;Weifeng Chen 0002;Yuxin Ma;Anthony K. H. Tung",
                "AuthorNames": "Jiazhi Xia;Fenjin Ye;Wei Chen;Yusi Wang;Weifeng Chen;Yuxin Ma;Anthony K.H. Tung",
                "AuthorAffiliation": "Central South University;Central South University;Zhejiang University;Central South University;Zhejiang University of Finance & Economics;Zhejiang University;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2013.160;10.1109/vast.2010.5652392;10.1109/visual.1990.146402;10.1109/infvis.2003.1249013;10.1109/tvcg.2015.2467324;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598466;10.1109/infvis.2004.3;10.1109/tvcg.2015.2467717;10.1109/vast.2009.5332628;10.1109/vast.2012.6400488;10.1109/tvcg.2015.2467191;10.1109/vast.2016.7883514;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,low-dimensional structure,subspace,manifold,visual exploration",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 1524,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 857,
                "i": [
                    857
                ]
            }
        },
        {
            "name": "Anthony K. H. Tung",
            "value": 92,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 1113,
            "x": 231.208534519439,
            "y": 240.60883933341557,
            "vy": 0,
            "vx": 0,
            "r": 1.105929763960852,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "LDSScanner: Exploratory Analysis of Low-Dimensional Structures in High-Dimensional Datasets",
                "DOI": "10.1109/tvcg.2017.2744098",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744098",
                "FirstPage": 236,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "Many approaches for analyzing a high-dimensional dataset assume that the dataset contains specific structures, e.g., clusters in linear subspaces or non-linear manifolds. This yields a trial-and-error process to verify the appropriate model and parameters. This paper contributes an exploratory interface that supports visual identification of low-dimensional structures in a high-dimensional dataset, and facilitates the optimized selection of data models and configurations. Our key idea is to abstract a set of global and local feature descriptors from the neighborhood graph-based representation of the latent low-dimensional structure, such as pairwise geodesic distance (GD) among points and pairwise local tangent space divergence (LTSD) among pointwise local tangent spaces (LTS). We propose a new LTSD-GD view, which is constructed by mapping LTSD and GD to the<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-1-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-2-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis using 1D multidimensional scaling, respectively. Unlike traditional dimensionality reduction methods that preserve various kinds of distances among points, the LTSD-GD view presents the distribution of pointwise LTS (<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-3-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis) and the variation of LTS in structures (the combination of<inline-formula><tex-math notation=\"LaTeX\">$x$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-4-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis and<inline-formula><tex-math notation=\"LaTeX\">$y$</tex-math><alternatives><inline-graphic xlink:href=\"24tvcg01-xia-2744098-ieq-5-source.tif\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"/></alternatives></inline-formula>axis). We design and implement a suite of visual tools for navigating and reasoning about intrinsic structures of a high-dimensional dataset. Three case studies verify the effectiveness of our approach.",
                "AuthorNamesDeduped": "Jiazhi Xia;Fenjin Ye;Wei Chen 0001;Yusi Wang;Weifeng Chen 0002;Yuxin Ma;Anthony K. H. Tung",
                "AuthorNames": "Jiazhi Xia;Fenjin Ye;Wei Chen;Yusi Wang;Weifeng Chen;Yuxin Ma;Anthony K.H. Tung",
                "AuthorAffiliation": "Central South University;Central South University;Zhejiang University;Central South University;Zhejiang University of Finance & Economics;Zhejiang University;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/vast.2010.5652450;10.1109/visual.1997.663916;10.1109/tvcg.2013.160;10.1109/vast.2010.5652392;10.1109/visual.1990.146402;10.1109/infvis.2003.1249013;10.1109/tvcg.2015.2467324;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598466;10.1109/infvis.2004.3;10.1109/tvcg.2015.2467717;10.1109/vast.2009.5332628;10.1109/vast.2012.6400488;10.1109/tvcg.2015.2467191;10.1109/vast.2016.7883514;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,low-dimensional structure,subspace,manifold,visual exploration",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 1524,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 857,
                "i": [
                    857
                ]
            }
        },
        {
            "name": "Erdem Kaya",
            "value": 30,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1114,
            "x": -333.1644150361315,
            "y": -21.247883509476118,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Designing Progressive and Interactive Analytics Processes for High-Dimensional Data Analysis",
                "DOI": "10.1109/tvcg.2016.2598470",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598470",
                "FirstPage": 131,
                "LastPage": 140,
                "PaperType": "J",
                "Abstract": "In interactive data analysis processes, the dialogue between the human and the computer is the enabling mechanism that can lead to actionable observations about the phenomena being investigated. It is of paramount importance that this dialogue is not interrupted by slow computational mechanisms that do not consider any known temporal human-computer interaction characteristics that prioritize the perceptual and cognitive capabilities of the users. In cases where the analysis involves an integrated computational method, for instance to reduce the dimensionality of the data or to perform clustering, such non-optimal processes are often likely. To remedy this, progressive computations, where results are iteratively improved, are getting increasing interest in visual analytics. In this paper, we present techniques and design considerations to incorporate progressive methods within interactive analysis processes that involve high-dimensional data. We define methodologies to facilitate processes that adhere to the perceptual characteristics of users and describe how online algorithms can be incorporated within these. A set of design recommendations and according methods to support analysts in accomplishing high-dimensional data analysis tasks are then presented. Our arguments and decisions here are informed by observations gathered over a series of analysis sessions with analysts from finance. We document observations and recommendations from this study and present evidence on how our approach contribute to the efficiency and productivity of interactive visual analysis sessions involving high-dimensional data.",
                "AuthorNamesDeduped": "Cagatay Turkay;Erdem Kaya;Selim Balcisoy;Helwig Hauser",
                "AuthorNames": "Cagatay Turkay;Erdem Kaya;Selim Balcisoy;Helwig Hauser",
                "AuthorAffiliation": "City University, London, UK;Sabanci University, Turkey;Sabanci University, Turkey;University of Bergen, Norway",
                "InternalReferences": "0.1109/tvcg.2007.70539;10.1109/vast.2008.4677361;10.1109/tvcg.2008.153;10.1109/tvcg.2014.2346481;10.1109/tvcg.2014.2346574;10.1109/tvcg.2007.70515;10.1109/tvcg.2012.213;10.1109/tvcg.2013.125;10.1109/tvcg.2012.256;10.1109/vast.2008.4677357;10.1109/tvcg.2015.2467613;10.1109/tvcg.2014.2346265;10.1109/tvcg.2011.178;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559223;10.1109/tvcg.2011.229;10.1109/tvcg.2008.125",
                "AuthorKeywords": "Progressive analytics;high dimensional data;iterative refinement;visual analytics",
                "AminerCitationCount": 99,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2378,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 966,
                "i": [
                    966
                ]
            }
        },
        {
            "name": "Selim Balcisoy",
            "value": 30,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1115,
            "x": 260.1344360850741,
            "y": -209.475714971212,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Designing Progressive and Interactive Analytics Processes for High-Dimensional Data Analysis",
                "DOI": "10.1109/tvcg.2016.2598470",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598470",
                "FirstPage": 131,
                "LastPage": 140,
                "PaperType": "J",
                "Abstract": "In interactive data analysis processes, the dialogue between the human and the computer is the enabling mechanism that can lead to actionable observations about the phenomena being investigated. It is of paramount importance that this dialogue is not interrupted by slow computational mechanisms that do not consider any known temporal human-computer interaction characteristics that prioritize the perceptual and cognitive capabilities of the users. In cases where the analysis involves an integrated computational method, for instance to reduce the dimensionality of the data or to perform clustering, such non-optimal processes are often likely. To remedy this, progressive computations, where results are iteratively improved, are getting increasing interest in visual analytics. In this paper, we present techniques and design considerations to incorporate progressive methods within interactive analysis processes that involve high-dimensional data. We define methodologies to facilitate processes that adhere to the perceptual characteristics of users and describe how online algorithms can be incorporated within these. A set of design recommendations and according methods to support analysts in accomplishing high-dimensional data analysis tasks are then presented. Our arguments and decisions here are informed by observations gathered over a series of analysis sessions with analysts from finance. We document observations and recommendations from this study and present evidence on how our approach contribute to the efficiency and productivity of interactive visual analysis sessions involving high-dimensional data.",
                "AuthorNamesDeduped": "Cagatay Turkay;Erdem Kaya;Selim Balcisoy;Helwig Hauser",
                "AuthorNames": "Cagatay Turkay;Erdem Kaya;Selim Balcisoy;Helwig Hauser",
                "AuthorAffiliation": "City University, London, UK;Sabanci University, Turkey;Sabanci University, Turkey;University of Bergen, Norway",
                "InternalReferences": "0.1109/tvcg.2007.70539;10.1109/vast.2008.4677361;10.1109/tvcg.2008.153;10.1109/tvcg.2014.2346481;10.1109/tvcg.2014.2346574;10.1109/tvcg.2007.70515;10.1109/tvcg.2012.213;10.1109/tvcg.2013.125;10.1109/tvcg.2012.256;10.1109/vast.2008.4677357;10.1109/tvcg.2015.2467613;10.1109/tvcg.2014.2346265;10.1109/tvcg.2011.178;10.1109/infvis.2005.1532136;10.1109/infvis.1996.559223;10.1109/tvcg.2011.229;10.1109/tvcg.2008.125",
                "AuthorKeywords": "Progressive analytics;high dimensional data;iterative refinement;visual analytics",
                "AminerCitationCount": 99,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2378,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 966,
                "i": [
                    966
                ]
            }
        },
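Each author node carries a top-level `cluster` label (a string such as "6", "3", or "1", distinct from the numeric `cluster` inside the embedded paper), plus `numPapers` and a `value` score. A small aggregation sketch over that top-level label, reusing the `authorNodes` array loaded in the first sketch:

```js
// Group author nodes by their top-level cluster label and total up
// the per-author paper counts and "value" scores.
function summarizeClusters(authorNodes) {
  const clusters = new Map();
  for (const a of authorNodes) {
    const key = String(a.cluster);
    const entry = clusters.get(key) ?? { authors: 0, totalPapers: 0, totalValue: 0 };
    entry.authors += 1;
    entry.totalPapers += a.numPapers ?? 0;
    entry.totalValue += a.value ?? 0;
    clusters.set(key, entry);
  }
  return clusters;
}

// Usage: for (const [label, stats] of summarizeClusters(authorNodes)) console.log(label, stats);
```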
        {
            "name": "Rongwen Zhao",
            "value": 102,
            "numPapers": 4,
            "cluster": "1",
            "visible": 1,
            "index": 1116,
            "x": -50.33877310281645,
            "y": 330.32712259592483,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "EventThread: Visual Summarization and Stage Analysis of Event Sequence Data",
                "DOI": "10.1109/tvcg.2017.2745320",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745320",
                "FirstPage": 56,
                "LastPage": 65,
                "PaperType": "J",
                "Abstract": "Event sequence data such as electronic health records, a person's academic records, or car service records, are ordered series of events which have occurred over a period of time. Analyzing collections of event sequences can reveal common or semantically important sequential patterns. For example, event sequence analysis might reveal frequently used care plans for treating a disease, typical publishing patterns of professors, and the patterns of service that result in a well-maintained car. It is challenging, however, to visually explore large numbers of event sequences, or sequences with large numbers of event types. Existing methods focus on extracting explicitly matching patterns of events using statistical analysis to create stages of event progression over time. However, these methods fail to capture latent clusters of similar but not identical evolutions of event sequences. In this paper, we introduce a novel visualization system named EventThread which clusters event sequences into threads based on tensor analysis and visualizes the latent stage categories and evolution patterns by interactively grouping the threads by similarity into time-specific clusters. We demonstrate the effectiveness of EventThread through usage scenarios in three different application domains and via interviews with an expert user.",
                "AuthorNamesDeduped": "Shunan Guo;Ke Xu;Rongwen Zhao;David Gotz;Hongyuan Zha;Nan Cao 0001",
                "AuthorNames": "Shunan Guo;Ke Xu;Rongwen Zhao;David Gotz;Hongyuan Zha;Nan Cao",
                "AuthorAffiliation": "East China Normal University;Hong Kong University of Science and Technology;iDV Lab, Tongji University;University of North Carolina, Chapel Hill;East China Normal University;iDV Lab, Tongji University",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2014.2346682;10.1109/infvis.2003.1249017;10.1109/tvcg.2011.179;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Visual Knowledge Representation,Visual Knowledge Discovery,Data Clustering,Time Series Data,Illustrative Visualization",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 72,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 2161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 845,
                "i": [
                    845
                ]
            }
        },
        {
            "name": "Melanie Görner",
            "value": 46,
            "numPapers": 3,
            "cluster": "6",
            "visible": 1,
            "index": 1117,
            "x": -186.09780447102898,
            "y": -277.70057106722464,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "Visual analysis of graphs with multiple connected components",
                "DOI": "10.1109/vast.2009.5333893",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5333893",
                "FirstPage": 155,
                "LastPage": 162,
                "PaperType": "C",
                "Abstract": "In this paper, we present a system for the interactive visualization and exploration of graphs with many weakly connected components. The visualization of large graphs has recently received much research attention. However, specific systems for visual analysis of graph data sets consisting of many components are rare. In our approach, we rely on graph clustering using an extensive set of topology descriptors. Specifically, we use the self-organizing-map algorithm in conjunction with a user-adaptable combination of graph features for clustering of graphs. It offers insight into the overall structure of the data set. The clustering output is presented in a grid containing clusters of the connected components of the input graph. Interactive feature selection and task-tailored data views allow the exploration of the whole graph space. The system provides also tools for assessment and display of cluster quality. We demonstrate the usefulness of our system by application to a shareholder network analysis problem based on a large real-world data set. While so far our approach is applied to weighted directed graphs only, it can be used for various graph types.",
                "AuthorNamesDeduped": "Tatiana von Landesberger;Melanie Görner;Tobias Schreck",
                "AuthorNames": "Tatiana von Landesberger;Melanie Gorner;Tobias Schreck",
                "AuthorAffiliation": "Interactive Graphics Systems Group, Technische Universität Darmstadt and Fraunhofer IGD, Germany;Interactive Graphics Systems Group, Technische Universität Darmstadt, Germany;Interactive Graphics Systems Group, Technische Universität Darmstadt, Germany",
                "InternalReferences": "0.1109/tvcg.2006.193;10.1109/tvcg.2008.135;10.1109/infvis.2003.1249011;10.1109/tvcg.2006.147",
                "AuthorKeywords": null,
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 698,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1863,
                "i": [
                    1863
                ]
            }
        },
        {
            "name": "Fan-Yin Tzeng",
            "value": 131,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 1118,
            "x": 324.9520646790105,
            "y": 79.0958637404521,
            "vy": 0,
            "vx": 0,
            "r": 1.1508347725964307,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "Opening the black box - data driven visualization of neural networks",
                "DOI": "10.1109/visual.2005.1532820",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532820",
                "FirstPage": 383,
                "LastPage": 390,
                "PaperType": "C",
                "Abstract": "Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.",
                "AuthorNamesDeduped": "Fan-Yin Tzeng;Kwan-Liu Ma",
                "AuthorNames": "F.-Y. Tzeng;K.-L. Ma",
                "AuthorAffiliation": "Department of Computer Science, University of California, Davis, USA and University of California Davis, Davis, CA, US;Dept. of Comput. Sci., California Univ., Davis, CA, USA",
                "InternalReferences": "0.1109/infvis.2002.1173157;10.1109/visual.1999.809866",
                "AuthorKeywords": "Artificial Neural Network, Information Visualization, Visualization Application, Classification, Machine Learning",
                "AminerCitationCount": 191,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 1725,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2381,
                "i": [
                    2381
                ]
            }
        },
        {
            "name": "Arlen Fan",
            "value": 9,
            "numPapers": 35,
            "cluster": "1",
            "visible": 1,
            "index": 1119,
            "x": -293.16899435730306,
            "y": 161.25117285628536,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes",
                "DOI": "10.1109/tvcg.2020.3028888",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028888",
                "FirstPage": 1385,
                "LastPage": 1395,
                "PaperType": "J",
                "Abstract": "Many statistical learning models hold an assumption that the training data and the future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers in reusing existing labels from similar application domains. Transfer Learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning applications to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has explored the issue of explaining and diagnosing the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks. Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspections of model behaviors from the statistical, instance, feature, and model structure levels. We demonstrate our framework through two case studies on image classification by fine-tuning AlexNets to illustrate how analysts can utilize our framework.",
                "AuthorNamesDeduped": "Yuxin Ma;Arlen Fan;Jingrui He;Arun Reddy Nelakurthi;Ross Maciejewski",
                "AuthorNames": "Yuxin Ma;Arlen Fan;Jingrui He;Arun Reddy Nelakurthi;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University;Arizona State University;University of Illinois at Urbana-Champaign;Samsung Research America;Arizona State University",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467618;10.1109/tvcg.2017.2744683;10.1109/tvcg.2013.124;10.1109/vast47406.2019.8986948;10.1109/tvcg.2011.188;10.1109/tvcg.2019.2934261;10.1109/tvcg.2014.2346594;10.1109/tvcg.2017.2744199;10.1109/tvcg.2019.2934659;10.1109/tvcg.2017.2744718;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2019.2934631;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2014.2346578;10.1109/tvcg.2017.2744358;10.1109/tvcg.2012.207;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2019.2934629;10.1109/tvcg.2018.2865044;10.1109/visual.2005.1532820;10.1109/tvcg.2018.2864504;10.1109/tvcg.2019.2934619;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2016.2598541;10.1109/tvcg.2018.2864475;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864500",
                "AuthorKeywords": "Transfer learning,deep learning,visual analytics",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 958,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 481,
                "i": [
                    481
                ]
            }
        },
        {
            "name": "Jingrui He",
            "value": 9,
            "numPapers": 35,
            "cluster": "1",
            "visible": 1,
            "index": 1120,
            "x": 107.29798045062626,
            "y": -317.07592685540953,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes",
                "DOI": "10.1109/tvcg.2020.3028888",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028888",
                "FirstPage": 1385,
                "LastPage": 1395,
                "PaperType": "J",
                "Abstract": "Many statistical learning models hold an assumption that the training data and the future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers in reusing existing labels from similar application domains. Transfer Learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning applications to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has explored the issue of explaining and diagnosing the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks. Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspections of model behaviors from the statistical, instance, feature, and model structure levels. We demonstrate our framework through two case studies on image classification by fine-tuning AlexNets to illustrate how analysts can utilize our framework.",
                "AuthorNamesDeduped": "Yuxin Ma;Arlen Fan;Jingrui He;Arun Reddy Nelakurthi;Ross Maciejewski",
                "AuthorNames": "Yuxin Ma;Arlen Fan;Jingrui He;Arun Reddy Nelakurthi;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University;Arizona State University;University of Illinois at Urbana-Champaign;Samsung Research America;Arizona State University",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467618;10.1109/tvcg.2017.2744683;10.1109/tvcg.2013.124;10.1109/vast47406.2019.8986948;10.1109/tvcg.2011.188;10.1109/tvcg.2019.2934261;10.1109/tvcg.2014.2346594;10.1109/tvcg.2017.2744199;10.1109/tvcg.2019.2934659;10.1109/tvcg.2017.2744718;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2019.2934631;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2014.2346578;10.1109/tvcg.2017.2744358;10.1109/tvcg.2012.207;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2019.2934629;10.1109/tvcg.2018.2865044;10.1109/visual.2005.1532820;10.1109/tvcg.2018.2864504;10.1109/tvcg.2019.2934619;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2016.2598541;10.1109/tvcg.2018.2864475;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864500",
                "AuthorKeywords": "Transfer learning,deep learning,visual analytics",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 958,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 481,
                "i": [
                    481
                ]
            }
        },
        {
            "name": "Arun Reddy Nelakurthi",
            "value": 9,
            "numPapers": 35,
            "cluster": "1",
            "visible": 1,
            "index": 1121,
            "x": 135.12377553708433,
            "y": 306.41730578510675,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes",
                "DOI": "10.1109/tvcg.2020.3028888",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028888",
                "FirstPage": 1385,
                "LastPage": 1395,
                "PaperType": "J",
                "Abstract": "Many statistical learning models hold an assumption that the training data and the future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers in reusing existing labels from similar application domains. Transfer Learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning applications to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has explored the issue of explaining and diagnosing the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks. Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspections of model behaviors from the statistical, instance, feature, and model structure levels. We demonstrate our framework through two case studies on image classification by fine-tuning AlexNets to illustrate how analysts can utilize our framework.",
                "AuthorNamesDeduped": "Yuxin Ma;Arlen Fan;Jingrui He;Arun Reddy Nelakurthi;Ross Maciejewski",
                "AuthorNames": "Yuxin Ma;Arlen Fan;Jingrui He;Arun Reddy Nelakurthi;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University;Arizona State University;University of Illinois at Urbana-Champaign;Samsung Research America;Arizona State University",
                "InternalReferences": "0.1109/tvcg.2019.2934262;10.1109/tvcg.2015.2467618;10.1109/tvcg.2017.2744683;10.1109/tvcg.2013.124;10.1109/vast47406.2019.8986948;10.1109/tvcg.2011.188;10.1109/tvcg.2019.2934261;10.1109/tvcg.2014.2346594;10.1109/tvcg.2017.2744199;10.1109/tvcg.2019.2934659;10.1109/tvcg.2017.2744718;10.1109/tvcg.2014.2346482;10.1109/tvcg.2018.2865027;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/tvcg.2019.2934631;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2864812;10.1109/tvcg.2014.2346578;10.1109/tvcg.2017.2744358;10.1109/tvcg.2012.207;10.1109/tvcg.2016.2598838;10.1109/tvcg.2016.2598828;10.1109/tvcg.2019.2934629;10.1109/tvcg.2018.2865044;10.1109/visual.2005.1532820;10.1109/tvcg.2018.2864504;10.1109/tvcg.2019.2934619;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2016.2598541;10.1109/tvcg.2018.2864475;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864500",
                "AuthorKeywords": "Transfer learning,deep learning,visual analytics",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 958,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 481,
                "i": [
                    481
                ]
            }
        },
        {
            "name": "Zengsheng Zhong",
            "value": 7,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 1122,
            "x": -306.7546523098577,
            "y": -134.72781185137055,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "SilkViser: A Visual Explorer of Blockchain-based Cryptocurrency Transaction Data",
                "DOI": "10.1109/vast50239.2020.00014",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00014",
                "FirstPage": 95,
                "LastPage": 106,
                "PaperType": "C",
                "Abstract": "Many blockchain-based cryptocurrencies provide users with online blockchain explorers for viewing online transaction data. However, traditional blockchain explorers mostly present transaction information in textual and tabular forms. Such forms make understanding cryptocurrency transaction mechanisms difficult for novice users (NUsers). They are also insufficiently informative for experienced users (EUsers) to recognize advanced transaction information. This study introduces a new online cryptocurrency transaction data viewing tool called SilkViser. Guided by detailed scenario and requirement analyses, we create a series of appreciating visualization designs, such as paper ledger-inspired block and blockchain visualizations and ancient copper coin-inspired transaction visualizations, to help users understand cryptocurrency transaction mechanisms and recognize advanced transaction information. We also provide a set of lightweight interactions to facilitate easy and free data exploration. Moreover, a controlled user study is conducted to quantitatively evaluate the usability and effectiveness of SilkViser. Results indicate that SilkViser can satisfy the requirements of NUsers and EUsers. Our visualization designs can compensate for the inexperience of NUsers in data viewing and attract potential users to participate in cryptocurrency transactions.",
                "AuthorNamesDeduped": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao 0001;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorNames": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China and Chongqing Engineering Technology Research Center for Information Management in Development, Chongqing Technology and Business University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2865021;10.1109/tvcg.2019.2934208;10.1109/tvcg.2018.2865020;10.1109/tvcg.2019.2934655;10.1109/tvcg.2018.2864814;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "visualization,visual analytics,blockchain,cryptocurrency,interactive interface",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 970,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 483,
                "i": [
                    483
                ]
            }
        },
        {
            "name": "Shuirun Wei",
            "value": 7,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 1123,
            "x": 317.33992280172316,
            "y": -107.91373126806619,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "SilkViser: A Visual Explorer of Blockchain-based Cryptocurrency Transaction Data",
                "DOI": "10.1109/vast50239.2020.00014",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00014",
                "FirstPage": 95,
                "LastPage": 106,
                "PaperType": "C",
                "Abstract": "Many blockchain-based cryptocurrencies provide users with online blockchain explorers for viewing online transaction data. However, traditional blockchain explorers mostly present transaction information in textual and tabular forms. Such forms make understanding cryptocurrency transaction mechanisms difficult for novice users (NUsers). They are also insufficiently informative for experienced users (EUsers) to recognize advanced transaction information. This study introduces a new online cryptocurrency transaction data viewing tool called SilkViser. Guided by detailed scenario and requirement analyses, we create a series of appreciating visualization designs, such as paper ledger-inspired block and blockchain visualizations and ancient copper coin-inspired transaction visualizations, to help users understand cryptocurrency transaction mechanisms and recognize advanced transaction information. We also provide a set of lightweight interactions to facilitate easy and free data exploration. Moreover, a controlled user study is conducted to quantitatively evaluate the usability and effectiveness of SilkViser. Results indicate that SilkViser can satisfy the requirements of NUsers and EUsers. Our visualization designs can compensate for the inexperience of NUsers in data viewing and attract potential users to participate in cryptocurrency transactions.",
                "AuthorNamesDeduped": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao 0001;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorNames": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China and Chongqing Engineering Technology Research Center for Information Management in Development, Chongqing Technology and Business University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2865021;10.1109/tvcg.2019.2934208;10.1109/tvcg.2018.2865020;10.1109/tvcg.2019.2934655;10.1109/tvcg.2018.2864814;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "visualization,visual analytics,blockchain,cryptocurrency,interactive interface",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 970,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 483,
                "i": [
                    483
                ]
            }
        },
        {
            "name": "Yeting Xu",
            "value": 7,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 1124,
            "x": -161.17358525173876,
            "y": 294.0630466704385,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "SilkViser: A Visual Explorer of Blockchain-based Cryptocurrency Transaction Data",
                "DOI": "10.1109/vast50239.2020.00014",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00014",
                "FirstPage": 95,
                "LastPage": 106,
                "PaperType": "C",
                "Abstract": "Many blockchain-based cryptocurrencies provide users with online blockchain explorers for viewing online transaction data. However, traditional blockchain explorers mostly present transaction information in textual and tabular forms. Such forms make understanding cryptocurrency transaction mechanisms difficult for novice users (NUsers). They are also insufficiently informative for experienced users (EUsers) to recognize advanced transaction information. This study introduces a new online cryptocurrency transaction data viewing tool called SilkViser. Guided by detailed scenario and requirement analyses, we create a series of appreciating visualization designs, such as paper ledger-inspired block and blockchain visualizations and ancient copper coin-inspired transaction visualizations, to help users understand cryptocurrency transaction mechanisms and recognize advanced transaction information. We also provide a set of lightweight interactions to facilitate easy and free data exploration. Moreover, a controlled user study is conducted to quantitatively evaluate the usability and effectiveness of SilkViser. Results indicate that SilkViser can satisfy the requirements of NUsers and EUsers. Our visualization designs can compensate for the inexperience of NUsers in data viewing and attract potential users to participate in cryptocurrency transactions.",
                "AuthorNamesDeduped": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao 0001;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorNames": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China and Chongqing Engineering Technology Research Center for Information Management in Development, Chongqing Technology and Business University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2865021;10.1109/tvcg.2019.2934208;10.1109/tvcg.2018.2865020;10.1109/tvcg.2019.2934655;10.1109/tvcg.2018.2864814;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "visualization,visual analytics,blockchain,cryptocurrency,interactive interface",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 970,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 483,
                "i": [
                    483
                ]
            }
        },
        {
            "name": "Ronghua Shi",
            "value": 7,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 1125,
            "x": -79.82781934401238,
            "y": -325.84892091087204,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "SilkViser: A Visual Explorer of Blockchain-based Cryptocurrency Transaction Data",
                "DOI": "10.1109/vast50239.2020.00014",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00014",
                "FirstPage": 95,
                "LastPage": 106,
                "PaperType": "C",
                "Abstract": "Many blockchain-based cryptocurrencies provide users with online blockchain explorers for viewing online transaction data. However, traditional blockchain explorers mostly present transaction information in textual and tabular forms. Such forms make understanding cryptocurrency transaction mechanisms difficult for novice users (NUsers). They are also insufficiently informative for experienced users (EUsers) to recognize advanced transaction information. This study introduces a new online cryptocurrency transaction data viewing tool called SilkViser. Guided by detailed scenario and requirement analyses, we create a series of appreciating visualization designs, such as paper ledger-inspired block and blockchain visualizations and ancient copper coin-inspired transaction visualizations, to help users understand cryptocurrency transaction mechanisms and recognize advanced transaction information. We also provide a set of lightweight interactions to facilitate easy and free data exploration. Moreover, a controlled user study is conducted to quantitatively evaluate the usability and effectiveness of SilkViser. Results indicate that SilkViser can satisfy the requirements of NUsers and EUsers. Our visualization designs can compensate for the inexperience of NUsers in data viewing and attract potential users to participate in cryptocurrency transactions.",
                "AuthorNamesDeduped": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao 0001;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorNames": "Zengsheng Zhong;Shuirun Wei;Yeting Xu;Ying Zhao;Fangfang Zhou;Feng Luo;Ronghua Shi",
                "AuthorAffiliation": "School of Computer Sciences and Engineering, Central South University, China and Chongqing Engineering Technology Research Center for Information Management in Development, Chongqing Technology and Business University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China;School of Computer Sciences and Engineering, Central South University, China",
                "InternalReferences": "0.1109/tvcg.2017.2744098;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2865021;10.1109/tvcg.2019.2934208;10.1109/tvcg.2018.2865020;10.1109/tvcg.2019.2934655;10.1109/tvcg.2018.2864814;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "visualization,visual analytics,blockchain,cryptocurrency,interactive interface",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 970,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 483,
                "i": [
                    483
                ]
            }
        },
        {
            "name": "Greg Smith",
            "value": 141,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1126,
            "x": 279.0942372780123,
            "y": 186.43070218771513,
            "vy": 0,
            "vx": 0,
            "r": 1.162348877374784,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "SketchStory: Telling More Engaging Stories with Data through Freeform Sketching",
                "DOI": "10.1109/tvcg.2013.191",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.191",
                "FirstPage": 2416,
                "LastPage": 2425,
                "PaperType": "J",
                "Abstract": "Presenting and communicating insights to an audience-telling a story-is one of the main goals of data exploration. Even though visualization as a storytelling medium has recently begun to gain attention, storytelling is still underexplored in information visualization and little research has been done to help people tell their stories with data. To create a new, more engaging form of storytelling with data, we leverage and extend the narrative storytelling attributes of whiteboard animation with pen and touch interactions. We present SketchStory, a data-enabled digital whiteboard that facilitates the creation of personalized and expressive data charts quickly and easily. SketchStory recognizes a small set of sketch gestures for chart invocation, and automatically completes charts by synthesizing the visuals from the presenter-provided example icon and binding them to the underlying data. Furthermore, SketchStory allows the presenter to move and resize the completed data charts with touch, and filter the underlying data to facilitate interactive exploration. We conducted a controlled experiment for both audiences and presenters to compare SketchStory with a traditional presentation system, Microsoft PowerPoint. Results show that the audience is more engaged by presentations done with SketchStory than PowerPoint. Eighteen out of 24 audience participants preferred SketchStory to PowerPoint. Four out of five presenter participants also favored SketchStory despite the extra effort required for presentation.",
                "AuthorNamesDeduped": "Bongshin Lee;Rubaiat Habib Kazi;Greg Smith",
                "AuthorNames": "Bongshin Lee;Rubaiat Habib Kazi;Greg Smith",
                "AuthorAffiliation": "Microsoft Research, USA;National University of Singapore, Singapore;Microsoft Research, USA",
                "InternalReferences": "0.1109/tvcg.2007.70577;10.1109/tvcg.2012.262;10.1109/tvcg.2010.179;10.1109/tvcg.2012.275;10.1109/tvcg.2008.137;10.1109/vast.2007.4388992",
                "AuthorKeywords": "Storytelling, data presentation, sketch, pen and touch, interaction, visualization",
                "AminerCitationCount": 181,
                "CitationCountCrossRef": 113,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2830,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1305,
                "i": [
                    1305
                ]
            }
        },
        {
            "name": "Meng Xia",
            "value": 38,
            "numPapers": 39,
            "cluster": "5",
            "visible": 1,
            "index": 1127,
            "x": -331.87473998068236,
            "y": 51.079907622806864,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "Exploring Interactions with Printed Data Visualizations in Augmented Reality",
                "DOI": "10.1109/tvcg.2022.3209386",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209386",
                "FirstPage": 418,
                "LastPage": 428,
                "PaperType": "J",
                "Abstract": "This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.",
                "AuthorNamesDeduped": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorNames": "Wai Tong;Zhutian Chen;Meng Xia;Leo Yu-Ho Lo;Linping Yuan;Benjamin Bach;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Hong Kong, USA;Harvard University, USA;Carnegie Mellon University, USA;Hong Kong University of Science and Technology, Hong Kong, USA;Hong Kong University of Science and Technology, Hong Kong, USA;University of Edinburgh, United Kingdom;Hong Kong University of Science and Technology, Hong Kong, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467201;10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114806;10.1109/tvcg.2021.3114861;10.1109/tvcg.2019.2934283;10.1109/tvcg.2020.3030334;10.1109/tvcg.2013.121;10.1109/tvcg.2013.134;10.1109/tvcg.2017.2744319;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2020.3028948;10.1109/tvcg.2010.177;10.1109/tvcg.2014.2346249;10.1109/tvcg.2015.2467091;10.1109/tvcg.2018.2865152;10.1109/tvcg.2012.237;10.1109/tvcg.2020.3030392;10.1109/tvcg.2007.70515;10.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2599211",
                "AuthorKeywords": "Interaction design,augmented reality,paper interaction,tangible user interface,printed data visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 84,
                "DownloadsXplore": 1055,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 147,
                "i": [
                    147
                ]
            }
        },
        {
            "name": "Shuai Chen 0001",
            "value": 28,
            "numPapers": 47,
            "cluster": "1",
            "visible": 1,
            "index": 1128,
            "x": 210.30332175741776,
            "y": -261.9589907939715,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "E-Map: A Visual Analytics Approach for Exploring Significant Event Evolutions in Social Media",
                "DOI": "10.1109/vast.2017.8585638",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585638",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "Significant events are often discussed and spread through social media, involving many people. Reposting activities and opinions expressed in social media offer good opportunities to understand the evolution of events. However, the dynamics of reposting activities and the diversity of user comments pose challenges to understand event-related social media data. We propose E-Map, a visual analytics approach that uses map-like visualization tools to help multi-faceted analysis of social media data on a significant event and in-depth understanding of the development of the event. E-Map transforms extracted keywords, messages, and reposting behaviors into map features such as cities, towns, and rivers to build a structured and semantic space for users to explore. It also visualizes complex posting and reposting behaviors as simple trajectories and connections that can be easily followed. By supporting multi-level spatial temporal exploration, E-Map helps to reveal the patterns of event development and key players in an event, disclosing the ways they shape and affect the development of the event. Two cases analysing real-world events confirm the capacities of E-Map in facilitating the analysis of event evolution with social media data.",
                "AuthorNamesDeduped": "Siming Chen 0001;Shuai Chen 0001;Lijing Lin;Xiaoru Yuan;Jie Liang 0004;Xiaolong Zhang 0001",
                "AuthorNames": "Siming Chen;Shuai Chen;Lijing Lin;Xiaoru Yuan;Jie Liang;Xiaolong Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Faculty of Engineer and Information Technology, The University of Technology, Sydney, Australia;College of Information Sciences and Technology, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2013.186;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/vast.2012.6400557;10.1109/vast.2016.7883510;10.1109/tvcg.2015.2467619;10.1109/tvcg.2014.2346433;10.1109/tvcg.2010.129;10.1109/vast.2012.6400485;10.1109/tvcg.2013.162;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70582;10.1109/tvcg.2016.2598590;10.1109/tvcg.2015.2467554;10.1109/vast.2015.7347632;10.1109/tvcg.2013.196;10.1109/vast.2011.6102456;10.1109/tvcg.2016.2598919;10.1109/tvcg.2009.171;10.1109/vast.2016.7883511;10.1109/tvcg.2015.2467691;10.1109/tvcg.2014.2346920;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346922;10.1109/vast.2014.7042496;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Social Media,Event Analysis,Map-like Visual Metaphor,Spatial Temporal Visual Analytics",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 994,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 871,
                "i": [
                    871
                ]
            }
        },
        {
            "name": "Arjun Choudhry",
            "value": 8,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 1129,
            "x": 21.88932332995756,
            "y": 335.36675077317454,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality",
                "DOI": "10.1109/tvcg.2020.3030358",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030358",
                "FirstPage": 1332,
                "LastPage": 1342,
                "PaperType": "J",
                "Abstract": "Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations-causal graphs and Hasse diagrams-with and without an associated textual narrative. Finally, we describe Causeworks, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. We validate Causeworks through interviews with experts who used the system for understanding complex events.",
                "AuthorNamesDeduped": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorNames": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorAffiliation": "Virginia Tech, Arlington, VA, USA;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA;Uncharted Software, Toronto, Canada;Uncharted Software, Toronto, Canada;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA",
                "InternalReferences": "0.1109/infvis.2003.1249025;10.1109/tvcg.2011.255;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70528;10.1109/tvcg.2018.2865022;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744843;10.1109/tvcg.2015.2467931;10.1109/tvcg.2019.2934399",
                "AuthorKeywords": "Causality visualization,natural language generation,data-driven storytelling,temporal data,quantitative studies",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 855,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 492,
                "i": [
                    492
                ]
            }
        },
        {
            "name": "Mandar Sharma",
            "value": 8,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 1130,
            "x": -242.78489413880573,
            "y": -232.6058795000868,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality",
                "DOI": "10.1109/tvcg.2020.3030358",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030358",
                "FirstPage": 1332,
                "LastPage": 1342,
                "PaperType": "J",
                "Abstract": "Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations-causal graphs and Hasse diagrams-with and without an associated textual narrative. Finally, we describe Causeworks, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. We validate Causeworks through interviews with experts who used the system for understanding complex events.",
                "AuthorNamesDeduped": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorNames": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorAffiliation": "Virginia Tech, Arlington, VA, USA;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA;Uncharted Software, Toronto, Canada;Uncharted Software, Toronto, Canada;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA",
                "InternalReferences": "0.1109/infvis.2003.1249025;10.1109/tvcg.2011.255;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70528;10.1109/tvcg.2018.2865022;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744843;10.1109/tvcg.2015.2467931;10.1109/tvcg.2019.2934399",
                "AuthorKeywords": "Causality visualization,natural language generation,data-driven storytelling,temporal data,quantitative studies",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 855,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 492,
                "i": [
                    492
                ]
            }
        },
        {
            "name": "Pramod Chundury",
            "value": 18,
            "numPapers": 19,
            "cluster": "5",
            "visible": 1,
            "index": 1131,
            "x": 336.29367709255257,
            "y": 7.52082093723945,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "TactualPlot: Spatializing Data as Sound Using Sensory Substitution for Touchscreen Accessibility",
                "DOI": "10.1109/tvcg.2023.3326937",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326937",
                "FirstPage": 836,
                "LastPage": 846,
                "PaperType": "J",
                "Abstract": "Tactile graphics are one of the best ways for a blind person to perceive a chart using touch, but their fabrication is often costly, time-consuming, and does not lend itself to dynamic exploration. Refreshable haptic displays tend to be expensive and thus unavailable to most blind individuals. We propose TactualPlot, an approach to sensory substitution where touch interaction yields auditory (sonified) feedback. The technique relies on embodied cognition for spatial awareness—i.e., individuals can perceive 2D touch locations of their fingers with reference to other 2D locations such as the relative locations of other fingers or chart characteristics that are visualized on touchscreens. Combining touch and sound in this way yields a scalable data exploration method for scatterplots where the data density under the user's fingertips is sampled. The sample regions can optionally be scaled based on how quickly the user moves their hand. Our development of TactualPlot was informed by formative design sessions with a blind collaborator, whose practice while using tactile scatterplots caused us to expand the technique for multiple fingers. We present results from an evaluation comparing our TactualPlot interaction technique to tactile graphics printed on swell touch paper.",
                "AuthorNamesDeduped": "Pramod Chundury;Yasmin Reyazuddin;J. Bern Jordan;Jonathan Lazar;Niklas Elmqvist",
                "AuthorNames": "Pramod Chundury;Yasmin Reyazuddin;J. Bern Jordan;Jonathan Lazar;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland, College Park, College Park, MD, USA;National Federation of the Blind, Baltimore, MD, USA;University of Maryland, College Park, College Park, MD, USA;University of Maryland, College Park, College Park, MD, USA;Aarhus University, Aarhus, Denmark",
                "InternalReferences": "10.1109/tvcg.2013.124;10.1109/tvcg.2021.3114829;10.1109/tvcg.2021.3114846;10.1109/tvcg.2018.2865237;10.1109/tvcg.2017.2744184;10.1109/tvcg.2016.2598498",
                "AuthorKeywords": "Accessibility,sonification,multimodal interaction,crossmodal interaction,visualization",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 212,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 18,
                "i": [
                    18
                ]
            }
        },
        {
            "name": "Thomas Kapler",
            "value": 185,
            "numPapers": 15,
            "cluster": "3",
            "visible": 1,
            "index": 1132,
            "x": -253.164530034717,
            "y": 221.7154048150487,
            "vy": 0,
            "vx": 0,
            "r": 1.2130109383995396,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality",
                "DOI": "10.1109/tvcg.2020.3030358",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030358",
                "FirstPage": 1332,
                "LastPage": 1342,
                "PaperType": "J",
                "Abstract": "Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations-causal graphs and Hasse diagrams-with and without an associated textual narrative. Finally, we describe Causeworks, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. We validate Causeworks through interviews with experts who used the system for understanding complex events.",
                "AuthorNamesDeduped": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorNames": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorAffiliation": "Virginia Tech, Arlington, VA, USA;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA;Uncharted Software, Toronto, Canada;Uncharted Software, Toronto, Canada;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA",
                "InternalReferences": "0.1109/infvis.2003.1249025;10.1109/tvcg.2011.255;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70528;10.1109/tvcg.2018.2865022;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744843;10.1109/tvcg.2015.2467931;10.1109/tvcg.2019.2934399",
                "AuthorKeywords": "Causality visualization,natural language generation,data-driven storytelling,temporal data,quantitative studies",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 855,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 492,
                "i": [
                    492
                ]
            }
        },
        {
            "name": "Derek W. S. Gray",
            "value": 8,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 1133,
            "x": 36.92533322700185,
            "y": -334.6438700560267,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality",
                "DOI": "10.1109/tvcg.2020.3030358",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030358",
                "FirstPage": 1332,
                "LastPage": 1342,
                "PaperType": "J",
                "Abstract": "Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations-causal graphs and Hasse diagrams-with and without an associated textual narrative. Finally, we describe Causeworks, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. We validate Causeworks through interviews with experts who used the system for understanding complex events.",
                "AuthorNamesDeduped": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorNames": "Arjun Choudhry;Mandar Sharma;Pramod Chundury;Thomas Kapler;Derek W. S. Gray;Naren Ramakrishnan;Niklas Elmqvist",
                "AuthorAffiliation": "Virginia Tech, Arlington, VA, USA;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA;Uncharted Software, Toronto, Canada;Uncharted Software, Toronto, Canada;Virginia Tech, Arlington, VA, USA;University of Maryland, College Park, MD, USA",
                "InternalReferences": "0.1109/infvis.2003.1249025;10.1109/tvcg.2011.255;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2007.70528;10.1109/tvcg.2018.2865022;10.1109/tvcg.2010.179;10.1109/tvcg.2018.2865145;10.1109/tvcg.2017.2744843;10.1109/tvcg.2015.2467931;10.1109/tvcg.2019.2934399",
                "AuthorKeywords": "Causality visualization,natural language generation,data-driven storytelling,temporal data,quantitative studies",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 855,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 492,
                "i": [
                    492
                ]
            }
        },
        {
            "name": "Xiaolong Zhang 0001",
            "value": 42,
            "numPapers": 47,
            "cluster": "1",
            "visible": 1,
            "index": 1134,
            "x": 198.90877767942868,
            "y": 271.81850224382373,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Structure-Based Suggestive Exploration: A New Approach for Effective Exploration of Large Networks",
                "DOI": "10.1109/tvcg.2018.2865139",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865139",
                "FirstPage": 555,
                "LastPage": 565,
                "PaperType": "J",
                "Abstract": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.",
                "AuthorNamesDeduped": "Wei Chen 0001;Fangzhou Guo;Dongming Han;Jacheng Pan;Xiaotao Nie;Jiazhi Xia;Xiaolong Zhang 0001",
                "AuthorNames": "Wei Chen;Fangzhou Guo;Dongming Han;Jacheng Pan;Xiaotao Nie;Jiazhi Xia;Xiaolong Zhang",
                "AuthorAffiliation": "Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Zhejiang University, Hangzhou, Zhejiang, CN;Central South University, Changsha, Hunan, CN;Pennsylvania State University, University Park, PA, US",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2016.2598958;10.1109/infvis.2004.1;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70582;10.1109/tvcg.2006.147;10.1109/tvcg.2008.151;10.1109/tvcg.2017.2743858;10.1109/vast.2014.7042485;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744898;10.1109/tvcg.2017.2745219;10.1109/tvcg.2015.2468078;10.1109/tvcg.2009.108;10.1109/vast.2009.5333893;10.1109/tvcg.2013.167",
                "AuthorKeywords": "Large Network Exploration,Structure-Based Exploration,Suggestive Exploration",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 1434,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 672,
                "i": [
                    672
                ]
            }
        },
        {
            "name": "Xinlong Zhang",
            "value": 8,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1135,
            "x": -330.42543212022235,
            "y": -66.09866721927392,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Abstraction of Geographical Point Data with Spatial Autocorrelations",
                "DOI": "10.1109/vast50239.2020.00011",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00011",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics and State Key Lab of CAD & CG, Zhejiang University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Computer Sciences and Engineering, Central South University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2006.161;10.1109/tvcg.2019.2934670;10.1109/tvcg.2008.175;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934799;10.1109/tvcg.2016.2598432;10.1109/tvcg.2016.2598831;10.1109/tvcg.2006.170;10.1109/tvcg.2010.180;10.1109/tvcg.2014.2346265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2019.2934208;10.1109/tvcg.2017.2744098;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Visual Abstraction,Spatial Autocorrelation,Sampling,Geospatial Analysis",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 601,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 496,
                "i": [
                    496
                ]
            }
        },
        {
            "name": "Zhendong Yang",
            "value": 8,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1136,
            "x": 288.42135643606076,
            "y": -174.5368761940662,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Abstraction of Geographical Point Data with Spatial Autocorrelations",
                "DOI": "10.1109/vast50239.2020.00011",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00011",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics and State Key Lab of CAD & CG, Zhejiang University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Computer Sciences and Engineering, Central South University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2006.161;10.1109/tvcg.2019.2934670;10.1109/tvcg.2008.175;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934799;10.1109/tvcg.2016.2598432;10.1109/tvcg.2016.2598831;10.1109/tvcg.2006.170;10.1109/tvcg.2010.180;10.1109/tvcg.2014.2346265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2019.2934208;10.1109/tvcg.2017.2744098;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Visual Abstraction,Spatial Autocorrelation,Sampling,Geospatial Analysis",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 601,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 496,
                "i": [
                    496
                ]
            }
        },
        {
            "name": "Yuanyuan Chen",
            "value": 8,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1137,
            "x": -94.81665296287467,
            "y": 323.6661896474789,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Abstraction of Geographical Point Data with Spatial Autocorrelations",
                "DOI": "10.1109/vast50239.2020.00011",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00011",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics and State Key Lab of CAD & CG, Zhejiang University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Computer Sciences and Engineering, Central South University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2006.161;10.1109/tvcg.2019.2934670;10.1109/tvcg.2008.175;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934799;10.1109/tvcg.2016.2598432;10.1109/tvcg.2016.2598831;10.1109/tvcg.2006.170;10.1109/tvcg.2010.180;10.1109/tvcg.2014.2346265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2019.2934208;10.1109/tvcg.2017.2744098;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Visual Abstraction,Spatial Autocorrelation,Sampling,Geospatial Analysis",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 601,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 496,
                "i": [
                    496
                ]
            }
        },
        {
            "name": "Jin Wen",
            "value": 8,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1138,
            "x": -148.78387708652863,
            "y": -302.84213365894243,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Abstraction of Geographical Point Data with Spatial Autocorrelations",
                "DOI": "10.1109/vast50239.2020.00011",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00011",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics and State Key Lab of CAD & CG, Zhejiang University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Computer Sciences and Engineering, Central South University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2006.161;10.1109/tvcg.2019.2934670;10.1109/tvcg.2008.175;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934799;10.1109/tvcg.2016.2598432;10.1109/tvcg.2016.2598831;10.1109/tvcg.2006.170;10.1109/tvcg.2010.180;10.1109/tvcg.2014.2346265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2019.2934208;10.1109/tvcg.2017.2744098;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Visual Abstraction,Spatial Autocorrelation,Sampling,Geospatial Analysis",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 601,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 496,
                "i": [
                    496
                ]
            }
        },
        {
            "name": "Binjie Chen",
            "value": 8,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1139,
            "x": 314.4135139580131,
            "y": 122.85822007734892,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Abstraction of Geographical Point Data with Spatial Autocorrelations",
                "DOI": "10.1109/vast50239.2020.00011",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00011",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets.",
                "AuthorNamesDeduped": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao 0001;Wei Chen 0001",
                "AuthorNames": "Zhiguang Zhou;Xinlong Zhang;Zhendong Yang;Yuanyuan Chen;Yuhua Liu;Jin Wen;Binjie Chen;Ying Zhao;Wei Chen",
                "AuthorAffiliation": "School of Information, Zhejiang University of Finance and Economics and State Key Lab of CAD & CG, Zhejiang University;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Information, Zhejiang University of Finance and Economics;School of Computer Sciences and Engineering, Central South University;State Key Lab of CAD & CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/tvcg.2016.2598862;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2006.161;10.1109/tvcg.2019.2934670;10.1109/tvcg.2008.175;10.1109/tvcg.2007.70535;10.1109/tvcg.2010.176;10.1109/tvcg.2019.2934799;10.1109/tvcg.2016.2598432;10.1109/tvcg.2016.2598831;10.1109/tvcg.2006.170;10.1109/tvcg.2010.180;10.1109/tvcg.2014.2346265;10.1109/tvcg.2014.2346746;10.1109/tvcg.2019.2934208;10.1109/tvcg.2017.2744098;10.1109/vast47406.2019.8986943;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Visual Abstraction,Spatial Autocorrelation,Sampling,Geospatial Analysis",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 601,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 496,
                "i": [
                    496
                ]
            }
        },
        {
            "name": "Haidong Chen",
            "value": 147,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 1140,
            "x": -314.96638814380435,
            "y": 121.84487818388756,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Abstraction and Exploration of Multi-class Scatterplots",
                "DOI": "10.1109/tvcg.2014.2346594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346594",
                "FirstPage": 1683,
                "LastPage": 1692,
                "PaperType": "J",
                "Abstract": "Scatterplots are widely used to visualize scatter dataset for exploring outliers, clusters, local trends, and correlations. Depicting multi-class scattered points within a single scatterplot view, however, may suffer from heavy overdraw, making it inefficient for data analysis. This paper presents a new visual abstraction scheme that employs a hierarchical multi-class sampling technique to show a feature-preserving simplification. To enhance the density contrast, the colors of multiple classes are optimized by taking the multi-class point distributions into account. We design a visual exploration system that supports visual inspection and quantitative analysis from different perspectives. We have applied our system to several challenging datasets, and the results demonstrate the efficiency of our approach.",
                "AuthorNamesDeduped": "Haidong Chen;Wei Chen 0001;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen 0002;Wentao Gu;Kwan-Liu Ma",
                "AuthorNames": "Haidong Chen;Wei Chen;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen;Wentao Gu;Kwan-Liu Ma",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;Cyber Innovation Joint Research Center, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Zhejiang University of Finance & Economics;Zhejiang GongShang University;University of California at Davis",
                "InternalReferences": "0.1109/tvcg.2013.150;10.1109/tvcg.2008.119;10.1109/visual.1998.745301;10.1109/tvcg.2008.120;10.1109/tvcg.2010.197;10.1109/tvcg.2006.187;10.1109/tvcg.2007.70623;10.1109/tvcg.2013.180;10.1109/infvis.2004.52;10.1109/vast.2010.5652460;10.1109/tvcg.2009.112;10.1109/tvcg.2009.122;10.1109/tvcg.2011.181;10.1109/tvcg.2012.238;10.1109/tvcg.2010.176;10.1109/tvcg.2013.212;10.1109/tvcg.2011.261;10.1109/tvcg.2008.153;10.1109/tvcg.2013.183",
                "AuthorKeywords": "Scatterplot, overdraw reduction, sampling, visual abstraction",
                "AminerCitationCount": 96,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1254,
                "i": [
                    1254
                ]
            }
        },
        {
            "name": "Zhiqi Liu",
            "value": 147,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 1141,
            "x": 150.00710016825192,
            "y": -302.7339919782911,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Abstraction and Exploration of Multi-class Scatterplots",
                "DOI": "10.1109/tvcg.2014.2346594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346594",
                "FirstPage": 1683,
                "LastPage": 1692,
                "PaperType": "J",
                "Abstract": "Scatterplots are widely used to visualize scatter dataset for exploring outliers, clusters, local trends, and correlations. Depicting multi-class scattered points within a single scatterplot view, however, may suffer from heavy overdraw, making it inefficient for data analysis. This paper presents a new visual abstraction scheme that employs a hierarchical multi-class sampling technique to show a feature-preserving simplification. To enhance the density contrast, the colors of multiple classes are optimized by taking the multi-class point distributions into account. We design a visual exploration system that supports visual inspection and quantitative analysis from different perspectives. We have applied our system to several challenging datasets, and the results demonstrate the efficiency of our approach.",
                "AuthorNamesDeduped": "Haidong Chen;Wei Chen 0001;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen 0002;Wentao Gu;Kwan-Liu Ma",
                "AuthorNames": "Haidong Chen;Wei Chen;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen;Wentao Gu;Kwan-Liu Ma",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;Cyber Innovation Joint Research Center, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Zhejiang University of Finance & Economics;Zhejiang GongShang University;University of California at Davis",
                "InternalReferences": "0.1109/tvcg.2013.150;10.1109/tvcg.2008.119;10.1109/visual.1998.745301;10.1109/tvcg.2008.120;10.1109/tvcg.2010.197;10.1109/tvcg.2006.187;10.1109/tvcg.2007.70623;10.1109/tvcg.2013.180;10.1109/infvis.2004.52;10.1109/vast.2010.5652460;10.1109/tvcg.2009.112;10.1109/tvcg.2009.122;10.1109/tvcg.2011.181;10.1109/tvcg.2012.238;10.1109/tvcg.2010.176;10.1109/tvcg.2013.212;10.1109/tvcg.2011.261;10.1109/tvcg.2008.153;10.1109/tvcg.2013.183",
                "AuthorKeywords": "Scatterplot, overdraw reduction, sampling, visual abstraction",
                "AminerCitationCount": 96,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1254,
                "i": [
                    1254
                ]
            }
        },
        {
            "name": "Kun Zhou",
            "value": 147,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 1142,
            "x": 93.92441994692474,
            "y": 324.69709474775675,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Abstraction and Exploration of Multi-class Scatterplots",
                "DOI": "10.1109/tvcg.2014.2346594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346594",
                "FirstPage": 1683,
                "LastPage": 1692,
                "PaperType": "J",
                "Abstract": "Scatterplots are widely used to visualize scatter dataset for exploring outliers, clusters, local trends, and correlations. Depicting multi-class scattered points within a single scatterplot view, however, may suffer from heavy overdraw, making it inefficient for data analysis. This paper presents a new visual abstraction scheme that employs a hierarchical multi-class sampling technique to show a feature-preserving simplification. To enhance the density contrast, the colors of multiple classes are optimized by taking the multi-class point distributions into account. We design a visual exploration system that supports visual inspection and quantitative analysis from different perspectives. We have applied our system to several challenging datasets, and the results demonstrate the efficiency of our approach.",
                "AuthorNamesDeduped": "Haidong Chen;Wei Chen 0001;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen 0002;Wentao Gu;Kwan-Liu Ma",
                "AuthorNames": "Haidong Chen;Wei Chen;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen;Wentao Gu;Kwan-Liu Ma",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;Cyber Innovation Joint Research Center, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Zhejiang University of Finance & Economics;Zhejiang GongShang University;University of California at Davis",
                "InternalReferences": "0.1109/tvcg.2013.150;10.1109/tvcg.2008.119;10.1109/visual.1998.745301;10.1109/tvcg.2008.120;10.1109/tvcg.2010.197;10.1109/tvcg.2006.187;10.1109/tvcg.2007.70623;10.1109/tvcg.2013.180;10.1109/infvis.2004.52;10.1109/vast.2010.5652460;10.1109/tvcg.2009.112;10.1109/tvcg.2009.122;10.1109/tvcg.2011.181;10.1109/tvcg.2012.238;10.1109/tvcg.2010.176;10.1109/tvcg.2013.212;10.1109/tvcg.2011.261;10.1109/tvcg.2008.153;10.1109/tvcg.2013.183",
                "AuthorKeywords": "Scatterplot, overdraw reduction, sampling, visual abstraction",
                "AminerCitationCount": 96,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1254,
                "i": [
                    1254
                ]
            }
        },
        {
            "name": "Wentao Gu",
            "value": 147,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 1143,
            "x": -288.7129487537632,
            "y": -176.05349534135038,
            "vy": 0,
            "vx": 0,
            "r": 1.1692573402417963,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Abstraction and Exploration of Multi-class Scatterplots",
                "DOI": "10.1109/tvcg.2014.2346594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346594",
                "FirstPage": 1683,
                "LastPage": 1692,
                "PaperType": "J",
                "Abstract": "Scatterplots are widely used to visualize scatter dataset for exploring outliers, clusters, local trends, and correlations. Depicting multi-class scattered points within a single scatterplot view, however, may suffer from heavy overdraw, making it inefficient for data analysis. This paper presents a new visual abstraction scheme that employs a hierarchical multi-class sampling technique to show a feature-preserving simplification. To enhance the density contrast, the colors of multiple classes are optimized by taking the multi-class point distributions into account. We design a visual exploration system that supports visual inspection and quantitative analysis from different perspectives. We have applied our system to several challenging datasets, and the results demonstrate the efficiency of our approach.",
                "AuthorNamesDeduped": "Haidong Chen;Wei Chen 0001;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen 0002;Wentao Gu;Kwan-Liu Ma",
                "AuthorNames": "Haidong Chen;Wei Chen;Honghui Mei;Zhiqi Liu;Kun Zhou;Weifeng Chen;Wentao Gu;Kwan-Liu Ma",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;Cyber Innovation Joint Research Center, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Zhejiang University of Finance & Economics;Zhejiang GongShang University;University of California at Davis",
                "InternalReferences": "0.1109/tvcg.2013.150;10.1109/tvcg.2008.119;10.1109/visual.1998.745301;10.1109/tvcg.2008.120;10.1109/tvcg.2010.197;10.1109/tvcg.2006.187;10.1109/tvcg.2007.70623;10.1109/tvcg.2013.180;10.1109/infvis.2004.52;10.1109/vast.2010.5652460;10.1109/tvcg.2009.112;10.1109/tvcg.2009.122;10.1109/tvcg.2011.181;10.1109/tvcg.2012.238;10.1109/tvcg.2010.176;10.1109/tvcg.2013.212;10.1109/tvcg.2011.261;10.1109/tvcg.2008.153;10.1109/tvcg.2013.183",
                "AuthorKeywords": "Scatterplot, overdraw reduction, sampling, visual abstraction",
                "AminerCitationCount": 96,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 2545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1254,
                "i": [
                    1254
                ]
            }
        },
        {
            "name": "Andrés Lalama",
            "value": 7,
            "numPapers": 26,
            "cluster": "3",
            "visible": 1,
            "index": 1144,
            "x": 331.95542420394565,
            "y": -65.23493191211656,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "Visual Neural Decomposition to Explain Multivariate Data Sets",
                "DOI": "10.1109/tvcg.2020.3030420",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030420",
                "FirstPage": 1374,
                "LastPage": 1384,
                "PaperType": "J",
                "Abstract": "Investigating relationships between variables in multi-dimensional data sets is a common task for data analysts and engineers. More specifically, it is often valuable to understand which ranges of which input variables lead to particular values of a given target variable. Unfortunately, with an increasing number of independent variables, this process may become cumbersome and time-consuming due to the many possible combinations that have to be explored. In this paper, we propose a novel approach to visualize correlations between input variables and a target output variable that scales to hundreds of variables. We developed a visual model based on neural networks that can be explored in a guided way to help analysts find and understand such correlations. First, we train a neural network to predict the target from the input variables. Then, we visualize the inner workings of the resulting model to help understand relations within the data set. We further introduce a new regularization term for the backpropagation algorithm that encourages the neural network to learn representations that are easier to interpret visually. We apply our method to artificial and real-world data sets to show its utility.",
                "AuthorNamesDeduped": "Johannes Knittel;Andrés Lalama;Steffen Koch 0001;Thomas Ertl",
                "AuthorNames": "Johannes Knittel;Andres Lalama;Steffen Koch;Thomas Ertl",
                "AuthorAffiliation": "University of Stuttgart;University of Stuttgart;University of Stuttgart;University of Stuttgart",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/vast.2008.4677368;10.1109/vast.2014.7042480;10.1109/tvcg.2011.188;10.1109/tvcg.2018.2865043;10.1109/tvcg.2019.2934251;10.1109/tvcg.2013.157;10.1109/tvcg.2015.2467199;10.1109/vast.2017.8585613;10.1109/infvis.2005.1532138;10.1109/tvcg.2017.2744718;10.1109/tvcg.2015.2468291;10.1109/tvcg.2014.2346482;10.1109/vast.2012.6400491;10.1109/vast.2011.6102448;10.1109/tvcg.2017.2745158;10.1109/tvcg.2013.125;10.1109/tvcg.2006.170;10.1109/tvcg.2018.2864838;10.1109/tvcg.2014.2346321;10.1109/tvcg.2017.2744158;10.1109/vast.2009.5332628;10.1109/vast.2012.6400488;10.1109/tvcg.2012.256;10.1109/vast.2011.6102453;10.1109/tvcg.2015.2467191;10.1109/tvcg.2016.2598831",
                "AuthorKeywords": "Visual Analytics,Multivariate Data Analysis,Machine Learning",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 64,
                "DownloadsXplore": 623,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 498,
                "i": [
                    498
                ]
            }
        },
        {
            "name": "Christopher Andrews 0001",
            "value": 65,
            "numPapers": 26,
            "cluster": "4",
            "visible": 1,
            "index": 1145,
            "x": -200.79569993903667,
            "y": 272.4538252364836,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "VizCept: Supporting synchronous collaboration for constructing visualizations in intelligence analysis",
                "DOI": "10.1109/vast.2010.5652932",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5652932",
                "FirstPage": 107,
                "LastPage": 114,
                "PaperType": "C",
                "Abstract": "In this paper, we present a new web-based visual analytics system, VizCept, which is designed to support fluid, collaborative analysis of large textual intelligence datasets. The main approach of the design is to combine individual workspace and shared visualization in an integrated environment. Collaborating analysts will be able to identify concepts and relationships from the dataset based on keyword searches in their own workspace and collaborate visually with other analysts using visualization tools such as a concept map view and a timeline view. The system allows analysts to parallelize the work by dividing initial sets of concepts, investigating them on their own workspace, and then integrating individual findings automatically on shared visualizations with support for interaction and personal graph layout in real time, in order to develop a unified plot. We highlight several design considerations that promote communication and analytic performance in small team synchronous collaboration. We report the result of a pair of case study applications including collaboration and communication methods, analysis strategies, and user behaviors under a competition setting in the same location at the same time. The results of these demonstrate the tool's effectiveness for synchronous collaborative construction and use of visualizations in intelligence data analysis.",
                "AuthorNamesDeduped": "Haeyong Chung;Seungwon Yang;Naveed Massjouni;Christopher Andrews 0001;Rahul Kanna;Chris North 0001",
                "AuthorNames": "Haeyong Chung;Seungwon Yang;Naveed Massjouni;Christopher Andrews;Rahul Kanna;Chris North",
                "AuthorAffiliation": "Department of Computer Science, Virginia Technology, USA;Department of Computer Science, Virginia Technology, USA;Department of Computer Science, Virginia Technology, USA;Department of Computer Science, Virginia Technology, USA;Department of Computer Science, Virginia Technology, USA;Department of Computer Science, Virginia Technology, USA",
                "InternalReferences": "0.1109/tvcg.2009.148;10.1109/vast.2007.4389006;10.1109/vast.2009.5333245;10.1109/vast.2008.4677362;10.1109/tvcg.2007.70577;10.1109/vast.2008.4677366",
                "AuthorKeywords": "Collaborative visualization, text and document data, intelligence analysis",
                "AminerCitationCount": 48,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 684,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1730,
                "i": [
                    1730
                ]
            }
        },
        {
            "name": "Lauren Bradel",
            "value": 71,
            "numPapers": 21,
            "cluster": "4",
            "visible": 1,
            "index": 1146,
            "x": -35.995115845514206,
            "y": -336.68137999489664,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Multi-Model Semantic Interaction for Text Analytics",
                "DOI": "10.1109/vast.2014.7042492",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042492",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Semantic interaction offers an intuitive communication mechanism between human users and complex statistical models. By shielding the users from manipulating model parameters, they focus instead on directly manipulating the spatialization, thus remaining in their cognitive zone. However, this technique is not inherently scalable past hundreds of text documents. To remedy this, we present the concept of multi-model semantic interaction, where semantic interactions can be used to steer multiple models at multiple levels of data scale, enabling users to tackle larger data problems. We also present an updated visualization pipeline model for generalized multi-model semantic interaction. To demonstrate multi-model semantic interaction, we introduce StarSPIRE, a visual text analytics prototype that transforms user interactions on documents into both small-scale display layout updates as well as large-scale relevancy-based document selection.",
                "AuthorNamesDeduped": "Lauren Bradel;Chris North 0001;Leanna House;Scotland Leman",
                "AuthorNames": "Lauren Bradel;Chris North;Leanna House;Scotland Leman",
                "AuthorAffiliation": "Virginia Tech;Virginia Tech;Virginia Tech;Virginia Tech",
                "InternalReferences": "0.1109/vast.2011.6102449;10.1109/tvcg.2013.188;10.1109/vast.2012.6400559;10.1109/vast.2012.6400486;10.1109/infvis.1995.528686;10.1109/vast.2007.4389006;10.1109/vast.2007.4389032",
                "AuthorKeywords": "Visual analytics, Semantic Interaction, Sensemaking, Text Analytics",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 659,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1261,
                "i": [
                    1261
                ]
            }
        },
        {
            "name": "Jingjing Liu",
            "value": 165,
            "numPapers": 7,
            "cluster": "4",
            "visible": 1,
            "index": 1147,
            "x": 254.0774158762537,
            "y": 224.04166295947107,
            "vy": 0,
            "vx": 0,
            "r": 1.1899827288428324,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Dis-function: Learning distance functions interactively",
                "DOI": "10.1109/vast.2012.6400486",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400486",
                "FirstPage": 83,
                "LastPage": 92,
                "PaperType": "C",
                "Abstract": "The world's corpora of data grow in size and complexity every day, making it increasingly difficult for experts to make sense out of their data. Although machine learning offers algorithms for finding patterns in data automatically, they often require algorithm-specific parameters, such as an appropriate distance function, which are outside the purview of a domain expert. We present a system that allows an expert to interact directly with a visual representation of the data to define an appropriate distance function, thus avoiding direct manipulation of obtuse model parameters. Adopting an iterative approach, our system first assumes a uniformly weighted Euclidean distance function and projects the data into a two-dimensional scatterplot view. The user can then move incorrectly-positioned data points to locations that reflect his or her understanding of the similarity of those data points relative to the other data points. Based on this input, the system performs an optimization to learn a new distance function and then re-projects the data to redraw the scatter-plot. We illustrate empirically that with only a few iterations of interaction and optimization, a user can achieve a scatterplot view and its corresponding distance function that reflect the user's knowledge of the data. In addition, we evaluate our system to assess scalability in data size and data dimension, and show that our system is computationally efficient and can provide an interactive or near-interactive user experience.",
                "AuthorNamesDeduped": "Eli T. Brown;Jingjing Liu;Carla E. Brodley;Remco Chang",
                "AuthorNames": "Eli T. Brown;Jingjing Liu;Carla E. Brodley;Remco Chang",
                "AuthorAffiliation": "Department of Computer Science Tufts University;Department of Computer Science Tufts University;Department of Computer Science Tufts University;Department of Computer Science Tufts University",
                "InternalReferences": "0.1109/visual.1990.146402;10.1109/vast.2011.6102449;10.1109/vast.2007.4388999;10.1109/vast.2009.5332584;10.1109/vast.2011.6102448;10.1109/vast.2008.4677352;10.1109/vast.2010.5652443",
                "AuthorKeywords": null,
                "AminerCitationCount": 241,
                "CitationCountCrossRef": 129,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1387,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1486,
                "i": [
                    1486
                ]
            }
        },
        {
            "name": "Carla E. Brodley",
            "value": 159,
            "numPapers": 6,
            "cluster": "4",
            "visible": 1,
            "index": 1148,
            "x": -338.83429171645383,
            "y": 6.428277919403075,
            "vy": 0,
            "vx": 0,
            "r": 1.1830742659758204,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Dis-function: Learning distance functions interactively",
                "DOI": "10.1109/vast.2012.6400486",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400486",
                "FirstPage": 83,
                "LastPage": 92,
                "PaperType": "C",
                "Abstract": "The world's corpora of data grow in size and complexity every day, making it increasingly difficult for experts to make sense out of their data. Although machine learning offers algorithms for finding patterns in data automatically, they often require algorithm-specific parameters, such as an appropriate distance function, which are outside the purview of a domain expert. We present a system that allows an expert to interact directly with a visual representation of the data to define an appropriate distance function, thus avoiding direct manipulation of obtuse model parameters. Adopting an iterative approach, our system first assumes a uniformly weighted Euclidean distance function and projects the data into a two-dimensional scatterplot view. The user can then move incorrectly-positioned data points to locations that reflect his or her understanding of the similarity of those data points relative to the other data points. Based on this input, the system performs an optimization to learn a new distance function and then re-projects the data to redraw the scatter-plot. We illustrate empirically that with only a few iterations of interaction and optimization, a user can achieve a scatterplot view and its corresponding distance function that reflect the user's knowledge of the data. In addition, we evaluate our system to assess scalability in data size and data dimension, and show that our system is computationally efficient and can provide an interactive or near-interactive user experience.",
                "AuthorNamesDeduped": "Eli T. Brown;Jingjing Liu;Carla E. Brodley;Remco Chang",
                "AuthorNames": "Eli T. Brown;Jingjing Liu;Carla E. Brodley;Remco Chang",
                "AuthorAffiliation": "Department of Computer Science Tufts University;Department of Computer Science Tufts University;Department of Computer Science Tufts University;Department of Computer Science Tufts University",
                "InternalReferences": "0.1109/visual.1990.146402;10.1109/vast.2011.6102449;10.1109/vast.2007.4388999;10.1109/vast.2009.5332584;10.1109/vast.2011.6102448;10.1109/vast.2008.4677352;10.1109/vast.2010.5652443",
                "AuthorKeywords": null,
                "AminerCitationCount": 241,
                "CitationCountCrossRef": 129,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1387,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1486,
                "i": [
                    1486
                ]
            }
        },
        {
            "name": "Patrick Fiaux",
            "value": 75,
            "numPapers": 4,
            "cluster": "4",
            "visible": 1,
            "index": 1149,
            "x": 245.61047905316778,
            "y": -233.72097162914898,
            "vy": 0,
            "vx": 0,
            "r": 1.0863557858376511,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering",
                "DOI": "10.1109/tvcg.2012.260",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.260",
                "FirstPage": 2879,
                "LastPage": 2888,
                "PaperType": "J",
                "Abstract": "Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.",
                "AuthorNamesDeduped": "Alex Endert;Patrick Fiaux;Chris North 0001",
                "AuthorNames": "Alex Endert;Patrick Fiaux;Chris North",
                "AuthorAffiliation": "Virginia Polytechnic Institute and State University, USA;Virginia Polytechnic Institute and State University, USA;Virginia Polytechnic Institute and State University, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/vast.2012.6400559;10.1109/vast.2011.6102449;10.1109/vast.2011.6102438;10.1109/vast.2007.4389006",
                "AuthorKeywords": "User Interaction, visualization, sensemaking, analytic reasoning, visual analytics",
                "AminerCitationCount": 163,
                "CitationCountCrossRef": 99,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1333,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1489,
                "i": [
                    1489
                ]
            }
        },
        {
            "name": "Marcel Worring",
            "value": 67,
            "numPapers": 26,
            "cluster": "3",
            "visible": 1,
            "index": 1150,
            "x": -23.23937736438175,
            "y": 338.3931608938276,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Towards Interactive, Intelligent, and Integrated Multimedia Analytics",
                "DOI": "10.1109/vast.2014.7042476",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042476",
                "FirstPage": 3,
                "LastPage": 12,
                "PaperType": "C",
                "Abstract": "The size and importance of visual multimedia collections grew rapidly over the last years, creating a need for sophisticated multimedia analytics systems enabling large-scale, interactive, and insightful analysis. These systems need to integrate the human's natural expertise in analyzing multimedia with the machine's ability to process large-scale data. The paper starts off with a comprehensive overview of representation, learning, and interaction techniques from both the human's and the machine's point of view. To this end, hundreds of references from the related disciplines (visual analytics, information visualization, computer vision, multimedia information retrieval) have been surveyed. Based on the survey, a novel general multimedia analytics model is synthesized. In the model, the need for semantic navigation of the collection is emphasized and multimedia analytics tasks are placed on the exploration-search axis. The axis is composed of both exploration and search in a certain proportion which changes as the analyst progresses towards insight. Categorization is proposed as a suitable umbrella task realizing the exploration-search axis in the model. Finally, the pragmatic gap, defined as the difference between the tight machine categorization model and the flexible human categorization model is identified as a crucial multimedia analytics topic.",
                "AuthorNamesDeduped": "Jan Zahálka;Marcel Worring",
                "AuthorNames": "Jan Zahálka;Marcel Worring",
                "AuthorAffiliation": "University of Amsterdam;University of Amsterdam",
                "InternalReferences": "0.1109/vast.2006.261425;10.1109/vast.2007.4389003;10.1109/infvis.2005.1532136;10.1109/tvcg.2010.136;10.1109/tvcg.2007.70515;10.1109/tvcg.2007.70541;10.1109/tvcg.2013.168",
                "AuthorKeywords": "Multimedia (image/video/music) visualization, machine learning",
                "AminerCitationCount": 48,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 100,
                "DownloadsXplore": 850,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1270,
                "i": [
                    1270
                ]
            }
        },
        {
            "name": "Christopher Mears",
            "value": 52,
            "numPapers": 11,
            "cluster": "2",
            "visible": 1,
            "index": 1151,
            "x": -211.53717504121497,
            "y": -265.3337965197467,
            "vy": 0,
            "vx": 0,
            "r": 1.059873344847438,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Edge Compression Techniques for Visualization of Dense Directed Graphs",
                "DOI": "10.1109/tvcg.2013.151",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.151",
                "FirstPage": 2596,
                "LastPage": 2605,
                "PaperType": "J",
                "Abstract": "We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules'-or groups of nodes-such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition which permits internal structure in modules and allows them to be nested; and Power Graph Analysis which further allows edges to cross module boundaries. These techniques all have the same goal-to compress the set of edges that need to be rendered to fully convey connectivity-but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by-and discuss in particular-the application to software dependency analysis.",
                "AuthorNamesDeduped": "Tim Dwyer;Nathalie Henry Riche;Kim Marriott;Christopher Mears",
                "AuthorNames": "Tim Dwyer;Nathalie Henry Riche;Kim Marriott;Christopher Mears",
                "AuthorAffiliation": "Monash University, Australia;Microsoft Research, USA;Monash University, Australia;Monash University, Australia",
                "InternalReferences": "0.1109/tvcg.2009.165;10.1109/tvcg.2011.233;10.1109/tvcg.2006.120;10.1109/infvis.2004.66;10.1109/tvcg.2012.208",
                "AuthorKeywords": "Directed graphs, networks, modular decomposition, power graph analysis",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 1002,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1322,
                "i": [
                    1322
                ]
            }
        },
        {
            "name": "Jonathan Zhang",
            "value": 41,
            "numPapers": 38,
            "cluster": "1",
            "visible": 1,
            "index": 1152,
            "x": 335.356856338192,
            "y": 52.78047846472468,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analysis of High-Dimensional Event Sequence Data via Dynamic Hierarchical Aggregation",
                "DOI": "10.1109/tvcg.2019.2934661",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934661",
                "FirstPage": 440,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "Temporal event data are collected across a broad range of domains, and a variety of visual analytics techniques have been developed to empower analysts working with this form of data. These techniques generally display aggregate statistics computed over sets of event sequences that share common patterns. Such techniques are often hindered, however, by the high-dimensionality of many real-world event sequence datasets which can prevent effective aggregation. A common coping strategy for this challenge is to group event types together prior to visualization, as a pre-process, so that each group can be represented within an analysis as a single event type. However, computing these event groupings as a pre-process also places significant constraints on the analysis. This paper presents a new visual analytics approach for dynamic hierarchical dimension aggregation. The approach leverages a predefined hierarchy of dimensions to computationally quantify the informativeness, with respect to a measure of interest, of alternative levels of grouping within the hierarchy at runtime. This information is then interactively visualized, enabling users to dynamically explore the hierarchy to select the most appropriate level of grouping to use at any individual step within an analysis. Key contributions include an algorithm for interactively determining the most informative set of event groupings for a specific analysis context, and a scented scatter-plus-focus visualization design with an optimization-based layout algorithm that supports interactive hierarchical exploration of alternative event type groupings. We apply these techniques to high-dimensional event sequence data from the medical domain and report findings from domain expert interviews.",
                "AuthorNamesDeduped": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorNames": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorAffiliation": "School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Biostatistics, University of North Carolina, Chapel Hill;School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Computer Science, University of North Carolina, Chapel Hill;RENCI, University of North Carolina, Chapel Hill",
                "InternalReferences": "0.1109/tvcg.2019.2934209;10.1109/tvcg.2017.2745278;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2018.2864886;10.1109/tvcg.2013.200;10.1109/vast.2011.6102443;10.1109/infvis.2005.1532152;10.1109/infvis.2000.885091;10.1109/tvcg.2017.2744686;10.1109/tvcg.2009.108;10.1109/tvcg.2007.70589;10.1109/vast.2014.7042487;10.1109/tvcg.2012.238",
                "AuthorKeywords": "Temporal event sequence visualization,visual analytics,hierarchical aggregation,medical informatics",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1035,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 627,
                "i": [
                    627
                ]
            }
        },
        {
            "name": "Mingjie Tang",
            "value": 5,
            "numPapers": 28,
            "cluster": "1",
            "visible": 1,
            "index": 1153,
            "x": -283.05713134549063,
            "y": 187.6929950596496,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "STULL: Unbiased Online Sampling for Visual Exploration of Large Spatiotemporal Data",
                "DOI": "10.1109/vast50239.2020.00012",
                "Link": "http://dx.doi.org/10.1109/VAST50239.2020.00012",
                "FirstPage": 72,
                "LastPage": 83,
                "PaperType": "C",
                "Abstract": "Online sampling-supported visual analytics is increasingly important, as it allows users to explore large datasets with acceptable approximate answers at interactive rates. However, existing online spatiotemporal sampling techniques are often biased, as most researchers have primarily focused on reducing computational latency. Biased sampling approaches select data with unequal probabilities and produce results that do not match the exact data distribution, leading end users to incorrect interpretations. In this paper, we propose a novel approach to perform unbiased online sampling of large spatiotemporal data. The proposed approach ensures the same probability of selection to every point that qualifies the specifications of a user’s multidimensional query. To achieve unbiased sampling for accurate representative interactive visualizations, we design a novel data index and an associated sample retrieval plan. Our proposed sampling approach is suitable for a wide variety of visual analytics tasks, e.g., tasks that run aggregate queries of spatiotemporal data. Extensive experiments confirm the superiority of our approach over a state-of-the-art spatial online sampling technique, demonstrating that within the same computational time, data samples generated in our approach are at least 50% more accurate in representing the actual spatial distribution of the data and enable approximate visualizations to present closer visual appearances to the exact ones.",
                "AuthorNamesDeduped": "Guizhen Wang;Jingjing Guo;Mingjie Tang;José Florencio de Queiroz Neto;Calvin Yau;Anas Daghistani;Morteza Karimzadeh;Walid G. Aref;David S. Ebert",
                "AuthorNames": "Guizhen Wang;Jingjing Guo;Mingjie Tang;José Florencio de Queiroz Neto;Calvin Yau;Anas Daghistani;Morteza Karimzadeh;Walid G. Aref;David S. Ebert",
                "AuthorAffiliation": "Purdue University;Purdue University;Chinese Academy of Science;Federal University of Ceara;Purdue University;Umm Al-Qura University;University of Colorado Boulder;Purdue University;University of Oklahoma",
                "InternalReferences": "0.1109/vast.2012.6400557;10.1109/vast.2008.4677357;10.1109/tvcg.2014.2346594;10.1109/tvcg.2019.2934541;10.1109/tvcg.2019.2934799;10.1109/tvcg.2013.179;10.1109/tvcg.2019.2934434;10.1109/tvcg.2014.2346452;10.1109/tvcg.2014.2346926;10.1109/tvcg.2016.2598624;10.1109/tvcg.2019.2934208;10.1109/tvcg.2016.2598867",
                "AuthorKeywords": "Geospatial data,large-scale data techniques,data management,visual analytics",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 67,
                "DownloadsXplore": 302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 514,
                "i": [
                    514
                ]
            }
        },
        {
            "name": "Morteza Karimzadeh",
            "value": 27,
            "numPapers": 72,
            "cluster": "1",
            "visible": 1,
            "index": 1154,
            "x": 81.96823004914536,
            "y": -329.74415728350726,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "VASSL: A Visual Analytics Toolkit for Social Spambot Labeling",
                "DOI": "10.1109/tvcg.2019.2934266",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934266",
                "FirstPage": 874,
                "LastPage": 883,
                "PaperType": "J",
                "Abstract": "Social media platforms are filled with social spambots. Detecting these malicious accounts is essential, yet challenging, as they continually evolve to evade detection techniques. In this article, we present VASSL, a visual analytics system that assists in the process of detecting and labeling spambots. Our tool enhances the performance and scalability of manual labeling by providing multiple connected views and utilizing dimensionality reduction, sentiment analysis and topic modeling, enabling insights for the identification of spambots. The system allows users to select and analyze groups of accounts in an interactive manner, which enables the detection of spambots that may not be identified when examined individually. We present a user study to objectively evaluate the performance of VASSL users, as well as capturing subjective opinions about the usefulness and the ease of use of the tool.",
                "AuthorNamesDeduped": "Mosab Khayat;Morteza Karimzadeh;Jieqiong Zhao;David S. Ebert",
                "AuthorNames": "Mosab Khayat;Morteza Karimzadeh;Jieqiong Zhao;David S. Ebert",
                "AuthorAffiliation": "Purdue University;University of Colorado Boulder;Purdue University;Purdue University",
                "InternalReferences": "0.1109/tvcg.2015.2467196;10.1109/vast.2012.6400557;10.1109/vast.2016.7883510;10.1109/tvcg.2017.2745083;10.1109/tvcg.2017.2745080;10.1109/tvcg.2013.153;10.1109/tvcg.2014.2346920;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spambot,Labeling,Detection,Visual Analytics,Social Media Annotation",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 621,
                "i": [
                    621
                ]
            }
        },
        {
            "name": "Edward R. van Selow",
            "value": 155,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1155,
            "x": 162.36843016119215,
            "y": 298.641077025566,
            "vy": 0,
            "vx": 0,
            "r": 1.178468624064479,
            "node": {
                "Conference": "InfoVis",
                "Year": 1999,
                "Title": "Cluster and calendar based visualization of time series data",
                "DOI": "10.1109/infvis.1999.801851",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1999.801851",
                "FirstPage": 4,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "A new method is presented to get an insight into univariate time series data. The problem addressed is how to identify patterns and trends on multiple time scales (days, weeks, seasons) simultaneously. The solution presented is to cluster similar daily data patterns, and to visualize the average patterns as graphs and the corresponding days on a calendar. This presentation provides a quick insight into both standard and exceptional patterns. Furthermore, it is well suited to interactive exploration. Two applications, numbers of employees present and energy consumption, are presented.",
                "AuthorNamesDeduped": "Jarke J. van Wijk;Edward R. van Selow",
                "AuthorNames": "J.J. Van Wijk;E.R. Van Selow",
                "AuthorAffiliation": "Department of Mathematics and Computing Science, Eindhovan University of Technology, Eindhoven, Netherlands;Netherlands Energy Research Foundation, Petten, Netherlands",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 446,
                "CitationCountCrossRef": 142,
                "PubsCitedCrossRef": 7,
                "DownloadsXplore": 2638,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3031,
                "i": [
                    3031
                ]
            }
        },
        {
            "name": "Iain Dillingham",
            "value": 34,
            "numPapers": 6,
            "cluster": "5",
            "visible": 1,
            "index": 1156,
            "x": -321.5936437099811,
            "y": -110.57815482878048,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Creative User-Centered Visualization Design for Energy Analysts and Modelers",
                "DOI": "10.1109/tvcg.2013.145",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.145",
                "FirstPage": 2516,
                "LastPage": 2525,
                "PaperType": "J",
                "Abstract": "We enhance a user-centered design process with techniques that deliberately promote creativity to identify opportunities for the visualization of data generated by a major energy supplier. Visualization prototypes developed in this way prove effective in a situation whereby data sets are largely unknown and requirements open - enabling successful exploration of possibilities for visualization in Smart Home data analysis. The process gives rise to novel designs and design metaphors including data sculpting. It suggests: that the deliberate use of creativity techniques with data stakeholders is likely to contribute to successful, novel and effective solutions; that being explicit about creativity may contribute to designers developing creative solutions; that using creativity techniques early in the design process may result in a creative approach persisting throughout the process. The work constitutes the first systematic visualization design for a data rich source that will be increasingly important to energy suppliers and consumers as Smart Meter technology is widely deployed. It is novel in explicitly employing creativity techniques at the requirements stage of visualization design and development, paving the way for further use and study of creativity methods in visualization design.",
                "AuthorNamesDeduped": "Sarah Goodwin;Jason Dykes;Sara Jones 0001;Iain Dillingham;Graham Dove;Alison Duffy;Alexander Kachkaev;Aidan Slingsby;Jo Wood",
                "AuthorNames": "Sarah Goodwin;Jason Dykes;Sara Jones;Iain Dillingham;Graham Dove;Alison Duffy;Alexander Kachkaev;Aidan Slingsby;Jo Wood",
                "AuthorAffiliation": "GiCentre, City University London, UK;GiCentre, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;GiCentre, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;GiCentre, City University London, UK;GiCentre, City University London, UK;GiCentre, City University London, UK",
                "InternalReferences": "0.1109/tvcg.2010.191;10.1109/tvcg.2012.213;10.1109/tvcg.2011.196;10.1109/tvcg.2007.70539;10.1109/infvis.1999.801851;10.1109/tvcg.2011.209",
                "AuthorKeywords": "Creativity techniques, user-centered design, data visualization, smart home, energy consumption",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 52,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1925,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1313,
                "i": [
                    1313
                ]
            }
        },
        {
            "name": "Graham Dove",
            "value": 34,
            "numPapers": 5,
            "cluster": "5",
            "visible": 1,
            "index": 1157,
            "x": 311.9624006980383,
            "y": -135.7551492604113,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Creative User-Centered Visualization Design for Energy Analysts and Modelers",
                "DOI": "10.1109/tvcg.2013.145",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.145",
                "FirstPage": 2516,
                "LastPage": 2525,
                "PaperType": "J",
                "Abstract": "We enhance a user-centered design process with techniques that deliberately promote creativity to identify opportunities for the visualization of data generated by a major energy supplier. Visualization prototypes developed in this way prove effective in a situation whereby data sets are largely unknown and requirements open - enabling successful exploration of possibilities for visualization in Smart Home data analysis. The process gives rise to novel designs and design metaphors including data sculpting. It suggests: that the deliberate use of creativity techniques with data stakeholders is likely to contribute to successful, novel and effective solutions; that being explicit about creativity may contribute to designers developing creative solutions; that using creativity techniques early in the design process may result in a creative approach persisting throughout the process. The work constitutes the first systematic visualization design for a data rich source that will be increasingly important to energy suppliers and consumers as Smart Meter technology is widely deployed. It is novel in explicitly employing creativity techniques at the requirements stage of visualization design and development, paving the way for further use and study of creativity methods in visualization design.",
                "AuthorNamesDeduped": "Sarah Goodwin;Jason Dykes;Sara Jones 0001;Iain Dillingham;Graham Dove;Alison Duffy;Alexander Kachkaev;Aidan Slingsby;Jo Wood",
                "AuthorNames": "Sarah Goodwin;Jason Dykes;Sara Jones;Iain Dillingham;Graham Dove;Alison Duffy;Alexander Kachkaev;Aidan Slingsby;Jo Wood",
                "AuthorAffiliation": "GiCentre, City University London, UK;GiCentre, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;GiCentre, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;GiCentre, City University London, UK;GiCentre, City University London, UK;GiCentre, City University London, UK",
                "InternalReferences": "0.1109/tvcg.2010.191;10.1109/tvcg.2012.213;10.1109/tvcg.2011.196;10.1109/tvcg.2007.70539;10.1109/infvis.1999.801851;10.1109/tvcg.2011.209",
                "AuthorKeywords": "Creativity techniques, user-centered design, data visualization, smart home, energy consumption",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 52,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1925,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1313,
                "i": [
                    1313
                ]
            }
        },
        {
            "name": "Alison Duffy",
            "value": 34,
            "numPapers": 5,
            "cluster": "5",
            "visible": 1,
            "index": 1158,
            "x": -138.38982059779892,
            "y": 310.963434433872,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Creative User-Centered Visualization Design for Energy Analysts and Modelers",
                "DOI": "10.1109/tvcg.2013.145",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.145",
                "FirstPage": 2516,
                "LastPage": 2525,
                "PaperType": "J",
                "Abstract": "We enhance a user-centered design process with techniques that deliberately promote creativity to identify opportunities for the visualization of data generated by a major energy supplier. Visualization prototypes developed in this way prove effective in a situation whereby data sets are largely unknown and requirements open - enabling successful exploration of possibilities for visualization in Smart Home data analysis. The process gives rise to novel designs and design metaphors including data sculpting. It suggests: that the deliberate use of creativity techniques with data stakeholders is likely to contribute to successful, novel and effective solutions; that being explicit about creativity may contribute to designers developing creative solutions; that using creativity techniques early in the design process may result in a creative approach persisting throughout the process. The work constitutes the first systematic visualization design for a data rich source that will be increasingly important to energy suppliers and consumers as Smart Meter technology is widely deployed. It is novel in explicitly employing creativity techniques at the requirements stage of visualization design and development, paving the way for further use and study of creativity methods in visualization design.",
                "AuthorNamesDeduped": "Sarah Goodwin;Jason Dykes;Sara Jones 0001;Iain Dillingham;Graham Dove;Alison Duffy;Alexander Kachkaev;Aidan Slingsby;Jo Wood",
                "AuthorNames": "Sarah Goodwin;Jason Dykes;Sara Jones;Iain Dillingham;Graham Dove;Alison Duffy;Alexander Kachkaev;Aidan Slingsby;Jo Wood",
                "AuthorAffiliation": "GiCentre, City University London, UK;GiCentre, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;GiCentre, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;Centre for Creativity in Professional Practice, City University London, UK;GiCentre, City University London, UK;GiCentre, City University London, UK;GiCentre, City University London, UK",
                "InternalReferences": "0.1109/tvcg.2010.191;10.1109/tvcg.2012.213;10.1109/tvcg.2011.196;10.1109/tvcg.2007.70539;10.1109/infvis.1999.801851;10.1109/tvcg.2011.209",
                "AuthorKeywords": "Creativity techniques, user-centered design, data visualization, smart home, energy consumption",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 52,
                "PubsCitedCrossRef": 57,
                "DownloadsXplore": 1925,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1313,
                "i": [
                    1313
                ]
            }
        },
        {
            "name": "Alexander Kachkaev",
            "value": 72,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1159,
            "x": -108.055040662299,
            "y": -322.9150169742325,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Design Exposition with Literate Visualization",
                "DOI": "10.1109/tvcg.2018.2864836",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864836",
                "FirstPage": 759,
                "LastPage": 768,
                "PaperType": "J",
                "Abstract": "We propose a new approach to the visualization design and communication process, literate visualization, based upon and extending, Donald Knuth's idea of literate programming. It integrates the process of writing data visualization code with description of the design choices that led to the implementation (design exposition). We develop a model of design exposition characterised by four visualization designer architypes: the evaluator, the autonomist, the didacticist and the rationalist. The model is used to justify the key characteristics of literate visualization: `notebook' documents that integrate live coding input, rendered output and textual narrative; low cost of authoring textual narrative; guidelines to encourage structured visualization design and its documentation. We propose narrative schemas for structuring and validating a wide range of visualization design approaches and models, and branching narratives for capturing alternative designs and design views. We describe a new open source literate visualization environment, litvis, based on a declarative interface to Vega and Vega-Lite through the functional programming language Elm combined with markdown for formatted narrative. We informally assess the approach, its implementation and potential by considering three examples spanning a range of design abstractions: new visualization idioms; validation though visualization algebra; and feminist data visualization. We argue that the rich documentation of the design process provided by literate visualization offers the potential to improve the validity of visualization design and so benefit both academic visualization and visualization practice.",
                "AuthorNamesDeduped": "Jo Wood;Alexander Kachkaev;Jason Dykes",
                "AuthorNames": "Jo Wood;Alexander Kachkaev;Jason Dykes",
                "AuthorAffiliation": "University of London, London, London, GB;University of London, London, London, GB;University of London, London, London, GB",
                "InternalReferences": "0.1109/tvcg.2013.145;10.1109/tvcg.2014.2346325;10.1109/tvcg.2017.2744319;10.1109/tvcg.2011.209;10.1109/tvcg.2014.2346331;10.1109/tvcg.2016.2598542;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467271;10.1109/tvcg.2016.2599030;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2346323",
                "AuthorKeywords": "storytelling,design,literate programming,theory",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1174,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 671,
                "i": [
                    671
                ]
            }
        },
        {
            "name": "Stephen Smart",
            "value": 23,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1160,
            "x": 297.93077471417064,
            "y": 165.18853918542317,
            "vy": 0,
            "vx": 0,
            "r": 1.0264824409902131,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Color Crafting: Automating the Construction of Designer Quality Color Ramps",
                "DOI": "10.1109/tvcg.2019.2934284",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934284",
                "FirstPage": 1215,
                "LastPage": 1225,
                "PaperType": "J",
                "Abstract": "Visualizations often encode numeric data using sequential and diverging color ramps. Effective ramps use colors that are sufficiently discriminable, align well with the data, and are aesthetically pleasing. Designers rely on years of experience to create high-quality color ramps. However, it is challenging for novice visualization developers that lack this experience to craft effective ramps as most guidelines for constructing ramps are loosely defined qualitative heuristics that are often difficult to apply. Our goal is to enable visualization developers to readily create effective color encodings using a single seed color. We do this using an algorithmic approach that models designer practices by analyzing patterns in the structure of designer-crafted color ramps. We construct these models from a corpus of 222 expert-designed color ramps, and use the results to automatically generate ramps that mimic designer practices. We evaluate our approach through an empirical study comparing the outputs of our approach with designer-crafted color ramps. Our models produce ramps that support accurate and aesthetically pleasing visualizations at least as well as designer ramps and that outperform conventional mathematical approaches.",
                "AuthorNamesDeduped": "Stephen Smart;Keke Wu;Danielle Albers Szafir",
                "AuthorNames": "Stephen Smart;Keke Wu;Danielle Albers Szafir",
                "AuthorAffiliation": "University of Colorado Boulder;University of Colorado Boulder;University of Colorado Boulder",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2017.2743978;10.1109/tvcg.2014.2346978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2008.174;10.1109/tvcg.2012.279;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599106;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865147;10.1109/tvcg.2017.2744359;10.1109/tvcg.2014.2346277;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization,Aesthetics in Visualization,Color Perception,Visual Design,Design Mining",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 998,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 531,
                "i": [
                    531
                ]
            }
        },
        {
            "name": "Keke Wu",
            "value": 34,
            "numPapers": 17,
            "cluster": "5",
            "visible": 1,
            "index": 1161,
            "x": -331.4108317751056,
            "y": 79.47868004775059,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Color Crafting: Automating the Construction of Designer Quality Color Ramps",
                "DOI": "10.1109/tvcg.2019.2934284",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934284",
                "FirstPage": 1215,
                "LastPage": 1225,
                "PaperType": "J",
                "Abstract": "Visualizations often encode numeric data using sequential and diverging color ramps. Effective ramps use colors that are sufficiently discriminable, align well with the data, and are aesthetically pleasing. Designers rely on years of experience to create high-quality color ramps. However, it is challenging for novice visualization developers that lack this experience to craft effective ramps as most guidelines for constructing ramps are loosely defined qualitative heuristics that are often difficult to apply. Our goal is to enable visualization developers to readily create effective color encodings using a single seed color. We do this using an algorithmic approach that models designer practices by analyzing patterns in the structure of designer-crafted color ramps. We construct these models from a corpus of 222 expert-designed color ramps, and use the results to automatically generate ramps that mimic designer practices. We evaluate our approach through an empirical study comparing the outputs of our approach with designer-crafted color ramps. Our models produce ramps that support accurate and aesthetically pleasing visualizations at least as well as designer ramps and that outperform conventional mathematical approaches.",
                "AuthorNamesDeduped": "Stephen Smart;Keke Wu;Danielle Albers Szafir",
                "AuthorNames": "Stephen Smart;Keke Wu;Danielle Albers Szafir",
                "AuthorAffiliation": "University of Colorado Boulder;University of Colorado Boulder;University of Colorado Boulder",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2017.2743978;10.1109/tvcg.2014.2346978;10.1109/tvcg.2016.2598918;10.1109/tvcg.2008.174;10.1109/tvcg.2012.279;10.1109/tvcg.2018.2865240;10.1109/tvcg.2016.2599106;10.1109/tvcg.2017.2744320;10.1109/tvcg.2018.2865147;10.1109/tvcg.2017.2744359;10.1109/tvcg.2014.2346277;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Visualization,Aesthetics in Visualization,Color Perception,Visual Design,Design Mining",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 998,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 531,
                "i": [
                    531
                ]
            }
        },
        {
            "name": "Thilo Spinner",
            "value": 57,
            "numPapers": 25,
            "cluster": "3",
            "visible": 1,
            "index": 1162,
            "x": 190.7670242218654,
            "y": -282.591476286059,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning",
                "DOI": "10.1109/tvcg.2019.2934629",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934629",
                "FirstPage": 1064,
                "LastPage": 1074,
                "PaperType": "J",
                "Abstract": "We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models. Our framework combines an iterative XAI pipeline with eight global monitoring and steering mechanisms, including quality monitoring, provenance tracking, model comparison, and trust building. To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed a user-study with nine participants across different expertise levels to examine their perception of our workflow and to collect suggestions to fill the gap between our system and framework. The evaluation confirms that our tightly integrated system leads to an informed machine learning process while disclosing opportunities for further extensions.",
                "AuthorNamesDeduped": "Thilo Spinner;Udo Schlegel;Hanna Schäfer;Mennatallah El-Assady",
                "AuthorNames": "Thilo Spinner;Udo Schlegel;Hanna Schäfer;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2019.2934654;10.1109/tvcg.2017.2745080;10.1109/tvcg.2018.2864769;10.1109/tvcg.2017.2744718;10.1109/vast.2017.8585720;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864812;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2014.2346481;10.1109/tvcg.2018.2864838;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475;10.1109/vast.2017.8585721",
                "AuthorKeywords": "Explainable AI,Interactive Machine Learning,Deep Learning,Visual Analytics,Interpretability,Explainability",
                "AminerCitationCount": 143,
                "CitationCountCrossRef": 90,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 6281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 599,
                "i": [
                    599
                ]
            }
        },
        {
            "name": "Zeyu Wang 0005",
            "value": 16,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 1163,
            "x": 50.24372912144324,
            "y": 337.38044946909866,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "Improving the Robustness of Scagnostics",
                "DOI": "10.1109/tvcg.2019.2934796",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934796",
                "FirstPage": 759,
                "LastPage": 769,
                "PaperType": "J",
                "Abstract": "In this paper, we examine the robustness of scagnostics through a series of theoretical and empirical studies. First, we investigate the sensitivity of scagnostics by employing perturbing operations on more than 60M synthetic and real-world scatterplots. We found that two scagnostic measures, Outlying and Clumpy, are overly sensitive to data binning. To understand how these measures align with human judgments of visual features, we conducted a study with 24 participants, which reveals that i) humans are not sensitive to small perturbations of the data that cause large changes in both measures, and ii) the perception of clumpiness heavily depends on per-cluster topologies and structures. Motivated by these results, we propose Robust Scagnostics (RScag) by combining adaptive binning with a hierarchy-based form of scagnostics. An analysis shows that RScag improves on the robustness of original scagnostics, aligns better with human judgments, and is equally fast as the traditional scagnostic measures.",
                "AuthorNamesDeduped": "Yunhai Wang;Zeyu Wang 0005;Tingting Liu;Michael Correll;Zhanglin Cheng;Oliver Deussen;Michael Sedlmair",
                "AuthorNames": "Yunhai Wang;Zeyu Wang;Tingting Liu;Michael Correll;Zhanglin Cheng;Oliver Deussen;Michael Sedlmair",
                "AuthorAffiliation": "Shandong University;Shandong University and Shenzhen VisuCA Key Lab, SIAT, China;Shandong University;Tableau Research;Shenzhen VisuCA Key Lab, SIAT, China;Konstanz University, Germany and Shenzhen VisuCA Key Lab, SIAT, China;VISUS, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/vast.2010.5652433;10.1109/tvcg.2015.2467323;10.1109/vast.2012.6400490;10.1109/vast.2008.4677368;10.1109/tvcg.2016.2598467;10.1109/tvcg.2011.229;10.1109/vast.2010.5652460;10.1109/vast.2009.5332611;10.1109/tvcg.2018.2864907;10.1109/tvcg.2014.2346572;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346979;10.1109/tvcg.2015.2467671;10.1109/tvcg.2017.2744339;10.1109/tvcg.2013.153;10.1109/vast.2009.5332628;10.1109/infvis.2005.1532142",
                "AuthorKeywords": "Scagnostics,scatterplots,sensitivity analysis,Robust Scagnostics",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 548,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 541,
                "i": [
                    541
                ]
            }
        },
        {
            "name": "Andrada Tatu",
            "value": 273,
            "numPapers": 34,
            "cluster": "6",
            "visible": 1,
            "index": 1164,
            "x": -265.05921387001416,
            "y": -214.9269949136453,
            "vy": 0,
            "vx": 0,
            "r": 1.31433506044905,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Quality Metrics in High-Dimensional Data Visualization: An Overview and Systematization",
                "DOI": "10.1109/tvcg.2011.229",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.229",
                "FirstPage": 2203,
                "LastPage": 2212,
                "PaperType": "J",
                "Abstract": "In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.",
                "AuthorNamesDeduped": "Enrico Bertini;Andrada Tatu;Daniel A. Keim",
                "AuthorNames": "Enrico Bertini;Andrada Tatu;Daniel Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532145;10.1109/vast.2010.5652433;10.1109/vast.2006.261423;10.1109/tvcg.2010.184;10.1109/tvcg.2010.179;10.1109/infvis.2004.15;10.1109/tvcg.2006.161;10.1109/tvcg.2007.70515;10.1109/infvis.2005.1532142;10.1109/visual.1990.146402;10.1109/infvis.2003.1249006;10.1109/visual.1990.146386;10.1109/tvcg.2006.138;10.1109/infvis.2004.59;10.1109/vast.2009.5332628;10.1109/infvis.2003.1249015;10.1109/vast.2010.5652450;10.1109/tvcg.2007.70535;10.1109/infvis.1998.729559;10.1109/infvis.2000.885092;10.1109/infvis.2004.3;10.1109/tvcg.2009.153;10.1109/infvis.1997.636794",
                "AuthorKeywords": "Quality Metrics, High-Dimensional Data Visualization",
                "AminerCitationCount": 311,
                "CitationCountCrossRef": 182,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 5055,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1538,
                "i": [
                    1538
                ]
            }
        },
        {
            "name": "Jinsong Wang",
            "value": 37,
            "numPapers": 46,
            "cluster": "1",
            "visible": 1,
            "index": 1165,
            "x": 340.7737376459516,
            "y": -20.573277104247243,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analytics for Electromagnetic Situation Awareness in Radio Monitoring and Management",
                "DOI": "10.1109/tvcg.2019.2934655",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934655",
                "FirstPage": 590,
                "LastPage": 600,
                "PaperType": "J",
                "Abstract": "Traditional radio monitoring and management largely depend on radio spectrum data analysis, which requires considerable domain experience and heavy cognition effort and frequently results in incorrect signal judgment and incomprehensive situation awareness. Faced with increasingly complicated electromagnetic environments, radio supervisors urgently need additional data sources and advanced analytical technologies to enhance their situation awareness ability. This paper introduces a visual analytics approach for electromagnetic situation awareness. Guided by a detailed scenario and requirement analysis, we first propose a signal clustering method to process radio signal data and a situation assessment model to obtain qualitative and quantitative descriptions of the electromagnetic situations. We then design a two-module interface with a set of visualization views and interactions to help radio supervisors perceive and understand the electromagnetic situations by a joint analysis of radio signal data and radio spectrum data. Evaluations on real-world data sets and an interview with actual users demonstrate the effectiveness of our prototype system. Finally, we discuss the limitations of the proposed approach and provide future work directions.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Automation, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;Southwest Electric & Telecom Engineering Institute, Shanghai, China;Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2018.2865028;10.1109/tvcg.2016.2598619;10.1109/tvcg.2008.166;10.1109/tvcg.2015.2467196;10.1109/vast.2014.7042479;10.1109/tvcg.2016.2598460;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2745180;10.1109/tvcg.2018.2865077;10.1109/tvcg.2018.2865029;10.1109/tvcg.2010.193;10.1109/tvcg.2014.2346911;10.1109/tvcg.2011.179;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532134;10.1109/tvcg.2014.2346926;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.228;10.1109/tvcg.2017.2744098;10.1109/tvcg.2014.2346913;10.1109/tvcg.2016.2598664;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Radio monitoring and management,radio signal data,radio spectrum data,situation awareness,visual analytics",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1548,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 603,
                "i": [
                    603
                ]
            }
        },
        {
            "name": "Jesse Grosjean",
            "value": 72,
            "numPapers": 0,
            "cluster": "4",
            "visible": 1,
            "index": 1166,
            "x": -237.48071341684164,
            "y": 245.46468331519287,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "InfoVis",
                "Year": 2002,
                "Title": "SpaceTree: supporting exploration in large node link tree, design evolution and empirical evaluation",
                "DOI": "10.1109/infvis.2002.1173148",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2002.1173148",
                "FirstPage": 57,
                "LastPage": 64,
                "PaperType": "C",
                "Abstract": "We present a novel tree browser that builds on the conventional node link tree diagrams. It adds dynamic rescaling of branches of the tree to best fit the available screen space, optimized camera movement, and the use of preview icons summarizing the topology of the branches that cannot be expanded. In addition, it includes integrated search and filter functions. This paper reflects on the evolution of the design and highlights the principles that emerged from it. A controlled experiment showed benefits for navigation to already previously visited nodes and estimation of overall tree topology.",
                "AuthorNamesDeduped": "Catherine Plaisant;Jesse Grosjean;Benjamin B. Bederson",
                "AuthorNames": "C. Plaisant;J. Grosjean;B.B. Bederson",
                "AuthorAffiliation": "Human-Computer Interaction Laboratory, University of Maryland, USA;Human-Computer Interaction Laboratory, University of Maryland, USA;Human-Computer Interaction Laboratory, University of Maryland, USA",
                "InternalReferences": "0.1109/visual.1996.567745",
                "AuthorKeywords": null,
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 71,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 2131,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2741,
                "i": [
                    2741
                ]
            }
        },
        {
            "name": "Benjamin B. Bederson",
            "value": 138,
            "numPapers": 13,
            "cluster": "4",
            "visible": 1,
            "index": 1167,
            "x": 9.30586228778572,
            "y": -341.5602449452816,
            "vy": 0,
            "vx": 0,
            "r": 1.1588946459412781,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations",
                "DOI": "10.1109/tvcg.2015.2467051",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467051",
                "FirstPage": 688,
                "LastPage": 697,
                "PaperType": "J",
                "Abstract": "Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g. selection, comparison) have commonly been designed in separation for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations inluding subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university.",
                "AuthorNamesDeduped": "Mehmet Adil Yalçin;Niklas Elmqvist;Benjamin B. Bederson",
                "AuthorNames": "M. Adil Yalçin;Niklas Elmqvist;Benjamin B. Bederson",
                "AuthorAffiliation": "University of Maryland, College Park;University of Maryland, College Park;University of Maryland, College Park",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2013.184;10.1109/tvcg.2011.185;10.1109/tvcg.2009.122;10.1109/tvcg.2007.70535;10.1109/tvcg.2008.144;10.1109/infvis.2004.1;10.1109/tvcg.2007.70539;10.1109/tvcg.2008.141;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.210;10.1109/tvcg.2014.2346249",
                "AuthorKeywords": "Multi-valued attributes, sets, visualization, set visualization, data exploration, interaction, design, scalability",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 869,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1037,
                "i": [
                    1037
                ]
            }
        },
        {
            "name": "Yating Wei",
            "value": 61,
            "numPapers": 39,
            "cluster": "1",
            "visible": 1,
            "index": 1168,
            "x": 223.95462760772597,
            "y": 258.2524438859868,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Evaluating Perceptual Bias During Geometric Scaling of Scatterplots",
                "DOI": "10.1109/tvcg.2019.2934208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934208",
                "FirstPage": 321,
                "LastPage": 331,
                "PaperType": "J",
                "Abstract": "Scatterplots are frequently scaled to fit display areas in multi-view and multi-device data analysis environments. A common method used for scaling is to enlarge or shrink the entire scatterplot together with the inside points synchronously and proportionally. This process is called geometric scaling. However, geometric scaling of scatterplots may cause a perceptual bias, that is, the perceived and physical values of visual features may be dissociated with respect to geometric scaling. For example, if a scatterplot is projected from a laptop to a large projector screen, then observers may feel that the scatterplot shown on the projector has fewer points than that viewed on the laptop. This paper presents an evaluation study on the perceptual bias of visual features in scatterplots caused by geometric scaling. The study focuses on three fundamental visual features (i.e., numerosity, correlation, and cluster separation) and three hypotheses that are formulated on the basis of our experience. We carefully design three controlled experiments by using well-prepared synthetic data and recruit participants to complete the experiments on the basis of their subjective experience. With a detailed analysis of the experimental results, we obtain a set of instructive findings. First, geometric scaling causes a bias that has a linear relationship with the scale ratio. Second, no significant difference exists between the biases measured from normally and uniformly distributed scatterplots. Third, changing the point radius can correct the bias to a certain extent. These findings can be used to inspire the design decisions of scatterplots in various scenarios.",
                "AuthorNamesDeduped": "Yating Wei;Honghui Mei;Ying Zhao 0001;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen 0001",
                "AuthorNames": "Yating Wei;Honghui Mei;Ying Zhao;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen",
                "AuthorAffiliation": "The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/tvcg.2018.2865142;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.124;10.1109/vast.2010.5652460;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2006.163;10.1109/vast.2012.6400487;10.1109/tvcg.2015.2467671;10.1109/tvcg.2018.2864884;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.120;10.1109/tvcg.2013.153;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2007.70596;10.1109/tvcg.2017.2744138;10.1109/tvcg.2018.2864912;10.1109/tvcg.2018.2865266;10.1109/tvcg.2017.2744098;10.1109/tvcg.2006.184;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Evaluation,scatterplot,geometric scaling,bias,perceptual consistency",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 1007,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 623,
                "i": [
                    623
                ]
            }
        },
        {
            "name": "Bingru Lin",
            "value": 61,
            "numPapers": 39,
            "cluster": "1",
            "visible": 1,
            "index": 1169,
            "x": -339.7294684801048,
            "y": -39.16488537230078,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Evaluating Perceptual Bias During Geometric Scaling of Scatterplots",
                "DOI": "10.1109/tvcg.2019.2934208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934208",
                "FirstPage": 321,
                "LastPage": 331,
                "PaperType": "J",
                "Abstract": "Scatterplots are frequently scaled to fit display areas in multi-view and multi-device data analysis environments. A common method used for scaling is to enlarge or shrink the entire scatterplot together with the inside points synchronously and proportionally. This process is called geometric scaling. However, geometric scaling of scatterplots may cause a perceptual bias, that is, the perceived and physical values of visual features may be dissociated with respect to geometric scaling. For example, if a scatterplot is projected from a laptop to a large projector screen, then observers may feel that the scatterplot shown on the projector has fewer points than that viewed on the laptop. This paper presents an evaluation study on the perceptual bias of visual features in scatterplots caused by geometric scaling. The study focuses on three fundamental visual features (i.e., numerosity, correlation, and cluster separation) and three hypotheses that are formulated on the basis of our experience. We carefully design three controlled experiments by using well-prepared synthetic data and recruit participants to complete the experiments on the basis of their subjective experience. With a detailed analysis of the experimental results, we obtain a set of instructive findings. First, geometric scaling causes a bias that has a linear relationship with the scale ratio. Second, no significant difference exists between the biases measured from normally and uniformly distributed scatterplots. Third, changing the point radius can correct the bias to a certain extent. These findings can be used to inspire the design decisions of scatterplots in various scenarios.",
                "AuthorNamesDeduped": "Yating Wei;Honghui Mei;Ying Zhao 0001;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen 0001",
                "AuthorNames": "Yating Wei;Honghui Mei;Ying Zhao;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen",
                "AuthorAffiliation": "The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/tvcg.2018.2865142;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.124;10.1109/vast.2010.5652460;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2006.163;10.1109/vast.2012.6400487;10.1109/tvcg.2015.2467671;10.1109/tvcg.2018.2864884;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.120;10.1109/tvcg.2013.153;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2007.70596;10.1109/tvcg.2017.2744138;10.1109/tvcg.2018.2864912;10.1109/tvcg.2018.2865266;10.1109/tvcg.2017.2744098;10.1109/tvcg.2006.184;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Evaluation,scatterplot,geometric scaling,bias,perceptual consistency",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 1007,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 623,
                "i": [
                    623
                ]
            }
        },
        {
            "name": "Yuanzhe Hu",
            "value": 28,
            "numPapers": 27,
            "cluster": "3",
            "visible": 1,
            "index": 1170,
            "x": 277.0798218579433,
            "y": -200.69073800046297,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "InfoVis",
                "Year": 2020,
                "Title": "DRGraph: An Efficient Graph Layout Algorithm for Large-scale Graphs by Dimensionality Reduction",
                "DOI": "10.1109/tvcg.2020.3030447",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030447",
                "FirstPage": 1666,
                "LastPage": 1676,
                "PaperType": "J",
                "Abstract": "Efficient layout of large-scale graphs remains a challenging problem: the force-directed and dimensionality reduction-based methods suffer from high overhead for graph distance and gradient computation. In this paper, we present a new graph layout algorithm, called DRGraph, that enhances the nonlinear dimensionality reduction process with three schemes: approximating graph distances by means of a sparse distance matrix, estimating the gradient by using the negative sampling technique, and accelerating the optimization process through a multi-level layout scheme. DRGraph achieves a linear complexity for the computation and memory consumption, and scales up to large-scale graphs with millions of nodes. Experimental results and comparisons with state-of-the-art graph layout methods demonstrate that DRGraph can generate visually comparable layouts with a faster running time and a lower memory requirement.",
                "AuthorNamesDeduped": "Minfeng Zhu;Wei Chen 0001;Yuanzhe Hu;Yuxuan Hou;Liangjun Liu;Kaiyuan Zhang 0002",
                "AuthorNames": "Minfeng Zhu;Wei Chen;Yuanzhe Hu;Yuxuan Hou;Liangjun Liu;Kaiyuan Zhang",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University",
                "InternalReferences": "0.1109/tvcg.2013.151;10.1109/tvcg.2011.220;10.1109/tvcg.2015.2467451;10.1109/tvcg.2017.2743858;10.1109/tvcg.2019.2934396;10.1109/tvcg.2019.2934307;10.1109/tvcg.2019.2934798;10.1109/tvcg.2017.2745919;10.1109/tvcg.2017.2744878;10.1109/tvcg.2016.2598867;10.1109/tvcg.2015.2468151;10.1109/tvcg.2015.2467251;10.1109/tvcg.2012.238;10.1109/tvcg.2016.2598958",
                "AuthorKeywords": "graph visualization,graph layout,dimensionality reduction,force-directed layout",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 1387,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 387,
                "i": [
                    387
                ]
            }
        },
        {
            "name": "Harsh Shukla",
            "value": 0,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1171,
            "x": -68.77475126767791,
            "y": 335.2909685453353,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization",
                "DOI": "10.1109/tvcg.2019.2934402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934402",
                "FirstPage": 938,
                "LastPage": 948,
                "PaperType": "J",
                "Abstract": "Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.",
                "AuthorNamesDeduped": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorNames": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorAffiliation": "Northeastern University;Northeastern University;Brigham and Women's Hospital;Dana-Farber Cancer Institute;Brigham and Women's Hospital;Brigham and Women's Hospital;Brigham and Women's Hospital;Northeastern University;Northeastern University",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2011.193;10.1109/tvcg.2013.231;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346312;10.1109/tvcg.2017.2744278;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598472;10.1109/tvcg.2008.165;10.1109/tvcg.2014.2346276;10.1109/tvcg.2008.117",
                "AuthorKeywords": "Network Visualization,Spatial Context,Abstract Design,Flow Network,Medical Imaging,Cerebral Arteries",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 557,
                "i": [
                    557
                ]
            }
        },
        {
            "name": "Geoffrey S. Young",
            "value": 0,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1172,
            "x": -175.84843849256532,
            "y": -293.8151232998847,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization",
                "DOI": "10.1109/tvcg.2019.2934402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934402",
                "FirstPage": 938,
                "LastPage": 948,
                "PaperType": "J",
                "Abstract": "Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.",
                "AuthorNamesDeduped": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorNames": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorAffiliation": "Northeastern University;Northeastern University;Brigham and Women's Hospital;Dana-Farber Cancer Institute;Brigham and Women's Hospital;Brigham and Women's Hospital;Brigham and Women's Hospital;Northeastern University;Northeastern University",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2011.193;10.1109/tvcg.2013.231;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346312;10.1109/tvcg.2017.2744278;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598472;10.1109/tvcg.2008.165;10.1109/tvcg.2014.2346276;10.1109/tvcg.2008.117",
                "AuthorKeywords": "Network Visualization,Spatial Context,Abstract Design,Flow Network,Medical Imaging,Cerebral Arteries",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 557,
                "i": [
                    557
                ]
            }
        },
        {
            "name": "Lei Qin",
            "value": 0,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1173,
            "x": 328.2743294708501,
            "y": 97.90793946592768,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization",
                "DOI": "10.1109/tvcg.2019.2934402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934402",
                "FirstPage": 938,
                "LastPage": 948,
                "PaperType": "J",
                "Abstract": "Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.",
                "AuthorNamesDeduped": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorNames": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorAffiliation": "Northeastern University;Northeastern University;Brigham and Women's Hospital;Dana-Farber Cancer Institute;Brigham and Women's Hospital;Brigham and Women's Hospital;Brigham and Women's Hospital;Northeastern University;Northeastern University",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2011.193;10.1109/tvcg.2013.231;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346312;10.1109/tvcg.2017.2744278;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598472;10.1109/tvcg.2008.165;10.1109/tvcg.2014.2346276;10.1109/tvcg.2008.117",
                "AuthorKeywords": "Network Visualization,Spatial Context,Abstract Design,Flow Network,Medical Imaging,Cerebral Arteries",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 557,
                "i": [
                    557
                ]
            }
        },
        {
            "name": "Amir A. Zamani",
            "value": 0,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1174,
            "x": -308.32642340344285,
            "y": 149.61556279759432,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization",
                "DOI": "10.1109/tvcg.2019.2934402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934402",
                "FirstPage": 938,
                "LastPage": 948,
                "PaperType": "J",
                "Abstract": "Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.",
                "AuthorNamesDeduped": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorNames": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorAffiliation": "Northeastern University;Northeastern University;Brigham and Women's Hospital;Dana-Farber Cancer Institute;Brigham and Women's Hospital;Brigham and Women's Hospital;Brigham and Women's Hospital;Northeastern University;Northeastern University",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2011.193;10.1109/tvcg.2013.231;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346312;10.1109/tvcg.2017.2744278;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598472;10.1109/tvcg.2008.165;10.1109/tvcg.2014.2346276;10.1109/tvcg.2008.117",
                "AuthorKeywords": "Network Visualization,Spatial Context,Abstract Design,Flow Network,Medical Imaging,Cerebral Arteries",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 557,
                "i": [
                    557
                ]
            }
        },
        {
            "name": "Liangge Hsu",
            "value": 0,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1175,
            "x": 126.34019871952498,
            "y": -318.72896665899526,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization",
                "DOI": "10.1109/tvcg.2019.2934402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934402",
                "FirstPage": 938,
                "LastPage": 948,
                "PaperType": "J",
                "Abstract": "Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.",
                "AuthorNamesDeduped": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorNames": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorAffiliation": "Northeastern University;Northeastern University;Brigham and Women's Hospital;Dana-Farber Cancer Institute;Brigham and Women's Hospital;Brigham and Women's Hospital;Brigham and Women's Hospital;Northeastern University;Northeastern University",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2011.193;10.1109/tvcg.2013.231;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346312;10.1109/tvcg.2017.2744278;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598472;10.1109/tvcg.2008.165;10.1109/tvcg.2014.2346276;10.1109/tvcg.2008.117",
                "AuthorKeywords": "Network Visualization,Spatial Context,Abstract Design,Flow Network,Medical Imaging,Cerebral Arteries",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 557,
                "i": [
                    557
                ]
            }
        },
        {
            "name": "Raymond Huang",
            "value": 0,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1176,
            "x": 122.19093378026686,
            "y": 320.49863603751334,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization",
                "DOI": "10.1109/tvcg.2019.2934402",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934402",
                "FirstPage": 938,
                "LastPage": 948,
                "PaperType": "J",
                "Abstract": "Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.",
                "AuthorNamesDeduped": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorNames": "Aditeya Pandey;Harsh Shukla;Geoffrey S. Young;Lei Qin;Amir A. Zamani;Liangge Hsu;Raymond Huang;Cody Dunne;Michelle A. Borkin",
                "AuthorAffiliation": "Northeastern University;Northeastern University;Brigham and Women's Hospital;Dana-Farber Cancer Institute;Brigham and Women's Hospital;Brigham and Women's Hospital;Brigham and Women's Hospital;Northeastern University;Northeastern University",
                "InternalReferences": "0.1109/tvcg.2011.192;10.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2011.193;10.1109/tvcg.2013.231;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346312;10.1109/tvcg.2017.2744278;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/tvcg.2016.2598472;10.1109/tvcg.2008.165;10.1109/tvcg.2014.2346276;10.1109/tvcg.2008.117",
                "AuthorKeywords": "Network Visualization,Spatial Context,Abstract Design,Flow Network,Medical Imaging,Cerebral Arteries",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 68,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 557,
                "i": [
                    557
                ]
            }
        },
        {
            "name": "Ozan Ersoy",
            "value": 181,
            "numPapers": 10,
            "cluster": "2",
            "visible": 1,
            "index": 1177,
            "x": -306.72378102836956,
            "y": -153.85227379424973,
            "vy": 0,
            "vx": 0,
            "r": 1.208405296488198,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "MoleView: An Attribute and Structure-Based Semantic Lens for Large Element-Based Plots",
                "DOI": "10.1109/tvcg.2011.223",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.223",
                "FirstPage": 2600,
                "LastPage": 2609,
                "PaperType": "J",
                "Abstract": "We present MoleView, a novel technique for interactive exploration of multivariate relational data. Given a spatial embedding of the data, in terms of a scatter plot or graph layout, we propose a semantic lens which selects a specific spatial and attribute-related data range. The lens keeps the selected data in focus unchanged and continuously deforms the data out of the selection range in order to maintain the context around the focus. Specific deformations include distance-based repulsion of scatter plot points, deforming straight-line node-link graph drawings, and as varying the simplification degree of bundled edge graph layouts. Using a brushing-based technique, we further show the applicability of our semantic lens for scenarios requiring a complex selection of the zones of interest. Our technique is simple to implement and provides real-time performance on large datasets. We demonstrate our technique with actual data from air and road traffic control, medical imaging, and software comprehension applications.",
                "AuthorNamesDeduped": "Christophe Hurter;Alexandru C. Telea;Ozan Ersoy",
                "AuthorNames": "Christophe Hurter;Alexandru Telea;Ozan Ersoy",
                "AuthorAffiliation": "DGAC/DTI Research and Development, ENAC, University of Toulouse, France;University of Groningen, Netherlands;University of Groningen, Netherlands",
                "InternalReferences": "0.1109/tvcg.2011.233;10.1109/tvcg.2008.135;10.1109/tvcg.2006.147;10.1109/infvis.2005.1532150;10.1109/infvis.2004.66;10.1109/infvis.2003.1249008",
                "AuthorKeywords": "Semantic lenses, magic lenses, graph bundling, attribute filtering",
                "AminerCitationCount": 98,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 825,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1562,
                "i": [
                    1562
                ]
            }
        },
        {
            "name": "Gabriel Cantareiro",
            "value": 116,
            "numPapers": 5,
            "cluster": "3",
            "visible": 1,
            "index": 1178,
            "x": 330.2344256739034,
            "y": -93.78285610828436,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Skeleton-Based Edge Bundling for Graph Visualization",
                "DOI": "10.1109/tvcg.2011.233",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.233",
                "FirstPage": 2364,
                "LastPage": 2373,
                "PaperType": "J",
                "Abstract": "In this paper, we present a novel approach for constructing bundled layouts of general graphs. As layout cues for bundles, we use medial axes, or skeletons, of edges which are similar in terms of position information. We combine edge clustering, distance fields, and 2D skeletonization to construct progressively bundled layouts for general graphs by iteratively attracting edges towards the centerlines of level sets of their distance fields. Apart from clustering, our entire pipeline is image-based with an efficient implementation in graphics hardware. Besides speed and implementation simplicity, our method allows explicit control of the emphasis on structure of the bundled layout, i.e. the creation of strongly branching (organic-like) or smooth bundles. We demonstrate our method on several large real-world graphs.",
                "AuthorNamesDeduped": "Ozan Ersoy;Christophe Hurter;Fernando Vieira Paulovich;Gabriel Cantareiro;Alexandru C. Telea",
                "AuthorNames": "Ozan Ersoy;Christophe Hurter;Fernando Paulovich;Gabriel Cantareiro;Alex Telea",
                "AuthorAffiliation": "University of Groningen, Netherlands;DGAC-DTI Research and Development, ENAC, University of Toulouse, France;University of Sao Paulo, Brazil;Universidade de Sao Paulo, Sao Paulo, São Paulo, BR;University of Groningen, Netherlands",
                "InternalReferences": "0.1109/tvcg.2008.135;10.1109/tvcg.2006.147;10.1109/tvcg.2007.70535;10.1109/tvcg.2006.120;10.1109/infvis.2005.1532150;10.1109/infvis.2003.1249030",
                "AuthorKeywords": "Graph layouts, edge bundles, image-based information visualization",
                "AminerCitationCount": 218,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 2278,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1544,
                "i": [
                    1544
                ]
            }
        },
        {
            "name": "Xiaowei Chu",
            "value": 10,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1179,
            "x": -180.23159678753547,
            "y": 292.3466632602726,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "ShapeWordle: Tailoring Wordles using Shape-aware Archimedean Spirals",
                "DOI": "10.1109/tvcg.2019.2934783",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934783",
                "FirstPage": 991,
                "LastPage": 1000,
                "PaperType": "J",
                "Abstract": "We present a new technique to enable the creation of shape-bounded Wordles, we call ShapeWordle, in which we fit words to form a given shape. To guide word placement within a shape, we extend the traditional Archimedean spirals to be shape-aware by formulating the spirals in a differential form using the distance field of the shape. To handle non-convex shapes, we introduce a multi-centric Wordle layout method that segments the shape into parts for our shape-aware spirals to adaptively fill the space and generate word placements. In addition, we offer a set of editing interactions to facilitate the creation of semantically-meaningful Wordles. Lastly, we present three evaluations: a comprehensive comparison of our results against the state-of-the-art technique (WordArt), case studies with 14 users, and a gallery to showcase the coverage of our technique.",
                "AuthorNamesDeduped": "Yunhai Wang;Xiaowei Chu;Kaiyi Zhang;Chen Bao;Xiaotong Li;Jian Zhang 0070;Chi-Wing Fu;Christophe Hurter;Bongshin Lee;Oliver Deussen",
                "AuthorNames": "Yunhai Wang;Xiaowei Chu;Kaiyi Zhang;Chen Bao;Xiaotong Li;Jian Zhang;Chi-Wing Fu;Christophe Hurter;Oliver Deussen;Bongshin Lee",
                "AuthorAffiliation": "Shandong University;Shandong University;Shandong University;Shandong University;Shandong University;CNIC, CAS;Chinese University of Hong Kong;ENAC, France;Konstanz University, Germany and Shenzhen VisuCA Key Lab, SIAT, China;Microsoft Research",
                "InternalReferences": "0.1109/vast.2007.4389007;10.1109/vast.2009.5333443;10.1109/tvcg.2011.233;10.1109/tvcg.2017.2746018;10.1109/tvcg.2011.223;10.1109/tvcg.2010.175;10.1109/tvcg.2010.194;10.1109/infvis.2004.56;10.1109/tvcg.2009.171;10.1109/tvcg.2017.2745859;10.1109/infvis.2001.963273",
                "AuthorKeywords": "Wordle,Archimedean spiral,shape",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 743,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 562,
                "i": [
                    562
                ]
            }
        },
        {
            "name": "Chen Bao",
            "value": 60,
            "numPapers": 27,
            "cluster": "1",
            "visible": 1,
            "index": 1180,
            "x": -64.60753355054594,
            "y": -337.45498456611233,
            "vy": 0,
            "vx": 0,
            "r": 1.0690846286701208,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Optimizing Color Assignment for Perception of Class Separability in Multiclass Scatterplots",
                "DOI": "10.1109/tvcg.2018.2864912",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864912",
                "FirstPage": 820,
                "LastPage": 829,
                "PaperType": "J",
                "Abstract": "Appropriate choice of colors significantly aids viewers in understanding the structures in multiclass scatterplots and becomes more important with a growing number of data points and groups. An appropriate color mapping is also an important parameter for the creation of an aesthetically pleasing scatterplot. Currently, users of visualization software routinely rely on color mappings that have been pre-defined by the software. A default color mapping, however, cannot ensure an optimal perceptual separability between groups, and sometimes may even lead to a misinterpretation of the data. In this paper, we present an effective approach for color assignment based on a set of given colors that is designed to optimize the perception of scatterplots. Our approach takes into account the spatial relationships, density, degree of overlap between point clusters, and also the background color. For this purpose, we use a genetic algorithm that is able to efficiently find good color assignments. We implemented an interactive color assignment system with three extensions of the basic method that incorporates top K suggestions, user-defined color subsets, and classes of interest for the optimization. To demonstrate the effectiveness of our assignment technique, we conducted a numerical study and a controlled user study to compare our approach with default color assignments; our findings were verified by two expert studies. The results show that our approach is able to support users in distinguishing cluster numbers faster and more precisely than default assignment methods.",
                "AuthorNamesDeduped": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorNames": "Yunhai Wang;Xin Chen;Tong Ge;Chen Bao;Michael Sedlmair;Chi-Wing Fu;Oliver Deussen;Baoquan Chen",
                "AuthorAffiliation": "Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Shandong University, Jinan, Shandong, CN;Universitat Stuttgart, Stuttgart, Baden-Württemberg, DE;Chinese University of Hong Kong, New Territories, HK;Universitat Konstanz, Konstanz, Baden-Württemberg, DE;Peking University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/tvcg.2016.2599214;10.1109/tvcg.2013.183;10.1109/tvcg.2016.2598918;10.1109/visual.1996.568118;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.153;10.1109/tvcg.2015.2467471;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2008.118",
                "AuthorKeywords": "Color perception,visual design,scatterplots",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1169,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 667,
                "i": [
                    667
                ]
            }
        },
        {
            "name": "R. Jordan Crouser",
            "value": 31,
            "numPapers": 37,
            "cluster": "5",
            "visible": 1,
            "index": 1181,
            "x": 275.70385140230405,
            "y": 205.2739299617373,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Role of Latency and Task Complexity in Predicting Visual Search Behavior",
                "DOI": "10.1109/tvcg.2019.2934556",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934556",
                "FirstPage": 1246,
                "LastPage": 1255,
                "PaperType": "J",
                "Abstract": "Latency in a visualization system is widely believed to affect user behavior in measurable ways, such as requiring the user to wait for the visualization system to respond, leading to interruption of the analytic flow. While this effect is frequently observed and widely accepted, precisely how latency affects different analysis scenarios is less well understood. In this paper, we examine the role of latency in the context of visual search, an essential task in data foraging and exploration using visualization. We conduct a series of studies on Amazon Mechanical Turk and find that under certain conditions, latency is a statistically significant predictor of visual search behavior, which is consistent with previous studies. However, our results also suggest that task type, task complexity, and other factors can modulate the effect of latency, in some cases rendering latency statistically insignificant in predicting user behavior. This suggests a more nuanced view of the role of latency than previously reported. Building on these results and the findings of prior studies, we propose design guidelines for measuring and interpreting the effects of latency when evaluating performance on visual search tasks.",
                "AuthorNamesDeduped": "Leilani Battle;R. Jordan Crouser;Audace Nakeshimana;Ananda Montoly;Remco Chang;Michael Stonebraker",
                "AuthorNames": "Leilani Battle;R. Jordan Crouser;Audace Nakeshimana;Ananda Montoly;Remco Chang;Michael Stonebraker",
                "AuthorAffiliation": "Department of Computer Science, University of Maryland;Department of Computer Science, Smith College;Electrical Engineering & Computer Science Department, MIT;Department of Computer Science, Smith College;Department of Computer Science, Tufts University;Electrical Engineering & Computer Science Department, MIT",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/tvcg.2014.2346979;10.1109/tvcg.2015.2467671;10.1109/tvcg.2013.179;10.1109/tvcg.2014.2346452",
                "AuthorKeywords": "Visual search,latency,system response time,SRT",
                "AminerCitationCount": 11,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 587,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 566,
                "i": [
                    566
                ]
            }
        },
        {
            "name": "Brittany Kondo",
            "value": 51,
            "numPapers": 10,
            "cluster": "5",
            "visible": 1,
            "index": 1182,
            "x": -342.1006689847605,
            "y": 34.88742295124912,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "DimpVis: Exploring Time-varying Information Visualizations by Direct Manipulation",
                "DOI": "10.1109/tvcg.2014.2346250",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346250",
                "FirstPage": 2003,
                "LastPage": 2012,
                "PaperType": "J",
                "Abstract": "We introduce a new direct manipulation technique, DimpVis, for interacting with visual items in information visualizations to enable exploration of the time dimension. DimpVis is guided by visual hint paths which indicate how a selected data item changes through the time dimension in a visualization. Temporal navigation is controlled by manipulating any data item along its hint path. All other items are updated to reflect the new time. We demonstrate how the DimpVis technique can be designed to directly manipulate position, colour, and size in familiar visualizations such as bar charts and scatter plots, as a means for temporal navigation. We present results from a comparative evaluation, showing that the DimpVis technique was subjectively preferred and quantitatively competitive with the traditional time slider, and significantly faster than small multiples for a variety of tasks.",
                "AuthorNamesDeduped": "Brittany Kondo;Christopher Collins 0001",
                "AuthorNames": "Brittany Kondo;Christopher Collins",
                "AuthorAffiliation": "University of Ontario Institute of Technology;University of Ontario Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2013.147;10.1109/tvcg.2012.204;10.1109/tvcg.2012.260;10.1109/tvcg.2008.175;10.1109/vast.2012.6400486;10.1109/tvcg.2012.265;10.1109/tvcg.2013.149;10.1109/infvis.2005.1532136;10.1109/tvcg.2011.185;10.1109/tvcg.2008.125;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Time navigation, direct manipulation, information visualization",
                "AminerCitationCount": 71,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1345,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1189,
                "i": [
                    1189
                ]
            }
        },
        {
            "name": "Ko-Chih Wang",
            "value": 61,
            "numPapers": 32,
            "cluster": "6",
            "visible": 1,
            "index": 1183,
            "x": 228.7849473923729,
            "y": -256.91914651631,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations",
                "DOI": "10.1109/tvcg.2019.2934312",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934312",
                "FirstPage": 23,
                "LastPage": 33,
                "PaperType": "J",
                "Abstract": "We propose InSituNet, a deep learning based surrogate model to support parameter space exploration for ensemble simulations that are visualized in situ. In situ visualization, generating visualizations at simulation time, is becoming prevalent in handling large-scale simulations because of the I/O and storage constraints. However, in situ visualization approaches limit the flexibility of post-hoc exploration because the raw simulation data are no longer available. Although multiple image-based approaches have been proposed to mitigate this limitation, those approaches lack the ability to explore the simulation parameters. Our approach allows flexible exploration of parameter space for large-scale ensemble simulations by taking advantage of the recent advances in deep learning. Specifically, we design InSituNet as a convolutional regression model to learn the mapping from the simulation and visualization parameters to the visualization results. With the trained model, users can generate new images for different simulation parameters under various visualization settings, which enables in-depth analysis of the underlying ensemble simulations. We demonstrate the effectiveness of InSituNet in combustion, cosmology, and ocean simulations through quantitative and qualitative evaluations.",
                "AuthorNamesDeduped": "Wenbin He;Junpeng Wang;Hanqi Guo 0001;Ko-Chih Wang;Han-Wei Shen;Mukund Raj;Youssef S. G. Nashed;Tom Peterka",
                "AuthorNames": "Wenbin He;Junpeng Wang;Hanqi Guo;Ko-Chih Wang;Han-Wei Shen;Mukund Raj;Youssef S. G. Nashed;Tom Peterka",
                "AuthorAffiliation": "Department of Computer Science and Engineering, The Ohio State University;Department of Computer Science and Engineering, The Ohio State University;Mathematics and Computer Science Division, Argonne National Laboratory;Department of Computer Science and Engineering, The Ohio State University;Department of Computer Science and Engineering, The Ohio State University;Mathematics and Computer Science Division, Argonne National Laboratory;Mathematics and Computer Science Division, Argonne National Laboratory;Mathematics and Computer Science Division, Argonne National Laboratory",
                "InternalReferences": "0.1109/tvcg.2016.2598869;10.1109/scivis.2015.7429487;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/tvcg.2016.2598604;10.1109/tvcg.2009.155;10.1109/tvcg.2018.2865051;10.1109/tvcg.2014.2346755;10.1109/tvcg.2014.2346321;10.1109/vast.2015.7347635;10.1109/tvcg.2010.215;10.1109/tvcg.2011.248;10.1109/tvcg.2016.2598830;10.1109/tvcg.2018.2865026",
                "AuthorKeywords": "In situ visualization,ensemble visualization,parameter space exploration,deep learning,image synthesis",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 1702,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 571,
                "i": [
                    571
                ]
            }
        },
        {
            "name": "Soumya Dutta",
            "value": 84,
            "numPapers": 52,
            "cluster": "6",
            "visible": 1,
            "index": 1184,
            "x": 4.849537416104985,
            "y": 344.1314893857431,
            "vy": 0,
            "vx": 0,
            "r": 1.0967184801381693,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations",
                "DOI": "10.1109/tvcg.2016.2598604",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598604",
                "FirstPage": 811,
                "LastPage": 820,
                "PaperType": "J",
                "Abstract": "Study of flow instability in turbine engine compressors is crucial to understand the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed in NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive simulation data prohibits post-hoc analysis in both storage and I/O time. To address these issues and allow the expert to perform scalable stall analysis, we have designed an in situ distribution guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, which reveals the spatiotemporal trends of rotating stall for the expert to conceive new hypotheses. Furthermore, the verification of the hypotheses and exploratory visualization using the summarized data are realized using probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis.",
                "AuthorNamesDeduped": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorNames": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorAffiliation": "GRAVITY research group, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University",
                "InternalReferences": "0.1109/tvcg.2008.140;10.1109/tvcg.2013.152;10.1109/tvcg.2015.2467436;10.1109/tvcg.2007.70615;10.1109/tvcg.2015.2467952;10.1109/tvcg.2015.2467958;10.1109/tvcg.2015.2467411",
                "AuthorKeywords": "In situ analysis;rotating stall analysis;Gaussian mixture model;incremental distribution modeling;feature analysis;high performance computing;collaborative development",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1444,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 930,
                "i": [
                    930
                ]
            }
        },
        {
            "name": "Chun-Ming Chen",
            "value": 38,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1185,
            "x": -236.13299215644776,
            "y": -250.58174318023046,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations",
                "DOI": "10.1109/tvcg.2016.2598604",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598604",
                "FirstPage": 811,
                "LastPage": 820,
                "PaperType": "J",
                "Abstract": "Study of flow instability in turbine engine compressors is crucial to understand the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed in NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive simulation data prohibits post-hoc analysis in both storage and I/O time. To address these issues and allow the expert to perform scalable stall analysis, we have designed an in situ distribution guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, which reveals the spatiotemporal trends of rotating stall for the expert to conceive new hypotheses. Furthermore, the verification of the hypotheses and exploratory visualization using the summarized data are realized using probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis.",
                "AuthorNamesDeduped": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorNames": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorAffiliation": "GRAVITY research group, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University",
                "InternalReferences": "0.1109/tvcg.2008.140;10.1109/tvcg.2013.152;10.1109/tvcg.2015.2467436;10.1109/tvcg.2007.70615;10.1109/tvcg.2015.2467952;10.1109/tvcg.2015.2467958;10.1109/tvcg.2015.2467411",
                "AuthorKeywords": "In situ analysis;rotating stall analysis;Gaussian mixture model;incremental distribution modeling;feature analysis;high performance computing;collaborative development",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1444,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 930,
                "i": [
                    930
                ]
            }
        },
        {
            "name": "Gregory Heinlein",
            "value": 38,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1186,
            "x": 343.5274505009134,
            "y": 25.276288341892045,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations",
                "DOI": "10.1109/tvcg.2016.2598604",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598604",
                "FirstPage": 811,
                "LastPage": 820,
                "PaperType": "J",
                "Abstract": "Study of flow instability in turbine engine compressors is crucial to understand the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed in NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive simulation data prohibits post-hoc analysis in both storage and I/O time. To address these issues and allow the expert to perform scalable stall analysis, we have designed an in situ distribution guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, which reveals the spatiotemporal trends of rotating stall for the expert to conceive new hypotheses. Furthermore, the verification of the hypotheses and exploratory visualization using the summarized data are realized using probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis.",
                "AuthorNamesDeduped": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorNames": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorAffiliation": "GRAVITY research group, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University",
                "InternalReferences": "0.1109/tvcg.2008.140;10.1109/tvcg.2013.152;10.1109/tvcg.2015.2467436;10.1109/tvcg.2007.70615;10.1109/tvcg.2015.2467952;10.1109/tvcg.2015.2467958;10.1109/tvcg.2015.2467411",
                "AuthorKeywords": "In situ analysis;rotating stall analysis;Gaussian mixture model;incremental distribution modeling;feature analysis;high performance computing;collaborative development",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1444,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 930,
                "i": [
                    930
                ]
            }
        },
        {
            "name": "Jen-Ping Chen",
            "value": 44,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 1187,
            "x": -270.494254519992,
            "y": 213.50142451907385,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations",
                "DOI": "10.1109/tvcg.2016.2598604",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598604",
                "FirstPage": 811,
                "LastPage": 820,
                "PaperType": "J",
                "Abstract": "Study of flow instability in turbine engine compressors is crucial to understand the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed in NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive simulation data prohibits post-hoc analysis in both storage and I/O time. To address these issues and allow the expert to perform scalable stall analysis, we have designed an in situ distribution guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, which reveals the spatiotemporal trends of rotating stall for the expert to conceive new hypotheses. Furthermore, the verification of the hypotheses and exploratory visualization using the summarized data are realized using probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis.",
                "AuthorNamesDeduped": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorNames": "Soumya Dutta;Chun-Ming Chen;Gregory Heinlein;Han-Wei Shen;Jen-Ping Chen",
                "AuthorAffiliation": "GRAVITY research group, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University;GRAVITY research group, The Ohio State University;The Department of Mechanical and Aerospace Engineering, The Ohio State University",
                "InternalReferences": "0.1109/tvcg.2008.140;10.1109/tvcg.2013.152;10.1109/tvcg.2015.2467436;10.1109/tvcg.2007.70615;10.1109/tvcg.2015.2467952;10.1109/tvcg.2015.2467958;10.1109/tvcg.2015.2467411",
                "AuthorKeywords": "In situ analysis;rotating stall analysis;Gaussian mixture model;incremental distribution modeling;feature analysis;high performance computing;collaborative development",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1444,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 930,
                "i": [
                    930
                ]
            }
        },
        {
            "name": "Tom Peterka",
            "value": 62,
            "numPapers": 35,
            "cluster": "6",
            "visible": 1,
            "index": 1188,
            "x": 55.25915723731735,
            "y": -340.28873848751067,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Adaptively Placed Multi-Grid Scene Representation Networks for Large-Scale Data Visualization",
                "DOI": "10.1109/tvcg.2023.3327194",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3327194",
                "FirstPage": 965,
                "LastPage": 974,
                "PaperType": "J",
                "Abstract": "Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring expensive octree refining, pruning, and traversal like previous adaptive models. In our domain decomposition approach for representing large-scale data, we train an set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding overhead necessary for an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for realtime neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.",
                "AuthorNamesDeduped": "Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo 0001;Tom Peterka",
                "AuthorNames": "Skylar W. Wurster;Tianyu Xiong;Han-Wei Shen;Hanqi Guo;Tom Peterka",
                "AuthorAffiliation": "The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA;The Ohio State University, USA",
                "InternalReferences": "10.1109/tvcg.2012.274",
                "AuthorKeywords": "Scene representation network,deep learning,scientific visualization,volume rendering",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 161,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 20,
                "i": [
                    20
                ]
            }
        },
        {
            "name": "Irene Baeza Rojo",
            "value": 12,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1189,
            "x": 189.19490111126134,
            "y": 288.3665885526616,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Vector Field Topology of Time-Dependent Flows in a Steady Reference Frame",
                "DOI": "10.1109/tvcg.2019.2934375",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934375",
                "FirstPage": 280,
                "LastPage": 290,
                "PaperType": "J",
                "Abstract": "The topological analysis of unsteady vector fields remains to this day one of the largest challenges in flow visualization. We build up on recent work on vortex extraction to define a time-dependent vector field topology for 2D and 3D flows. In our work, we split the vector field into two components: a vector field in which the flow becomes steady, and the remaining ambient flow that describes the motion of topological elements (such as sinks, sources and saddles) and feature curves (vortex corelines and bifurcation lines). To this end, we expand on recent local optimization approaches by modeling spatially-varying deformations through displacement transformations from continuum mechanics. We compare and discuss the relationships with existing local and integration-based topology extraction methods, showing for instance that separatrices seeded from saddles in the optimal frame align with the integration-based streakline vector field topology. In contrast to the streakline-based approach, our method gives a complete picture of the topology for every time slice, including the steps near the temporal domain boundaries. With our work it now becomes possible to extract topological information even when only few time slices are available. We demonstrate the method in several analytical and numerically-simulated flows and discuss practical aspects, limitations and opportunities for future work.",
                "AuthorNamesDeduped": "Irene Baeza Rojo;Tobias Günther",
                "AuthorNames": "Irene Baeza Rojo;Tobias Günther",
                "AuthorAffiliation": "Computer Graphics Laboratory, ETH Zürich;Computer Graphics Laboratory, ETH Zürich",
                "InternalReferences": "0.1109/visual.1999.809907;10.1109/visual.1991.175773;10.1109/tvcg.2015.2467200;10.1109/tvcg.2018.2864828;10.1109/tvcg.2018.2864839;10.1109/visual.1999.809896;10.1109/visual.1998.745296;10.1109/visual.2005.1532851;10.1109/visual.2004.99;10.1109/visual.2000.885716;10.1109/tvcg.2007.70545;10.1109/tvcg.2007.70557;10.1109/tvcg.2018.2864813",
                "AuthorKeywords": "Scientific visualization,unsteady flow,vector field topology,reference frame optimization",
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 803,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 573,
                "i": [
                    573
                ]
            }
        },
        {
            "name": "Heike Leitte",
            "value": 49,
            "numPapers": 32,
            "cluster": "11",
            "visible": 1,
            "index": 1190,
            "x": -334.4357534428685,
            "y": -84.86888015698621,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Dynamic Nested Tracking Graphs",
                "DOI": "10.1109/tvcg.2019.2934368",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934368",
                "FirstPage": 249,
                "LastPage": 258,
                "PaperType": "J",
                "Abstract": "This work describes an approach for the interactive visual analysis of large-scale simulations, where numerous superlevel set components and their evolution are of primary interest. The approach first derives, at simulation runtime, a specialized Cinema database that consists of images of component groups, and topological abstractions. This database is processed by a novel graph operation-based nested tracking graph algorithm (GO-NTG) that dynamically computes NTGs for component groups based on size, overlap, persistence, and level thresholds. The resulting NTGs are in turn used in a feature-centered visual analytics framework to query specific database elements and update feature parameters, facilitating flexible post hoc analysis.",
                "AuthorNamesDeduped": "Jonas Lukasczyk;Christoph Garth;Gunther H. Weber;Tim Biedert;Ross Maciejewski;Heike Leitte",
                "AuthorNames": "Jonas Lukasczyk;Christoph Garth;Gunther H. Weber;Tim Biedert;Ross Maciejewski;Heike Leitte",
                "AuthorAffiliation": "Technische Universität Kaiserslautern;Technische Universität Kaiserslautern;Lawrence Berkeley National Laboratory, University of California, Davis;NVIDIA Corporation;Arizona State University;Technische Universität Kaiserslautern",
                "InternalReferences": "0.1109/tvcg.2018.2865265;10.1109/visual.1998.745288;10.1109/tvcg.2012.228",
                "AuthorKeywords": "Topological Data Analysis,Nested Tracking Graphs,Image Databases,Feature Tracking,Post Hoc Visual Analytics",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 643,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 577,
                "i": [
                    577
                ]
            }
        },
        {
            "name": "Joseph Budin",
            "value": 17,
            "numPapers": 22,
            "cluster": "11",
            "visible": 1,
            "index": 1191,
            "x": 304.0582426498892,
            "y": -163.39701673121536,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Progressive Wasserstein Barycenters of Persistence Diagrams",
                "DOI": "10.1109/tvcg.2019.2934256",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934256",
                "FirstPage": 151,
                "LastPage": 161,
                "PaperType": "J",
                "Abstract": "This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram which is representative of the set, and which visually conveys the number, data ranges and saliences of the main features of interest found in the set. Such representative diagrams are obtained by computing explicitly the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12], [51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm, which progressively approximates the barycenter by iteratively increasing the computation accuracy as well as the number of persistent features in the output diagram. Such a progressivity drastically improves convergence in practice and allows to design an interruptible algorithm, capable of respecting computation time constraints. This enables the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering where we revisit the $k$-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagram. Extensive experiments on synthetic and real-life data sets report that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications, and quantitatively comparable to previous techniques, while offering an order of magnitude speedup when run until convergence (without time constraint). Our algorithm can be trivially parallelized to provide additional speedups in practice on standard workstations. We provide a lightweight C++ implementation of our approach that can be used to reproduce our results.",
                "AuthorNamesDeduped": "Jules Vidal;Joseph Budin;Julien Tierny",
                "AuthorNames": "Jules Vidal;Joseph Budin;Julien Tierny",
                "AuthorAffiliation": "Sorbonne Université, CNRS (LIP6);Sorbonne Université, CNRS (LIP6);Sorbonne Université, CNRS (LIP6)",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2018.2864505;10.1109/tvcg.2015.2467958;10.1109/tvcg.2018.2864432;10.1109/tvcg.2015.2467204;10.1109/tvcg.2014.2346403;10.1109/tvcg.2018.2864848;10.1109/tvcg.2008.110;10.1109/tvcg.2015.2467432;10.1109/tvcg.2013.141;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346455;10.1109/tvcg.2018.2864768;10.1109/tvcg.2010.181;10.1109/tvcg.2012.249;10.1109/tvcg.2013.148;10.1109/tvcg.2014.2346332;10.1109/tvcg.2009.163;10.1109/tvcg.2013.143;10.1109/tvcg.2017.2743938;10.1109/tvcg.2007.70603;10.1109/tvcg.2014.2346434",
                "AuthorKeywords": "Topological data analysis,scalar data,ensemble data",
                "AminerCitationCount": 32,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 96,
                "DownloadsXplore": 445,
                "Award": "HM",
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 578,
                "i": [
                    578
                ]
            }
        },
        {
            "name": "Fariba Khan",
            "value": 11,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 1192,
            "x": -113.87774383583691,
            "y": 326.00898677622297,
            "vy": 0,
            "vx": 0,
            "r": 1.0126655152561888,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "Multi-Scale Topological Analysis of Asymmetric Tensor Fields on Surfaces",
                "DOI": "10.1109/tvcg.2019.2934314",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934314",
                "FirstPage": 270,
                "LastPage": 279,
                "PaperType": "J",
                "Abstract": "Asymmetric tensor fields have found applications in many science and engineering domains, such as fluid dynamics. Recent advances in the visualization and analysis of 2D asymmetric tensor fields focus on pointwise analysis of the tensor field and effective visualization metaphors such as colors, glyphs, and hyperstreamlines. In this paper, we provide a novel multi-scale topological analysis framework for asymmetric tensor fields on surfaces. Our multi-scale framework is based on the notions of eigenvalue and eigenvector graphs. At the core of our framework are the identification of atomic operations that modify the graphs and the scale definition that guides the order in which the graphs are simplified to enable clarity and focus for the visualization of topological analysis on data of different sizes. We also provide efficient algorithms to realize these operations. Furthermore, we provide physical interpretation of these graphs. To demonstrate the utility of our system, we apply our multi-scale analysis to data in computational fluid dynamics.",
                "AuthorNamesDeduped": "Fariba Khan;Lawrence Roy;Eugene Zhang;Botong Qu;Shih-Hsuan Hung;Harry Yeh;Robert S. Laramee;Yue Zhang 0009",
                "AuthorNames": "Fariba Khan;Lawrence Roy;Eugene Zhang;Botong Qu;Shih-Hsuan Hung;Harry Yeh;Robert S. Laramee;Yue Zhang",
                "AuthorAffiliation": "Oregon State University;Oregon State University;Oregon State University;Oregon State University;Oregon State University;Oregon State University;Swansea University;Oregon State University",
                "InternalReferences": "0.1109/visual.1994.346326;10.1109/visual.1998.745312;10.1109/tvcg.2016.2598998;10.1109/tvcg.2009.126;10.1109/visual.2005.1532850;10.1109/visual.2004.59;10.1109/visual.2004.59;10.1109/tvcg.2011.170;10.1109/tvcg.2010.199;10.1109/visual.2001.964507;10.1109/visual.2000.885716;10.1109/visual.2002.1183784;10.1109/visual.2005.1532770",
                "AuthorKeywords": "Tensor field visualization,tensor field topology,2D asymmetric tensor fields,2D asymmetric tensor field topology,eigenvalue graphs,eigenvector graphs",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 482,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 589,
                "i": [
                    589
                ]
            }
        },
        {
            "name": "Jürgen Schneider",
            "value": 86,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1193,
            "x": -136.30311643429772,
            "y": -317.4452085798403,
            "vy": 0,
            "vx": 0,
            "r": 1.0990213010938399,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "Visual analysis and exploration of fluid flow in a cooling jacket",
                "DOI": "10.1109/visual.2005.1532850",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532850",
                "FirstPage": 623,
                "LastPage": 630,
                "PaperType": "C",
                "Abstract": "We present a visual analysis and exploration of fluid flow through a cooling jacket. Engineers invest a large amount of time and serious effort to optimize the flow through this engine component because of its important role in transferring heat away from the engine block. In this study we examine the design goals that engineers apply in order to construct an ideal-as-possible cooling jacket geometry and use a broad range of visualization tools in order to analyze, explore, and present the results. We systematically employ direct, geometric, and texture-based flow visualization techniques as well as automatic feature extraction and interactive feature-based methodology. And we discuss the relative advantages and disadvantages of these approaches as well as the challenges, both technical and perceptual with this application. The result is a feature-rich state-of-the-art flow visualization analysis applied to an important and complex data set from real-world computational fluid dynamics simulations.",
                "AuthorNamesDeduped": "Robert S. Laramee;Christoph Garth;Helmut Doleisch;Jürgen Schneider;Helwig Hauser;Hans Hagen",
                "AuthorNames": "R.S. Laramee;C. Garth;H. Doleisch;J. Schneider;H. Hauser;H. Hagen",
                "AuthorAffiliation": "VRVis Research Center, Vienna, Austria;Department of Computer Science, University of Kaiserslautern, Germany;VRVis Research Center, Vienna, Austria;Department of Advanced Simulation Technologies (AST), AVL, Graz, Austria;VRVis Research Center, Vienna, Austria;Department of Computer Science, University of Kaiserslautern, Germany",
                "InternalReferences": "0.1109/visual.1999.809895;10.1109/visual.1992.235211;10.1109/visual.1997.663910;10.1109/visual.1996.568137;10.1109/visual.2004.107;10.1109/visual.2004.113;10.1109/visual.1998.745333;10.1109/visual.2002.1183821;10.1109/visual.2003.1250376;10.1109/visual.2002.1183822;10.1109/visual.2004.59;10.1109/visual.2004.128",
                "AuthorKeywords": "flow visualization, vector field visualization, feature-extraction, feature-based visualization, computational fluid dynamics (CFD), cooling jacket, visualization systems, engine simulation,heat transfer",
                "AminerCitationCount": 90,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 483,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2376,
                "i": [
                    2376
                ]
            }
        },
        {
            "name": "Juraj Pálenik",
            "value": 0,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 1194,
            "x": 315.06874412058755,
            "y": 142.06226268181052,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "IsoTrotter: Visually Guided Empirical Modelling of Atmospheric Convection",
                "DOI": "10.1109/tvcg.2020.3030389",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030389",
                "FirstPage": 775,
                "LastPage": 784,
                "PaperType": "J",
                "Abstract": "Empirical models, fitted to data from observations, are often used in natural sciences to describe physical behaviour and support discoveries. However, with more complex models, the regression of parameters quickly becomes insufficient, requiring a visual parameter space analysis to understand and optimize the models. In this work, we present a design study for building a model describing atmospheric convection. We present a mixed-initiative approach to visually guided modelling, integrating an interactive visual parameter space analysis with partial automatic parameter optimization. Our approach includes a new, semi-automatic technique called IsoTrotting, where we optimize the procedure by navigating along isocontours of the model. We evaluate the model with unique observational data of atmospheric convection based on flight trajectories of paragliders.",
                "AuthorNamesDeduped": "Juraj Pálenik;Thomas Spengler;Helwig Hauser",
                "AuthorNames": "Juraj Palenik;Thomas Spengler;Helwig Hauser",
                "AuthorAffiliation": "University of Bergen;University of Bergen;University of Bergen",
                "InternalReferences": "0.1109/tvcg.2010.190;10.1109/vast.2009.5333431;10.1109/vast.2011.6102450;10.1109/tvcg.2008.139;10.1109/tvcg.2018.2864901;10.1109/tvcg.2014.2346744;10.1109/tvcg.2013.125;10.1109/tvcg.2014.2346578;10.1109/tvcg.2014.2346321;10.1109/tvcg.2012.190;10.1109/visual.1993.398859;10.1109/tvcg.2009.170",
                "AuthorKeywords": "visual parameter space exploration,scientific modelling,atmospheric convection",
                "AminerCitationCount": 1,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 371,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 456,
                "i": [
                    456
                ]
            }
        },
        {
            "name": "Armin Kanitsar",
            "value": 263,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1195,
            "x": -328.4209518805502,
            "y": 108.11881596592387,
            "vy": 0,
            "vx": 0,
            "r": 1.3028209556706967,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Importance-driven volume rendering",
                "DOI": "10.1109/visual.2004.48",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.48",
                "FirstPage": 139,
                "LastPage": 145,
                "PaperType": "C",
                "Abstract": "This work introduces importance-driven volume rendering as a novel technique for automatic focus and context display of volumetric data. Our technique is a generalization of cut-away views, which - depending on the viewpoint - remove or suppress less important parts of a scene to reveal more important underlying information. We automatize and apply this idea to volumetric data. Each part of the volumetric data is assigned an object importance, which encodes visibility priority. This property determines which structures should be readily discernible and which structures are less important. In those image regions, where an object occludes more important structures it is displayed more sparsely than in those areas where no occlusion occurs. Thus the objects of interest are clearly visible. For each object several representations, i.e., levels of sparseness, are specified. The display of an individual object may incorporate different levels of sparseness. The goal is to emphasize important structures and to maximize the information content in the final image. This work also discusses several possible schemes for level of sparseness specification and different ways how object importance can be composited to determine the final appearance of a particular object.",
                "AuthorNamesDeduped": "Ivan Viola;Armin Kanitsar;M. Eduard Gröller",
                "AuthorNames": "I. Viola;A. Kanitsar;M.E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2003.1250406;10.1109/infvis.1996.559215;10.1109/visual.1996.568110;10.1109/visual.2000.885694;10.1109/visual.2001.964519;10.1109/visual.2000.885697;10.1109/visual.2000.885696",
                "AuthorKeywords": "view-dependent visualization, volume rendering, focus+context techniques, level-of-detail techniques, non-photorealistic techniques",
                "AminerCitationCount": 273,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2514,
                "i": [
                    2514
                ]
            }
        },
        {
            "name": "Rainer Wegenkittl",
            "value": 254,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1196,
            "x": 169.20490106828404,
            "y": -301.69471565553187,
            "vy": 0,
            "vx": 0,
            "r": 1.2924582613701785,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "Nonlinear virtual colon unfolding",
                "DOI": "10.1109/visual.2001.964540",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964540",
                "FirstPage": 411,
                "LastPage": 418,
                "PaperType": "C",
                "Abstract": "The majority of virtual endoscopy techniques tries to simulate a real endoscopy. A real endoscopy does not always give the optimal information due to the physical limitations it is subject to. In this paper, we deal with the unfolding of the surface of the colon as a possible visualization technique for diagnosis and polyp detection. A new two-step technique is presented which deals with the problems of double appearance of polyps and nonuniform sampling that other colon unfolding techniques suffer from. In the first step, a distance map from a central path induces nonlinear rays for unambiguous parameterization of the surface. The second step compensates for locally varying distortions of the unfolded surface. A technique similar to magnification fields in information visualization is hereby applied. The technique produces a single view of a complete, virtually dissected colon.",
                "AuthorNamesDeduped": "Anna Vilanova Bartrolí;Rainer Wegenkittl;Andreas König 0002;M. Eduard Gröller",
                "AuthorNames": "A.V. Vilanova Bartroli;R. Wegenkittl;A. Konig;E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Tiani Medgraph, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/infvis.1997.636786;10.1109/visual.1999.809914",
                "AuthorKeywords": "Volume Rendering, Virtual Endoscopy",
                "AminerCitationCount": 125,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 220,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2895,
                "i": [
                    2895
                ]
            }
        },
        {
            "name": "Dominik Fleischmann",
            "value": 195,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1197,
            "x": 79.05844087092257,
            "y": 336.8972587111074,
            "vy": 0,
            "vx": 0,
            "r": 1.224525043177893,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Advanced curved planar reformation: flattening of vascular structures",
                "DOI": "10.1109/visual.2003.1250353",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250353",
                "FirstPage": 43,
                "LastPage": 50,
                "PaperType": "C",
                "Abstract": "Traditional volume visualization techniques may provide incomplete clinical information needed for applications in medical visualization. In the area of vascular visualization important features such as the lumen of a diseased vessel segment may not be visible. Curved planar reformation (CPR) has proven to be an acceptable practical solution. Existing CPR techniques, however, still have diagnostically relevant limitations. In this paper, we introduce two advances methods for efficient vessel visualization, based on the concept of CPR. Both methods benefit from relaxation of spatial coherence in favor of improved feature perception. We present a new technique to visualize the interior of a vessel in a single image. A vessel is resampled along a spiral around its central axis. The helical spiral depicts the vessel volume. Furthermore, a method to display an entire vascular tree without mutually occluding vessels is presented. Minimal rotations at the bifurcations avoid occlusions. For each viewing direction the entire vessel structure is visible.",
                "AuthorNamesDeduped": "Armin Kanitsar;Rainer Wegenkittl;Dominik Fleischmann;M. Eduard Gröller",
                "AuthorNames": "A. Kanitsar;R. Wegenkittl;D. Fleischmann;M.E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;TIANI Medgraph, Austria;Department of Radiology, University of Stanford, USA;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2002.1183754;10.1109/visual.2001.964555;10.1109/visual.2001.964538",
                "AuthorKeywords": "computed tomography angiography, vessel analysis, curved planar reformation",
                "AminerCitationCount": 121,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 300,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2681,
                "i": [
                    2681
                ]
            }
        },
        {
            "name": "Robert van Liere",
            "value": 136,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1198,
            "x": -285.9853968871536,
            "y": -195.09575281716727,
            "vy": 0,
            "vx": 0,
            "r": 1.1565918249856073,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Collapsing Flow Topology Using Area Metrics",
                "DOI": "10.1109/visual.1999.809907",
                "Link": "http://doi.ieeecomputersociety.org/10.1109/VISUAL.1999.809907",
                "FirstPage": 349,
                "LastPage": 354,
                "PaperType": "C",
                "Abstract": "Visualization of topological information of a vector field can provide useful information on the structure of the field. However, in turbulent flows standard critical point visualization will result in a cluttered image which is difficult to interpret. This paper presents a technique for collapsing topologies. The governing idea is to classify the importance of the critical points in the topology. By only displaying the more important critical points, a simplified depiction of the topology can be provided. Flow consistency is maintained when collapsing the topology, resulting in a visualization which is consistent with the original topology. We apply the collapsing topology technique to a turbulent flow field.",
                "AuthorNamesDeduped": "Wim C. de Leeuw;Robert van Liere",
                "AuthorNames": "W. De Leeuw;R. Van Liere",
                "AuthorAffiliation": "Center for Math. & Comput. Sci., CWI, Amsterdam, Netherlands;Center for Math. & Comput. Sci., CWI, Amsterdam, Netherlands",
                "InternalReferences": "0.1109/visual.1991.175773",
                "AuthorKeywords": "multi-level visualization techniques, flow visualization, flow topology",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 48,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3059,
                "i": [
                    3059
                ]
            }
        },
        {
            "name": "Ashok Jallepalli",
            "value": 3,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1199,
            "x": 342.80494345637527,
            "y": -49.343396132323115,
            "vy": 0,
            "vx": 0,
            "r": 1.003454231433506,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "The Effect of Data Transformations on Scalar Field Topological Analysis of High-Order FEM Solutions",
                "DOI": "10.1109/tvcg.2019.2934338",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934338",
                "FirstPage": 162,
                "LastPage": 172,
                "PaperType": "J",
                "Abstract": "High-order finite element methods (HO-FEM) are gaining popularity in the simulation community due to their success in solving complex flow dynamics. There is an increasing need to analyze the data produced as output by these simulations. Simultaneously, topological analysis tools are emerging as powerful methods for investigating simulation data. However, most of the current approaches to topological analysis have had limited application to HO-FEM simulation data for two reasons. First, the current topological tools are designed for linear data (polynomial degree one), but the polynomial degree of the data output by these simulations is typically higher (routinely up to polynomial degree six). Second, the simulation data and derived quantities of the simulation data have discontinuities at element boundaries, and these discontinuities do not match the input requirements for the topological tools. One solution to both issues is to transform the high-order data to achieve low-order, continuous inputs for topological analysis. Nevertheless, there has been little work evaluating the possible transformation choices and their downstream effect on the topological analysis. We perform an empirical study to evaluate two commonly used data transformation methodologies along with the recently introduced L-SIAC filter for processing high-order simulation data. Our results show diverse behaviors are possible. We offer some guidance about how best to consider a pipeline of topological analysis of HO-FEM simulations with the currently available implementations of topological analysis.",
                "AuthorNamesDeduped": "Ashok Jallepalli;Joshua A. Levine;Robert M. Kirby",
                "AuthorNames": "Ashok Jallepalli;Joshua A. Levine;Robert M. Kirby",
                "AuthorAffiliation": "SCI Institute, University of Utah;Department of Computer Science, University of Arizona;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2015.2467432;10.1109/tvcg.2007.70603;10.1109/tvcg.2017.2744058;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346332;10.1109/tvcg.2017.2743938;10.1109/tvcg.2009.163;10.1109/tvcg.2012.228;10.1109/visual.2004.113",
                "AuthorKeywords": "High-Order Finite Element Methods,Filtering Techniques,Scalar Field Visualization,Topological Analysis",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 324,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 595,
                "i": [
                    595
                ]
            }
        },
        {
            "name": "Robert Haimes",
            "value": 94,
            "numPapers": 11,
            "cluster": "11",
            "visible": 1,
            "index": 1200,
            "x": -219.53416490270612,
            "y": 268.05736408550956,
            "vy": 0,
            "vx": 0,
            "r": 1.1082325849165227,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "ElVis: A System for the Accurate and Interactive Visualization of High-Order finite Element Solutions",
                "DOI": "10.1109/tvcg.2012.218",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.218",
                "FirstPage": 2325,
                "LastPage": 2334,
                "PaperType": "J",
                "Abstract": "This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations-high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular-will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries) which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of uncertainty for engineers using ElVis.",
                "AuthorNamesDeduped": "Blake Nelson;Eric Liu;Robert M. Kirby;Robert Haimes",
                "AuthorNames": "Blake Nelson;Eric Liu;Robert M. Kirby;Robert Haimes",
                "AuthorAffiliation": "School of Computing and the Scientific Computing and Imaging Institute, University of Utah, USA;Department of Aeronautics and Astronautics, MIT, USA;Department of Aeronautics and Astronautics, MIT, USA;School of Computing and the Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.2005.1532776;10.1109/visual.1991.175837;10.1109/visual.2004.91;10.1109/tvcg.2006.154;10.1109/tvcg.2011.206",
                "AuthorKeywords": "High-order finite elements, spectral/hp elements, discontinuous Galerkin, fluid flow simulation, cut surface extraction, contours, isosurfaces",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 667,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1456,
                "i": [
                    1456
                ]
            }
        },
        {
            "name": "Udo Schlegel",
            "value": 54,
            "numPapers": 30,
            "cluster": "3",
            "visible": 1,
            "index": 1201,
            "x": -19.200478745873543,
            "y": -346.09441141967204,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning",
                "DOI": "10.1109/tvcg.2019.2934629",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934629",
                "FirstPage": 1064,
                "LastPage": 1074,
                "PaperType": "J",
                "Abstract": "We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models. Our framework combines an iterative XAI pipeline with eight global monitoring and steering mechanisms, including quality monitoring, provenance tracking, model comparison, and trust building. To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed a user-study with nine participants across different expertise levels to examine their perception of our workflow and to collect suggestions to fill the gap between our system and framework. The evaluation confirms that our tightly integrated system leads to an informed machine learning process while disclosing opportunities for further extensions.",
                "AuthorNamesDeduped": "Thilo Spinner;Udo Schlegel;Hanna Schäfer;Mennatallah El-Assady",
                "AuthorNames": "Thilo Spinner;Udo Schlegel;Hanna Schäfer;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2019.2934654;10.1109/tvcg.2017.2745080;10.1109/tvcg.2018.2864769;10.1109/tvcg.2017.2744718;10.1109/vast.2017.8585720;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864812;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2014.2346481;10.1109/tvcg.2018.2864838;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475;10.1109/vast.2017.8585721",
                "AuthorKeywords": "Explainable AI,Interactive Machine Learning,Deep Learning,Visual Analytics,Interpretability,Explainability",
                "AminerCitationCount": 143,
                "CitationCountCrossRef": 90,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 6281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 599,
                "i": [
                    599
                ]
            }
        },
        {
            "name": "Hanna Schäfer",
            "value": 54,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1202,
            "x": 248.04440970336324,
            "y": 242.33029281315635,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning",
                "DOI": "10.1109/tvcg.2019.2934629",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934629",
                "FirstPage": 1064,
                "LastPage": 1074,
                "PaperType": "J",
                "Abstract": "We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models. Our framework combines an iterative XAI pipeline with eight global monitoring and steering mechanisms, including quality monitoring, provenance tracking, model comparison, and trust building. To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed a user-study with nine participants across different expertise levels to examine their perception of our workflow and to collect suggestions to fill the gap between our system and framework. The evaluation confirms that our tightly integrated system leads to an informed machine learning process while disclosing opportunities for further extensions.",
                "AuthorNamesDeduped": "Thilo Spinner;Udo Schlegel;Hanna Schäfer;Mennatallah El-Assady",
                "AuthorNames": "Thilo Spinner;Udo Schlegel;Hanna Schäfer;Mennatallah El-Assady",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz;University of Konstanz;University of Konstanz",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2019.2934654;10.1109/tvcg.2017.2745080;10.1109/tvcg.2018.2864769;10.1109/tvcg.2017.2744718;10.1109/vast.2017.8585720;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864812;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2014.2346481;10.1109/tvcg.2018.2864838;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864475;10.1109/vast.2017.8585721",
                "AuthorKeywords": "Explainable AI,Interactive Machine Learning,Deep Learning,Visual Analytics,Interpretability,Explainability",
                "AminerCitationCount": 143,
                "CitationCountCrossRef": 90,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 6281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 599,
                "i": [
                    599
                ]
            }
        },
        {
            "name": "Bowen Yu 0004",
            "value": 82,
            "numPapers": 20,
            "cluster": "3",
            "visible": 1,
            "index": 1203,
            "x": -346.7360720418143,
            "y": -11.139853904510014,
            "vy": 0,
            "vx": 0,
            "r": 1.0944156591824985,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "FlowSense: A Natural Language Interface for Visual Data Exploration within a Dataflow System",
                "DOI": "10.1109/tvcg.2019.2934668",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934668",
                "FirstPage": 1,
                "LastPage": 11,
                "PaperType": "J",
                "Abstract": "Dataflow visualization systems enable flexible visual data exploration by allowing the user to construct a dataflow diagram that composes query and visualization modules to specify system functionality. However learning dataflow diagram usage presents overhead that often discourages the user. In this work we design FlowSense, a natural language interface for dataflow visualization systems that utilizes state-of-the-art natural language processing techniques to assist dataflow diagram construction. FlowSense employs a semantic parser with special utterance tagging and special utterance placeholders to generalize to different datasets and dataflow diagrams. It explicitly presents recognized dataset and diagram special utterances to the user for dataflow context awareness. With FlowSense the user can expand and adjust dataflow diagrams more conveniently via plain English. We apply FlowSense to the VisFlow subset-flow visualization system to enhance its usability. We evaluate FlowSense by one case study with domain experts on a real-world data analysis problem and a formal user study.",
                "AuthorNamesDeduped": "Bowen Yu 0004;Cláudio T. Silva",
                "AuthorNames": "Bowen Yu;Cláudio T. Silva",
                "AuthorAffiliation": "New York University;New York University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/visual.2005.1532788;10.1109/tvcg.2010.164;10.1109/tvcg.2017.2744684;10.1109/tvcg.2007.70594;10.1109/tvcg.2017.2745219;10.1109/tvcg.2016.2598497",
                "AuthorKeywords": "Natural language interface,dataflow visualization system,visual data exploration",
                "AminerCitationCount": 70,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 3244,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 600,
                "i": [
                    600
                ]
            }
        },
        {
            "name": "Xiaobo Luo",
            "value": 26,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 1204,
            "x": 263.30657551182964,
            "y": -226.09654418463182,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analytics for Electromagnetic Situation Awareness in Radio Monitoring and Management",
                "DOI": "10.1109/tvcg.2019.2934655",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934655",
                "FirstPage": 590,
                "LastPage": 600,
                "PaperType": "J",
                "Abstract": "Traditional radio monitoring and management largely depend on radio spectrum data analysis, which requires considerable domain experience and heavy cognition effort and frequently results in incorrect signal judgment and incomprehensive situation awareness. Faced with increasingly complicated electromagnetic environments, radio supervisors urgently need additional data sources and advanced analytical technologies to enhance their situation awareness ability. This paper introduces a visual analytics approach for electromagnetic situation awareness. Guided by a detailed scenario and requirement analysis, we first propose a signal clustering method to process radio signal data and a situation assessment model to obtain qualitative and quantitative descriptions of the electromagnetic situations. We then design a two-module interface with a set of visualization views and interactions to help radio supervisors perceive and understand the electromagnetic situations by a joint analysis of radio signal data and radio spectrum data. Evaluations on real-world data sets and an interview with actual users demonstrate the effectiveness of our prototype system. Finally, we discuss the limitations of the proposed approach and provide future work directions.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Automation, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;Southwest Electric & Telecom Engineering Institute, Shanghai, China;Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2018.2865028;10.1109/tvcg.2016.2598619;10.1109/tvcg.2008.166;10.1109/tvcg.2015.2467196;10.1109/vast.2014.7042479;10.1109/tvcg.2016.2598460;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2745180;10.1109/tvcg.2018.2865077;10.1109/tvcg.2018.2865029;10.1109/tvcg.2010.193;10.1109/tvcg.2014.2346911;10.1109/tvcg.2011.179;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532134;10.1109/tvcg.2014.2346926;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.228;10.1109/tvcg.2017.2744098;10.1109/tvcg.2014.2346913;10.1109/tvcg.2016.2598664;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Radio monitoring and management,radio signal data,radio spectrum data,situation awareness,visual analytics",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1548,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 603,
                "i": [
                    603
                ]
            }
        },
        {
            "name": "Xiaoru Lin",
            "value": 26,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 1205,
            "x": -41.44524664438676,
            "y": 344.7205992257874,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analytics for Electromagnetic Situation Awareness in Radio Monitoring and Management",
                "DOI": "10.1109/tvcg.2019.2934655",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934655",
                "FirstPage": 590,
                "LastPage": 600,
                "PaperType": "J",
                "Abstract": "Traditional radio monitoring and management largely depend on radio spectrum data analysis, which requires considerable domain experience and heavy cognition effort and frequently results in incorrect signal judgment and incomprehensive situation awareness. Faced with increasingly complicated electromagnetic environments, radio supervisors urgently need additional data sources and advanced analytical technologies to enhance their situation awareness ability. This paper introduces a visual analytics approach for electromagnetic situation awareness. Guided by a detailed scenario and requirement analysis, we first propose a signal clustering method to process radio signal data and a situation assessment model to obtain qualitative and quantitative descriptions of the electromagnetic situations. We then design a two-module interface with a set of visualization views and interactions to help radio supervisors perceive and understand the electromagnetic situations by a joint analysis of radio signal data and radio spectrum data. Evaluations on real-world data sets and an interview with actual users demonstrate the effectiveness of our prototype system. Finally, we discuss the limitations of the proposed approach and provide future work directions.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Automation, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;Southwest Electric & Telecom Engineering Institute, Shanghai, China;Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2018.2865028;10.1109/tvcg.2016.2598619;10.1109/tvcg.2008.166;10.1109/tvcg.2015.2467196;10.1109/vast.2014.7042479;10.1109/tvcg.2016.2598460;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2745180;10.1109/tvcg.2018.2865077;10.1109/tvcg.2018.2865029;10.1109/tvcg.2010.193;10.1109/tvcg.2014.2346911;10.1109/tvcg.2011.179;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532134;10.1109/tvcg.2014.2346926;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.228;10.1109/tvcg.2017.2744098;10.1109/tvcg.2014.2346913;10.1109/tvcg.2016.2598664;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Radio monitoring and management,radio signal data,radio spectrum data,situation awareness,visual analytics",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1548,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 603,
                "i": [
                    603
                ]
            }
        },
        {
            "name": "Hairong Wang",
            "value": 26,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 1206,
            "x": -202.37887160796302,
            "y": -282.29911853685917,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analytics for Electromagnetic Situation Awareness in Radio Monitoring and Management",
                "DOI": "10.1109/tvcg.2019.2934655",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934655",
                "FirstPage": 590,
                "LastPage": 600,
                "PaperType": "J",
                "Abstract": "Traditional radio monitoring and management largely depend on radio spectrum data analysis, which requires considerable domain experience and heavy cognition effort and frequently results in incorrect signal judgment and incomprehensive situation awareness. Faced with increasingly complicated electromagnetic environments, radio supervisors urgently need additional data sources and advanced analytical technologies to enhance their situation awareness ability. This paper introduces a visual analytics approach for electromagnetic situation awareness. Guided by a detailed scenario and requirement analysis, we first propose a signal clustering method to process radio signal data and a situation assessment model to obtain qualitative and quantitative descriptions of the electromagnetic situations. We then design a two-module interface with a set of visualization views and interactions to help radio supervisors perceive and understand the electromagnetic situations by a joint analysis of radio signal data and radio spectrum data. Evaluations on real-world data sets and an interview with actual users demonstrate the effectiveness of our prototype system. Finally, we discuss the limitations of the proposed approach and provide future work directions.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Automation, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;Southwest Electric & Telecom Engineering Institute, Shanghai, China;Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2018.2865028;10.1109/tvcg.2016.2598619;10.1109/tvcg.2008.166;10.1109/tvcg.2015.2467196;10.1109/vast.2014.7042479;10.1109/tvcg.2016.2598460;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2745180;10.1109/tvcg.2018.2865077;10.1109/tvcg.2018.2865029;10.1109/tvcg.2010.193;10.1109/tvcg.2014.2346911;10.1109/tvcg.2011.179;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532134;10.1109/tvcg.2014.2346926;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.228;10.1109/tvcg.2017.2744098;10.1109/tvcg.2014.2346913;10.1109/tvcg.2016.2598664;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Radio monitoring and management,radio signal data,radio spectrum data,situation awareness,visual analytics",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1548,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 603,
                "i": [
                    603
                ]
            }
        },
        {
            "name": "Xiaoyan Kui",
            "value": 26,
            "numPapers": 23,
            "cluster": "3",
            "visible": 1,
            "index": 1207,
            "x": 340.05903651533066,
            "y": 71.48322659383108,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analytics for Electromagnetic Situation Awareness in Radio Monitoring and Management",
                "DOI": "10.1109/tvcg.2019.2934655",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934655",
                "FirstPage": 590,
                "LastPage": 600,
                "PaperType": "J",
                "Abstract": "Traditional radio monitoring and management largely depend on radio spectrum data analysis, which requires considerable domain experience and heavy cognition effort and frequently results in incorrect signal judgment and incomprehensive situation awareness. Faced with increasingly complicated electromagnetic environments, radio supervisors urgently need additional data sources and advanced analytical technologies to enhance their situation awareness ability. This paper introduces a visual analytics approach for electromagnetic situation awareness. Guided by a detailed scenario and requirement analysis, we first propose a signal clustering method to process radio signal data and a situation assessment model to obtain qualitative and quantitative descriptions of the electromagnetic situations. We then design a two-module interface with a set of visualization views and interactions to help radio supervisors perceive and understand the electromagnetic situations by a joint analysis of radio signal data and radio spectrum data. Evaluations on real-world data sets and an interview with actual users demonstrate the effectiveness of our prototype system. Finally, we discuss the limitations of the proposed approach and provide future work directions.",
                "AuthorNamesDeduped": "Ying Zhao 0001;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen 0007;Wei Chen 0001",
                "AuthorNames": "Ying Zhao;Xiaobo Luo;Xiaoru Lin;Hairong Wang;Xiaoyan Kui;Fangfang Zhou;Jinsong Wang;Yi Chen;Wei Chen",
                "AuthorAffiliation": "School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Automation, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;School of Computer Science and Engineering, Central South University, Changsha, China;Southwest Electric & Telecom Engineering Institute, Shanghai, China;Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, China;State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2018.2865028;10.1109/tvcg.2016.2598619;10.1109/tvcg.2008.166;10.1109/tvcg.2015.2467196;10.1109/vast.2014.7042479;10.1109/tvcg.2016.2598460;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2745180;10.1109/tvcg.2018.2865077;10.1109/tvcg.2018.2865029;10.1109/tvcg.2010.193;10.1109/tvcg.2014.2346911;10.1109/tvcg.2011.179;10.1109/tvcg.2013.196;10.1109/infvis.2005.1532134;10.1109/tvcg.2014.2346926;10.1109/tvcg.2017.2744459;10.1109/tvcg.2013.228;10.1109/tvcg.2017.2744098;10.1109/tvcg.2014.2346913;10.1109/tvcg.2016.2598664;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Radio monitoring and management,radio signal data,radio spectrum data,situation awareness,visual analytics",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 1548,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 603,
                "i": [
                    603
                ]
            }
        },
        {
            "name": "Xi Ye",
            "value": 72,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 1208,
            "x": -299.15897446592885,
            "y": 177.07034759240113,
            "vy": 0,
            "vx": 0,
            "r": 1.0829015544041452,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Interactive Correction of Mislabeled Training Data",
                "DOI": "10.1109/vast47406.2019.8986943",
                "Link": "http://dx.doi.org/10.1109/VAST47406.2019.8986943",
                "FirstPage": 57,
                "LastPage": 68,
                "PaperType": "C",
                "Abstract": "In this paper, we develop a visual analysis method for interactively improving the quality of labeled data, which is essential to the success of supervised and semi-supervised learning. The quality improvement is achieved through the use of user-selected trusted items. We employ a bi-level optimization model to accurately match the labels of the trusted items and to minimize the training loss. Based on this model, a scalable data correction algorithm is developed to handle tens of thousands of labeled data efficiently. The selection of the trusted items is facilitated by an incremental tSNE with improved computational efficiency and layout stability to ensure a smooth transition between different levels. We evaluated our method on real-world datasets through quantitative evaluation and case studies, and the results were generally favorable.",
                "AuthorNamesDeduped": "Shouxing Xiang;Xi Ye;Jiazhi Xia;Jing Wu 0004;Yang Chen;Shixia Liu",
                "AuthorNames": "Shouxing Xiang;Xi Ye;Jiazhi Xia;Jing Wu;Yang Chen;Shixia Liu",
                "AuthorAffiliation": "School of Software, BNRist, Tsinghua University;School of Software, BNRist, Tsinghua University;School of Computer Science and Engineering, Central South University;School of Computer Science and Informatics, Cardiff University;School of Software, BNRist, Tsinghua University;School of Software, BNRist, Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2016.2598592;10.1109/tvcg.2017.2744818;10.1109/tvcg.2017.2744419;10.1109/tvcg.2014.2346594;10.1109/vast.2012.6400492;10.1109/vast.2018.8802509;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2014.2346574;10.1109/tvcg.2017.2744685;10.1109/tvcg.2018.2865026",
                "AuthorKeywords": "Labeled data debugging,trusted item,tSNE",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1545,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 604,
                "i": [
                    604
                ]
            }
        },
        {
            "name": "Yale Song",
            "value": 78,
            "numPapers": 11,
            "cluster": "1",
            "visible": 1,
            "index": 1209,
            "x": 101.02298709766902,
            "y": -332.7827460639512,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "#FluxFlow: Visual Analysis of Anomalous Information Spreading on Social Media",
                "DOI": "10.1109/tvcg.2014.2346922",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346922",
                "FirstPage": 1773,
                "LastPage": 1782,
                "PaperType": "J",
                "Abstract": "We present FluxFlow, an interactive visual analysis system for revealing and analyzing anomalous information spreading in social media. Everyday, millions of messages are created, commented, and shared by people on social media websites, such as Twitter and Facebook. This provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the huge crowd's messages, however, is challenging, due to the heterogeneous and dynamic crowd behaviors. The challenge is rooted in data analysts' capability of discerning the anomalous information behaviors, such as the spreading of rumors or misinformation, from the rest that are more conventional patterns, such as popular topics and newsworthy events, in a timely fashion. FluxFlow incorporates advanced machine learning algorithms to detect anomalies, and offers a set of novel visualization designs for presenting the detected threads for deeper analysis. We evaluated FluxFlow with real datasets containing the Twitter feeds captured during significant events such as Hurricane Sandy. Through quantitative measurements of the algorithmic performance and qualitative interviews with domain experts, the results show that the back-end anomaly detection model is effective in identifying anomalous retweeting threads, and its front-end interactive visualizations are intuitive and useful for analysts to discover insights in data and comprehend the underlying analytical model.",
                "AuthorNamesDeduped": "Jian Zhao 0010;Nan Cao 0001;Zhen Wen;Yale Song;Yu-Ru Lin;Christopher Collins 0001",
                "AuthorNames": "Jian Zhao;Nan Cao;Zhen Wen;Yale Song;Yu-Ru Lin;Christopher Collins",
                "AuthorAffiliation": "University of Toronto;MIT;MIT;IBM J. Watson Research Center;University of Pittsburgh;UOIT",
                "InternalReferences": "0.1109/vast.2011.6102456;10.1109/tvcg.2012.291;10.1109/vast.2012.6400557;10.1109/tvcg.2011.179;10.1109/tvcg.2011.239;10.1109/tvcg.2012.226;10.1109/tvcg.2013.227;10.1109/vast.2012.6400485;10.1109/vast.2010.5652922;10.1109/tvcg.2010.129;10.1109/tvcg.2013.221;10.1109/tvcg.2013.162",
                "AuthorKeywords": "Retweeting threads, anomaly detection, social media, visual analytics, machine learning, information visualization",
                "AminerCitationCount": 194,
                "CitationCountCrossRef": 144,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 4240,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1243,
                "i": [
                    1243
                ]
            }
        },
        {
            "name": "Yun Jang",
            "value": 244,
            "numPapers": 59,
            "cluster": "1",
            "visible": 1,
            "index": 1210,
            "x": 150.36242881047033,
            "y": 313.7533107430362,
            "vy": 0,
            "vx": 0,
            "r": 1.280944156591825,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "A correlative analysis process in a visual analytics environment",
                "DOI": "10.1109/vast.2012.6400491",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400491",
                "FirstPage": 33,
                "LastPage": 42,
                "PaperType": "C",
                "Abstract": "Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes the Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.",
                "AuthorNamesDeduped": "Abish Malik;Ross Maciejewski;Niklas Elmqvist;Yun Jang;David S. Ebert;Whitney Huang",
                "AuthorNames": "Abish Malik;Ross Maciejewski;Niklas Elmqvist;Yun Jang;David S. Ebert;Whitney Huang",
                "AuthorAffiliation": "Purdue University, USA;Arizona State University, USA;Sejong University, South Korea;Purdue University, USA;Purdue University, USA;Purdue University, USA",
                "InternalReferences": "0.1109/infvis.2005.1532148;10.1109/tvcg.2011.179;10.1109/tvcg.2010.193;10.1109/infvis.1999.801851;10.1109/infvis.1999.801851;10.1109/vast.2007.4389006;10.1109/tvcg.2007.70539;10.1109/tvcg.2010.162;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Visual analytics, correlative analysis",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 1101,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1496,
                "i": [
                    1496
                ]
            }
        },
        {
            "name": "Mosab Khayat",
            "value": 22,
            "numPapers": 48,
            "cluster": "5",
            "visible": 1,
            "index": 1211,
            "x": -322.94320152924064,
            "y": -129.83716180679642,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "VASSL: A Visual Analytics Toolkit for Social Spambot Labeling",
                "DOI": "10.1109/tvcg.2019.2934266",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934266",
                "FirstPage": 874,
                "LastPage": 883,
                "PaperType": "J",
                "Abstract": "Social media platforms are filled with social spambots. Detecting these malicious accounts is essential, yet challenging, as they continually evolve to evade detection techniques. In this article, we present VASSL, a visual analytics system that assists in the process of detecting and labeling spambots. Our tool enhances the performance and scalability of manual labeling by providing multiple connected views and utilizing dimensionality reduction, sentiment analysis and topic modeling, enabling insights for the identification of spambots. The system allows users to select and analyze groups of accounts in an interactive manner, which enables the detection of spambots that may not be identified when examined individually. We present a user study to objectively evaluate the performance of VASSL users, as well as capturing subjective opinions about the usefulness and the ease of use of the tool.",
                "AuthorNamesDeduped": "Mosab Khayat;Morteza Karimzadeh;Jieqiong Zhao;David S. Ebert",
                "AuthorNames": "Mosab Khayat;Morteza Karimzadeh;Jieqiong Zhao;David S. Ebert",
                "AuthorAffiliation": "Purdue University;University of Colorado Boulder;Purdue University;Purdue University",
                "InternalReferences": "0.1109/tvcg.2015.2467196;10.1109/vast.2012.6400557;10.1109/vast.2016.7883510;10.1109/tvcg.2017.2745083;10.1109/tvcg.2017.2745080;10.1109/tvcg.2013.153;10.1109/tvcg.2014.2346920;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Spambot,Labeling,Detection,Visual Analytics,Social Media Annotation",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 621,
                "i": [
                    621
                ]
            }
        },
        {
            "name": "Haojing Jiang",
            "value": 49,
            "numPapers": 25,
            "cluster": "3",
            "visible": 1,
            "index": 1212,
            "x": 325.9664557133114,
            "y": -122.45762430204924,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Evaluating Perceptual Bias During Geometric Scaling of Scatterplots",
                "DOI": "10.1109/tvcg.2019.2934208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934208",
                "FirstPage": 321,
                "LastPage": 331,
                "PaperType": "J",
                "Abstract": "Scatterplots are frequently scaled to fit display areas in multi-view and multi-device data analysis environments. A common method used for scaling is to enlarge or shrink the entire scatterplot together with the inside points synchronously and proportionally. This process is called geometric scaling. However, geometric scaling of scatterplots may cause a perceptual bias, that is, the perceived and physical values of visual features may be dissociated with respect to geometric scaling. For example, if a scatterplot is projected from a laptop to a large projector screen, then observers may feel that the scatterplot shown on the projector has fewer points than that viewed on the laptop. This paper presents an evaluation study on the perceptual bias of visual features in scatterplots caused by geometric scaling. The study focuses on three fundamental visual features (i.e., numerosity, correlation, and cluster separation) and three hypotheses that are formulated on the basis of our experience. We carefully design three controlled experiments by using well-prepared synthetic data and recruit participants to complete the experiments on the basis of their subjective experience. With a detailed analysis of the experimental results, we obtain a set of instructive findings. First, geometric scaling causes a bias that has a linear relationship with the scale ratio. Second, no significant difference exists between the biases measured from normally and uniformly distributed scatterplots. Third, changing the point radius can correct the bias to a certain extent. These findings can be used to inspire the design decisions of scatterplots in various scenarios.",
                "AuthorNamesDeduped": "Yating Wei;Honghui Mei;Ying Zhao 0001;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen 0001",
                "AuthorNames": "Yating Wei;Honghui Mei;Ying Zhao;Shuyue Zhou;Bingru Lin;Haojing Jiang;Wei Chen",
                "AuthorAffiliation": "The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China;School of Computer Science and Engineering, Central South University, Changsha, China;The State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China",
                "InternalReferences": "0.1109/tvcg.2011.229;10.1109/tvcg.2018.2865142;10.1109/tvcg.2015.2467732;10.1109/tvcg.2013.124;10.1109/vast.2010.5652460;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.183;10.1109/tvcg.2014.2346979;10.1109/tvcg.2006.163;10.1109/vast.2012.6400487;10.1109/tvcg.2015.2467671;10.1109/tvcg.2018.2864884;10.1109/infvis.2004.15;10.1109/tvcg.2017.2744184;10.1109/tvcg.2013.120;10.1109/tvcg.2013.153;10.1109/tvcg.2017.2744359;10.1109/vast.2009.5332628;10.1109/tvcg.2007.70596;10.1109/tvcg.2017.2744138;10.1109/tvcg.2018.2864912;10.1109/tvcg.2018.2865266;10.1109/tvcg.2017.2744098;10.1109/tvcg.2006.184;10.1109/tvcg.2018.2865020;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Evaluation,scatterplot,geometric scaling,bias,perceptual consistency",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 86,
                "DownloadsXplore": 1007,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 623,
                "i": [
                    623
                ]
            }
        },
        {
            "name": "Jörn Schneidewind",
            "value": 239,
            "numPapers": 14,
            "cluster": "2",
            "visible": 1,
            "index": 1213,
            "x": -157.70357526970488,
            "y": 310.61162622663136,
            "vy": 0,
            "vx": 0,
            "r": 1.2751871042026481,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Exploring and Visualizing the History of InfoVis",
                "DOI": "10.1109/infvis.2004.22",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.22",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": null,
                "AuthorNamesDeduped": "Daniel A. Keim;Helmut Barro;Christian Panse;Jörn Schneidewind;Mike Sips",
                "AuthorNames": "D.A. Keim;H. Barro;C. Panse;J. Schneidewind;M. Sips",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 2,
                "DownloadsXplore": 152,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2483,
                "i": [
                    2483
                ]
            }
        },
        {
            "name": "Subhashis Hazarika",
            "value": 38,
            "numPapers": 34,
            "cluster": "6",
            "visible": 1,
            "index": 1214,
            "x": -93.56795955350321,
            "y": -335.7008146326041,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "NNVA: Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation",
                "DOI": "10.1109/tvcg.2019.2934591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934591",
                "FirstPage": 34,
                "LastPage": 44,
                "PaperType": "J",
                "Abstract": "Complex computational models are often designed to simulate real-world physical phenomena in many scientific disciplines. However, these simulation models tend to be computationally very expensive and involve a large number of simulation input parameters, which need to be analyzed and properly calibrated before the models can be applied for real scientific studies. We propose a visual analysis system to facilitate interactive exploratory analysis of high-dimensional input parameter space for a complex yeast cell polarization simulation. The proposed system can assist the computational biologists, who designed the simulation model, to visually calibrate the input parameters by modifying the parameter values and immediately visualizing the predicted simulation outcome without having the need to run the original expensive simulation for every instance. Our proposed visual analysis system is driven by a trained neural network-based surrogate model as the backend analysis framework. In this work, we demonstrate the advantage of using neural networks as surrogate models for visual analysis by incorporating some of the recent advances in the field of uncertainty quantification, interpretability and explainability of neural network-based models. We utilize the trained network to perform interactive parameter sensitivity analysis of the original simulation as well as recommend optimal parameter configurations using the activation maximization framework of neural networks. We also facilitate detail analysis of the trained network to extract useful insights about the simulation model, learned by the network, during the training process. We performed two case studies, and discovered multiple new parameter configurations, which can trigger high cell polarization results in the original simulation model. We evaluated our results by comparing with the original simulation model outcomes as well as the findings from previous parameter analysis performed by our experts.",
                "AuthorNamesDeduped": "Subhashis Hazarika;Haoyu Li;Ko-Chih Wang;Han-Wei Shen;Ching-Shan Chou",
                "AuthorNames": "Subhashis Hazarika;Haoyu Li;Ko-Chih Wang;Han-Wei Shen;Ching-Shan Chou",
                "AuthorAffiliation": "Department of Computer Science, Ohio State University;Department of Computer Science, Ohio State University;Department of Computer Science, Ohio State University;Department of Computer Science, Ohio State University;Department of Mathematics, Ohio State University",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2016.2598869;10.1109/tvcg.2013.147;10.1109/tvcg.2018.2865029;10.1109/tvcg.2017.2744718;10.1109/tvcg.2018.2864500;10.1109/tvcg.2016.2598831;10.1109/tvcg.2018.2864843;10.1109/tvcg.2018.2864887;10.1109/vast.2017.8585721;10.1109/tvcg.2018.2865051;10.1109/tvcg.2014.2346321;10.1109/tvcg.2018.2865044;10.1109/tvcg.2017.2744158;10.1109/tvcg.2018.2864504;10.1109/tvcg.2016.2598830;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2865026;10.1109/tvcg.2018.2864499;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "Surrogate modeling,Neural networks,Computational biology,Visual analysis,Parameter analysis",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1001,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 624,
                "i": [
                    624
                ]
            }
        },
        {
            "name": "Carolina Nobre",
            "value": 65,
            "numPapers": 30,
            "cluster": "4",
            "visible": 1,
            "index": 1215,
            "x": 295.878479067696,
            "y": 184.4069565515004,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "Vis",
                "Year": 2023,
                "Title": "Vistrust: a Multidimensional Framework and Empirical Study of Trust in Data Visualizations",
                "DOI": "10.1109/tvcg.2023.3326579",
                "Link": "http://dx.doi.org/10.1109/TVCG.2023.3326579",
                "FirstPage": 348,
                "LastPage": 358,
                "PaperType": "J",
                "Abstract": "Trust is an essential aspect of data visualization, as it plays a crucial role in the interpretation and decision-making processes of users. While research in social sciences outlines the multi-dimensional factors that can play a role in trust formation, most data visualization trust researchers employ a single-item scale to measure trust. We address this gap by proposing a comprehensive, multidimensional conceptualization and operationalization of trust in visualization. We do this by applying general theories of trust from social sciences, as well as synthesizing and extending earlier work and factors identified by studies in the visualization field. We apply a two-dimensional approach to trust in visualization, to distinguish between cognitive and affective elements, as well as between visualization and data-specific trust antecedents. We use our framework to design and run a large crowd-sourced study to quantify the role of visual complexity in establishing trust in science visualizations. Our study provides empirical evidence for several aspects of our proposed theoretical framework, most notably the impact of cognition, affective responses, and individual differences when establishing trust in visualizations.",
                "AuthorNamesDeduped": "Hamza Elhamdadi;Adam Stefkovics;Johanna Beyer;Eric Mörth;Hanspeter Pfister;Cindy Xiong Bearfield;Carolina Nobre",
                "AuthorNames": "Hamza Elhamdadi;Adam Stefkovics;Johanna Beyer;Eric Moerth;Hanspeter Pfister;Cindy Xiong Bearfield;Carolina Nobre",
                "AuthorAffiliation": "UMass Amherst, USA;HUN-REN Centre for Social Sciences, USA;Harvard University, USA;Harvard Medical School, USA;Harvard University, USA;UMass Amherst, USA;University of Toronto, Canada",
                "InternalReferences": "10.1109/tvcg.2016.2598544;10.1109/tvcg.2020.3028984;10.1109/tvcg.2017.2745240;10.1109/tvcg.2016.2598920;10.1109/tvcg.2022.3209457;10.1109/tvcg.2015.2467591",
                "AuthorKeywords": "Trust,visualization,science,framework",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 62,
                "DownloadsXplore": 307,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 10,
                "i": [
                    10
                ]
            }
        },
        {
            "name": "Pak Chung Wong",
            "value": 365,
            "numPapers": 58,
            "cluster": "4",
            "visible": 1,
            "index": 1216,
            "x": -342.87764840400985,
            "y": 63.91336499462564,
            "vy": 0,
            "vx": 0,
            "r": 1.420264824409902,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "IN-SPIRE InfoVis 2004 Contest Entry",
                "DOI": "10.1109/infvis.2004.37",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.37",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": "This is the first part (summary) of a three-part contest entry submitted to IEEE InfoVis 2004. The contest topic is visualizing InfoVis symposium papers from 1995 to 2002 and their references. The paper introduces the visualization tool IN-SPIRE, the visualization process and results, and presents lessons learned.",
                "AuthorNamesDeduped": "Pak Chung Wong;Elizabeth G. Hetzler;Christian Posse;Mark A. Whiting;Susan Havre;Nick Cramer;Anuj R. Shah;Mudita Singhal;Alan Turner;James J. Thomas",
                "AuthorNames": "Pak Chung Wong;B. Hetzler;C. Posse;M. Whiting;S. Havre;N. Cramer;Anuj Shah;M. Singhal;A. Turner;J. Thomas",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;;Pacific Northwest National Laboratory, USA",
                "InternalReferences": "10.1109/infvis.1995.528686",
                "AuthorKeywords": null,
                "AminerCitationCount": 72,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 5,
                "DownloadsXplore": 234,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2472,
                "i": [
                    2472
                ]
            }
        },
        {
            "name": "Joshua Shrestha",
            "value": 39,
            "numPapers": 29,
            "cluster": "1",
            "visible": 1,
            "index": 1217,
            "x": 209.74060265973094,
            "y": -278.8527919815989,
            "vy": 0,
            "vx": 0,
            "r": 1.0449050086355787,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Visual Analysis of High-Dimensional Event Sequence Data via Dynamic Hierarchical Aggregation",
                "DOI": "10.1109/tvcg.2019.2934661",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934661",
                "FirstPage": 440,
                "LastPage": 450,
                "PaperType": "J",
                "Abstract": "Temporal event data are collected across a broad range of domains, and a variety of visual analytics techniques have been developed to empower analysts working with this form of data. These techniques generally display aggregate statistics computed over sets of event sequences that share common patterns. Such techniques are often hindered, however, by the high-dimensionality of many real-world event sequence datasets which can prevent effective aggregation. A common coping strategy for this challenge is to group event types together prior to visualization, as a pre-process, so that each group can be represented within an analysis as a single event type. However, computing these event groupings as a pre-process also places significant constraints on the analysis. This paper presents a new visual analytics approach for dynamic hierarchical dimension aggregation. The approach leverages a predefined hierarchy of dimensions to computationally quantify the informativeness, with respect to a measure of interest, of alternative levels of grouping within the hierarchy at runtime. This information is then interactively visualized, enabling users to dynamically explore the hierarchy to select the most appropriate level of grouping to use at any individual step within an analysis. Key contributions include an algorithm for interactively determining the most informative set of event groupings for a specific analysis context, and a scented scatter-plus-focus visualization design with an optimization-based layout algorithm that supports interactive hierarchical exploration of alternative event type groupings. We apply these techniques to high-dimensional event sequence data from the medical domain and report findings from domain expert interviews.",
                "AuthorNamesDeduped": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorNames": "David Gotz;Jonathan Zhang;Wenyuan Wang;Joshua Shrestha;David Borland",
                "AuthorAffiliation": "School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Biostatistics, University of North Carolina, Chapel Hill;School of Information and Library Science, University of North Carolina, Chapel Hill;Dept. of Computer Science, University of North Carolina, Chapel Hill;RENCI, University of North Carolina, Chapel Hill",
                "InternalReferences": "0.1109/tvcg.2019.2934209;10.1109/tvcg.2017.2745278;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2014.2346682;10.1109/tvcg.2017.2745320;10.1109/tvcg.2018.2864886;10.1109/tvcg.2013.200;10.1109/vast.2011.6102443;10.1109/infvis.2005.1532152;10.1109/infvis.2000.885091;10.1109/tvcg.2017.2744686;10.1109/tvcg.2009.108;10.1109/tvcg.2007.70589;10.1109/vast.2014.7042487;10.1109/tvcg.2012.238",
                "AuthorKeywords": "Temporal event sequence visualization,visual analytics,hierarchical aggregation,medical informatics",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1035,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 627,
                "i": [
                    627
                ]
            }
        },
        {
            "name": "Megan Monroe",
            "value": 178,
            "numPapers": 2,
            "cluster": "1",
            "visible": 1,
            "index": 1218,
            "x": 33.720001165589096,
            "y": 347.43770883626416,
            "vy": 0,
            "vx": 0,
            "r": 1.204951065054692,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Temporal Event Sequence Simplification",
                "DOI": "10.1109/tvcg.2013.200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.200",
                "FirstPage": 2227,
                "LastPage": 2236,
                "PaperType": "J",
                "Abstract": "Electronic Health Records (EHRs) have emerged as a cost-effective data source for conducting medical research. The difficulty in using EHRs for research purposes, however, is that both patient selection and record analysis must be conducted across very large, and typically very noisy datasets. Our previous work introduced EventFlow, a visualization tool that transforms an entire dataset of temporal event records into an aggregated display, allowing researchers to analyze population-level patterns and trends. As datasets become larger and more varied, however, it becomes increasingly difficult to provide a succinct, summarizing display. This paper presents a series of user-driven data simplifications that allow researchers to pare event records down to their core elements. Furthermore, we present a novel metric for measuring visual complexity, and a language for codifying disjoint strategies into an overarching simplification framework. These simplifications were used by real-world researchers to gain new and valuable insights from initially overwhelming datasets.",
                "AuthorNamesDeduped": "Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman",
                "AuthorNames": "Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman",
                "AuthorAffiliation": "University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA",
                "InternalReferences": "0.1109/tvcg.2009.117;10.1109/tvcg.2012.213;10.1109/vast.2010.5652890",
                "AuthorKeywords": "Event sequences, simplification, electronic heath records, temporal query",
                "AminerCitationCount": 318,
                "CitationCountCrossRef": 193,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 2567,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1367,
                "i": [
                    1367
                ]
            }
        },
        {
            "name": "Rongjian Lan",
            "value": 178,
            "numPapers": 2,
            "cluster": "1",
            "visible": 1,
            "index": 1219,
            "x": -259.66136366527934,
            "y": -233.5079789203949,
            "vy": 0,
            "vx": 0,
            "r": 1.204951065054692,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Temporal Event Sequence Simplification",
                "DOI": "10.1109/tvcg.2013.200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.200",
                "FirstPage": 2227,
                "LastPage": 2236,
                "PaperType": "J",
                "Abstract": "Electronic Health Records (EHRs) have emerged as a cost-effective data source for conducting medical research. The difficulty in using EHRs for research purposes, however, is that both patient selection and record analysis must be conducted across very large, and typically very noisy datasets. Our previous work introduced EventFlow, a visualization tool that transforms an entire dataset of temporal event records into an aggregated display, allowing researchers to analyze population-level patterns and trends. As datasets become larger and more varied, however, it becomes increasingly difficult to provide a succinct, summarizing display. This paper presents a series of user-driven data simplifications that allow researchers to pare event records down to their core elements. Furthermore, we present a novel metric for measuring visual complexity, and a language for codifying disjoint strategies into an overarching simplification framework. These simplifications were used by real-world researchers to gain new and valuable insights from initially overwhelming datasets.",
                "AuthorNamesDeduped": "Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman",
                "AuthorNames": "Megan Monroe;Rongjian Lan;Hanseung Lee;Catherine Plaisant;Ben Shneiderman",
                "AuthorAffiliation": "University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA;University of Maryland, USA",
                "InternalReferences": "0.1109/tvcg.2009.117;10.1109/tvcg.2012.213;10.1109/vast.2010.5652890",
                "AuthorKeywords": "Event sequences, simplification, electronic heath records, temporal query",
                "AminerCitationCount": 318,
                "CitationCountCrossRef": 193,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 2567,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1367,
                "i": [
                    1367
                ]
            }
        },
        {
            "name": "Lee Byron",
            "value": 121,
            "numPapers": 3,
            "cluster": "1",
            "visible": 1,
            "index": 1220,
            "x": 349.34172531107106,
            "y": -3.2185333125788445,
            "vy": 0,
            "vx": 0,
            "r": 1.1393206678180772,
            "node": {
                "Conference": "InfoVis",
                "Year": 2008,
                "Title": "Stacked Graphs - Geometry & Aesthetics",
                "DOI": "10.1109/tvcg.2008.166",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.166",
                "FirstPage": 1245,
                "LastPage": 1252,
                "PaperType": "J",
                "Abstract": "In February 2008, the New York Times published an unusual chart of box office revenues for 7500 movies over 21 years. The chart was based on a similar visualization, developed by the first author, that displayed trends in music listening. This paper describes the design decisions and algorithms behind these graphics, and discusses the reaction on the Web. We suggest that this type of complex layered graph is effective for displaying large data sets to a mass audience. We provide a mathematical analysis of how this layered graph relates to traditional stacked graphs and to techniques such as ThemeRiver, showing how each method is optimizing a different ldquoenergy functionrdquo. Finally, we discuss techniques for coloring and ordering the layers of such graphs. Throughout the paper, we emphasize the interplay between considerations of aesthetics and legibility.",
                "AuthorNamesDeduped": "Lee Byron;Martin Wattenberg",
                "AuthorNames": "Lee Byron;Martin Wattenberg",
                "AuthorAffiliation": "The New York Times;Visual Communication Laboratory at IBM",
                "InternalReferences": "0.1109/tvcg.2006.163;10.1109/infvis.2005.1532122;10.1109/tvcg.2007.70577;10.1109/infvis.2000.885098",
                "AuthorKeywords": "Streamgraph, ThemeRiver, listening history, lastfm, aesthetics, communication-minded visualization, time series",
                "AminerCitationCount": 557,
                "CitationCountCrossRef": 249,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 2955,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1964,
                "i": [
                    1964
                ]
            }
        },
        {
            "name": "Marc Weber",
            "value": 85,
            "numPapers": 2,
            "cluster": "1",
            "visible": 1,
            "index": 1221,
            "x": -255.5242439109496,
            "y": 238.44781561955554,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "InfoVis",
                "Year": 2001,
                "Title": "Visualizing time-series on spirals",
                "DOI": "10.1109/infvis.2001.963273",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2001.963273",
                "FirstPage": 7,
                "LastPage": 13,
                "PaperType": "C",
                "Abstract": null,
                "AuthorNamesDeduped": "Marc Weber;Marc Alexa;Wolfgang Müller 0004",
                "AuthorNames": "M. Weber;M. Alexa;W. Muller",
                "AuthorAffiliation": "Crcp Technische Universität Darmstadt, Germany;Crcp Technische Universität Darmstadt, Germany;Crcp Technische Universität Darmstadt, Germany",
                "InternalReferences": "0.1109/visual.1991.175794;10.1109/infvis.2000.885098;10.1109/infvis.1995.528685",
                "AuthorKeywords": "Information Visualization, Graph Drawing, Visualization of Time-Series Data, Data Mining ",
                "AminerCitationCount": 517,
                "CitationCountCrossRef": 134,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 878,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2841,
                "i": [
                    2841
                ]
            }
        },
        {
            "name": "Marc Alexa",
            "value": 102,
            "numPapers": 3,
            "cluster": "1",
            "visible": 1,
            "index": 1222,
            "x": 27.35763141998158,
            "y": -348.57073887962605,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "InfoVis",
                "Year": 2001,
                "Title": "Visualizing time-series on spirals",
                "DOI": "10.1109/infvis.2001.963273",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2001.963273",
                "FirstPage": 7,
                "LastPage": 13,
                "PaperType": "C",
                "Abstract": null,
                "AuthorNamesDeduped": "Marc Weber;Marc Alexa;Wolfgang Müller 0004",
                "AuthorNames": "M. Weber;M. Alexa;W. Muller",
                "AuthorAffiliation": "Crcp Technische Universität Darmstadt, Germany;Crcp Technische Universität Darmstadt, Germany;Crcp Technische Universität Darmstadt, Germany",
                "InternalReferences": "0.1109/visual.1991.175794;10.1109/infvis.2000.885098;10.1109/infvis.1995.528685",
                "AuthorKeywords": "Information Visualization, Graph Drawing, Visualization of Time-Series Data, Data Mining ",
                "AminerCitationCount": 517,
                "CitationCountCrossRef": 134,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 878,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2841,
                "i": [
                    2841
                ]
            }
        },
        {
            "name": "Wolfgang Müller 0004",
            "value": 85,
            "numPapers": 2,
            "cluster": "1",
            "visible": 1,
            "index": 1223,
            "x": 215.37151749595353,
            "y": 275.61768711657487,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "InfoVis",
                "Year": 2001,
                "Title": "Visualizing time-series on spirals",
                "DOI": "10.1109/infvis.2001.963273",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2001.963273",
                "FirstPage": 7,
                "LastPage": 13,
                "PaperType": "C",
                "Abstract": null,
                "AuthorNamesDeduped": "Marc Weber;Marc Alexa;Wolfgang Müller 0004",
                "AuthorNames": "M. Weber;M. Alexa;W. Muller",
                "AuthorAffiliation": "Crcp Technische Universität Darmstadt, Germany;Crcp Technische Universität Darmstadt, Germany;Crcp Technische Universität Darmstadt, Germany",
                "InternalReferences": "0.1109/visual.1991.175794;10.1109/infvis.2000.885098;10.1109/infvis.1995.528685",
                "AuthorKeywords": "Information Visualization, Graph Drawing, Visualization of Time-Series Data, Data Mining ",
                "AminerCitationCount": 517,
                "CitationCountCrossRef": 134,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 878,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2841,
                "i": [
                    2841
                ]
            }
        },
        {
            "name": "Shusen Liu 0001",
            "value": 76,
            "numPapers": 28,
            "cluster": "11",
            "visible": 1,
            "index": 1224,
            "x": -345.12628097420685,
            "y": -57.77413072399082,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "NLIZE: A Perturbation-Driven Visual Interrogation Tool for Analyzing and Interpreting Natural Language Inference Models",
                "DOI": "10.1109/tvcg.2018.2865230",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865230",
                "FirstPage": 651,
                "LastPage": 660,
                "PaperType": "J",
                "Abstract": "With the recent advances in deep learning, neural network models have obtained state-of-the-art performances for many linguistic tasks in natural language processing. However, this rapid progress also brings enormous challenges. The opaque nature of a neural network model leads to hard-to-debug-systems and difficult-to-interpret mechanisms. Here, we introduce a visualization system that, through a tight yet flexible integration between visualization elements and the underlying model, allows a user to interrogate the model by perturbing the input, internal state, and prediction while observing changes in other parts of the pipeline. We use the natural language inference problem as an example to illustrate how a perturbation-driven paradigm can help domain experts assess the potential limitation of a model, probe its inner states, and interpret and form hypotheses about fundamental model mechanisms such as attention.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Zhimin Li;Tao Li 0039;Vivek Srikumar;Valerio Pascucci;Peer-Timo Bremer",
                "AuthorNames": "Shusen Liu;Zhimin Li;Tao Li;Vivek Srikumar;Valerio Pascucci;Peer-Timo Bremer",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;School of Computing, University of Utah;School of Computing, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/tvcg.2017.2744683;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2745141;10.1109/tvcg.2017.2744358;10.1109/tvcg.2017.2744158;10.1109/visual.2005.1532820;10.1109/tvcg.2017.2744878",
                "AuthorKeywords": "Natural Language Processing,Interpretable Machine Learning,Natural Language Inference,Attention Visualization",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1186,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 677,
                "i": [
                    677
                ]
            }
        },
        {
            "name": "Jim Gaffney",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1225,
            "x": 293.6310681272246,
            "y": -190.60638979442734,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "J. Luc Peterson",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1226,
            "x": -87.79744410675013,
            "y": 339.0303951098221,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Peter B. Robinson",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1227,
            "x": -164.3395929073516,
            "y": -309.42284693126004,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Brian K. Spears",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1228,
            "x": 330.3255011544291,
            "y": 117.19668633146267,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Di Wang",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1229,
            "x": -322.8682956811696,
            "y": 136.7701123927914,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Dan Maljovec",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1230,
            "x": 145.74538354097115,
            "y": -319.0741029549331,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Rushil Anirudh",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1231,
            "x": 108.10725117316497,
            "y": 333.86048320186416,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Jayaraman J. Thiagarajan",
            "value": 25,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 1232,
            "x": -305.3583413107647,
            "y": -173.22321839735727,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Visual Exploration of Semantic Relationships in Neural Word Embeddings",
                "DOI": "10.1109/tvcg.2017.2745141",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745141",
                "FirstPage": 553,
                "LastPage": 562,
                "PaperType": "J",
                "Abstract": "Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Peer-Timo Bremer;Jayaraman J. Thiagarajan;Vivek Srikumar;Bei Wang 0001;Yarden Livnat;Valerio Pascucci",
                "AuthorNames": "Shusen Liu;Peer-Timo Bremer;Jayaraman J. Thiagarajan;Vivek Srikumar;Bei Wang;Yarden Livnat;Valerio Pascucci",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;School of Computing, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/visual.1990.146402;10.1109/tvcg.2013.196",
                "AuthorKeywords": "Natural Language Processing,Word Embedding,High-Dimensional Data",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2314,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 781,
                "i": [
                    781
                ]
            }
        },
        {
            "name": "Sam Ade Jacobs",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1233,
            "x": 342.3111244755395,
            "y": -78.56904008763053,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Brian C. Van Essen",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1234,
            "x": -199.41773078603478,
            "y": 289.27939547805425,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "David Hysom",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1235,
            "x": -48.38057887210714,
            "y": -348.15128836182674,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Jae-Seung Yeom",
            "value": 0,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1236,
            "x": 270.95673755608186,
            "y": 224.12596095268518,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications",
                "DOI": "10.1109/tvcg.2019.2934594",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934594",
                "FirstPage": 291,
                "LastPage": 300,
                "PaperType": "J",
                "Abstract": "With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Jim Gaffney;J. Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom",
                "AuthorNames": "Shusen Liu;Luc Peterson;Peter B. Robinson;Harsh Bhatia;Valerio Pascucci;Brian K. Spears;Peer-Timo Bremer;Di Wang;Dan Maljovec;Rushil Anirudh;Jayaraman J. Thiagarajan;Sam Ade Jacobs;Brian C. Van Essen;David Hysom;Jae-Seung Yeom;Jim Gaffney",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;SCI Institute, University of Utah;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;SCI Institute, University of Utah;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory",
                "InternalReferences": "0.1109/infvis.2004.68;10.1109/tvcg.2011.245;10.1109/tvcg.2011.244;10.1109/tvcg.2010.197;10.1109/tvcg.2010.213;10.1109/tvcg.2010.213;10.1109/tvcg.2008.110;10.1109/visual.2005.1532839;10.1109/tvcg.2013.179;10.1109/vast.2018.8802509;10.1109/tvcg.2018.2865230;10.1109/tvcg.2018.2864812;10.1109/visual.1998.745348;10.1109/tvcg.2013.148;10.1109/tvcg.2018.2864504",
                "AuthorKeywords": "Model Evaluation,Deep Learning,High-Dimensional Space,Topological Data Analysis,Inertial Confinement Fusion",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 996,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 639,
                "i": [
                    639
                ]
            }
        },
        {
            "name": "Zhen Cao",
            "value": 8,
            "numPapers": 13,
            "cluster": "11",
            "visible": 1,
            "index": 1237,
            "x": -351.33195789011035,
            "y": 17.772320194664907,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "ICE: An Interactive Configuration Explorer for High Dimensional Categorical Parameter Spaces",
                "DOI": "10.1109/vast47406.2019.8986923",
                "Link": "http://dx.doi.org/10.1109/VAST47406.2019.8986923",
                "FirstPage": 23,
                "LastPage": 34,
                "PaperType": "C",
                "Abstract": "There are many applications where users seek to explore the impact of the settings of several categorical variables with respect to one dependent numerical variable. For example, a computer systems analyst might want to study how the type of file system or storage device affects system performance. A usual choice is the method of Parallel Sets designed to visualize multivariate categorical variables, However, we found that the magnitude of the parameter impacts on the numerical variable cannot be easily observed here. We also attempted a dimension reduction approach based on Multiple Correspondence Analysis but found that the SVD-generated 2D layout resulted in a loss of information. We hence propose a novel approach, the Interactive Configuration Explorer (ICE), which directly addresses the need of analysts to learn how the dependent numerical variable is affected by the parameter settings given multiple optimization objectives. No information is lost as ICE shows the complete distribution and statistics of the dependent variable in context with each categorical variable. Analysts can interactively filter the variables to optimize for certain goals such as achieving a system with maximum performance, low variance, etc. Our system was developed in tight collaboration with a group of systems performance researchers and its final effectiveness was evaluated with expert interviews, a comparative user study, and two case studies.",
                "AuthorNamesDeduped": "Anjul Kumar Tyagi;Zhen Cao;Tyler Estro;Erez Zadok;Klaus Mueller 0001",
                "AuthorNames": "Anjul Tyagi;Zhen Cao;Tyler Estro;Erez Zadok;Klaus Mueller",
                "AuthorAffiliation": "Department of Computer Science, Stony Brook University;Department of Computer Science, Stony Brook University;Department of Computer Science, Stony Brook University;Department of Computer Science, Stony Brook University;Department of Computer Science, Stony Brook University",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2017.2745278;10.1109/tvcg.2014.2346448;10.1109/visual.1997.663916;10.1109/tvcg.2013.182;10.1109/tvcg.2015.2467132;10.1109/tvcg.2015.2467132;10.1109/tvcg.2009.111;10.1109/tvcg.2015.2467324;10.1109/tvcg.2014.2346321;10.1109/tvcg.2017.2744686;10.1109/tvcg.2018.2864510;10.1109/tvcg.2010.183;10.1109/tvcg.2010.223",
                "AuthorKeywords": "Data Clustering,Illustrative Visualization,User Interfaces,High Dimensional Data",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 237,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 641,
                "i": [
                    641
                ]
            }
        },
        {
            "name": "Michael Schwärzler",
            "value": 31,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1238,
            "x": 247.15602236251976,
            "y": -250.527245244779,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "LiteVis: Integrated Visualization for Simulation-Based Decision Support in Lighting Design",
                "DOI": "10.1109/tvcg.2015.2468011",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2468011",
                "FirstPage": 290,
                "LastPage": 299,
                "PaperType": "J",
                "Abstract": "State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles. On the one side, they decouple analysis and simulation specification. On the other side, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback of lighting designers. This feedback indicates that LiteVis successfully supports lighting designers to achieve key tasks more efficiently and with greater certainty.",
                "AuthorNamesDeduped": "Johannes Sorger;Thomas Ortner;Christian Luksch;Michael Schwärzler;M. Eduard Gröller;Harald Piringer",
                "AuthorNames": "Johannes Sorger;Thomas Ortner;Christian Luksch;Michael Schwärzler;Eduard Gröller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;VRVis Research Center;VRVis Research Center;VRVis Research Center;TU Wien;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2014.2346626;10.1109/tvcg.2011.185;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/infvis.2003.1249032;10.1109/tvcg.2013.173;10.1109/tvcg.2009.110;10.1109/tvcg.2014.2346321;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Integrating Spatial and Non-Spatial Data Visualization, Visualization in Physical Sciences and Engineering, Coordinated and Multiple Views, Visual Knowledge Discovery",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 823,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1131,
                "i": [
                    1131
                ]
            }
        },
        {
            "name": "Christian Luksch",
            "value": 31,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1239,
            "x": -13.021690168298143,
            "y": 351.82443858430423,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "LiteVis: Integrated Visualization for Simulation-Based Decision Support in Lighting Design",
                "DOI": "10.1109/tvcg.2015.2468011",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2468011",
                "FirstPage": 290,
                "LastPage": 299,
                "PaperType": "J",
                "Abstract": "State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles. On the one side, they decouple analysis and simulation specification. On the other side, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback of lighting designers. This feedback indicates that LiteVis successfully supports lighting designers to achieve key tasks more efficiently and with greater certainty.",
                "AuthorNamesDeduped": "Johannes Sorger;Thomas Ortner;Christian Luksch;Michael Schwärzler;M. Eduard Gröller;Harald Piringer",
                "AuthorNames": "Johannes Sorger;Thomas Ortner;Christian Luksch;Michael Schwärzler;Eduard Gröller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;VRVis Research Center;VRVis Research Center;VRVis Research Center;TU Wien;VRVis Research Center",
                "InternalReferences": "0.1109/tvcg.2014.2346626;10.1109/tvcg.2011.185;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/infvis.2003.1249032;10.1109/tvcg.2013.173;10.1109/tvcg.2009.110;10.1109/tvcg.2014.2346321;10.1109/tvcg.2009.111",
                "AuthorKeywords": "Integrating Spatial and Non-Spatial Data Visualization, Visualization in Physical Sciences and Engineering, Coordinated and Multiple Views, Visual Knowledge Discovery",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 823,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1131,
                "i": [
                    1131
                ]
            }
        },
        {
            "name": "Ji Qi",
            "value": 0,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1240,
            "x": -228.14417955879026,
            "y": -268.3285920908289,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "STBins: Visual Tracking and Comparison of Multiple Data Sequences Using Temporal Binning",
                "DOI": "10.1109/tvcg.2019.2934289",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934289",
                "FirstPage": 1054,
                "LastPage": 1063,
                "PaperType": "J",
                "Abstract": "While analyzing multiple data sequences, the following questions typically arise: how does a single sequence change over time, how do multiple sequences compare within a period, and how does such comparison change over time. This paper presents a visual technique named STBins to answer these questions. STBins is designed for visual tracking of individual data sequences and also for comparison of sequences. The latter is done by showing the similarity of sequences within temporal windows. A perception study is conducted to examine the readability of alternative visual designs based on sequence tracking and comparison tasks. Also, two case studies based on real-world datasets are presented in detail to demonstrate usage of our technique.",
                "AuthorNamesDeduped": "Ji Qi;Vincent Bloemen;Shihan Wang 0001;Jarke J. van Wijk;Huub van de Wetering",
                "AuthorNames": "Ji Qi;Vincent Bloemen;Shihan Wang;Jarke van Wijk;Huub van de Wetering",
                "AuthorAffiliation": "Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands;Faculty of Electrical Engineering, Mathematics Computer Science, University of Twente, Enschede, Netherlands;University of Amsterdam, Amsterdam, Netherlands;Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands;Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands",
                "InternalReferences": "0.1109/tvcg.2011.232;10.1109/tvcg.2013.124;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2744199;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346919;10.1109/tvcg.2009.117;10.1109/vast.2016.7883511;10.1109/tvcg.2012.225;10.1109/tvcg.2012.189",
                "AuthorKeywords": "Visualization,time series data,data sequence",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 644,
                "i": [
                    644
                ]
            }
        },
        {
            "name": "Vincent Bloemen",
            "value": 0,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1241,
            "x": 349.6206114404507,
            "y": 43.765603571816065,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "STBins: Visual Tracking and Comparison of Multiple Data Sequences Using Temporal Binning",
                "DOI": "10.1109/tvcg.2019.2934289",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934289",
                "FirstPage": 1054,
                "LastPage": 1063,
                "PaperType": "J",
                "Abstract": "While analyzing multiple data sequences, the following questions typically arise: how does a single sequence change over time, how do multiple sequences compare within a period, and how does such comparison change over time. This paper presents a visual technique named STBins to answer these questions. STBins is designed for visual tracking of individual data sequences and also for comparison of sequences. The latter is done by showing the similarity of sequences within temporal windows. A perception study is conducted to examine the readability of alternative visual designs based on sequence tracking and comparison tasks. Also, two case studies based on real-world datasets are presented in detail to demonstrate usage of our technique.",
                "AuthorNamesDeduped": "Ji Qi;Vincent Bloemen;Shihan Wang 0001;Jarke J. van Wijk;Huub van de Wetering",
                "AuthorNames": "Ji Qi;Vincent Bloemen;Shihan Wang;Jarke van Wijk;Huub van de Wetering",
                "AuthorAffiliation": "Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands;Faculty of Electrical Engineering, Mathematics Computer Science, University of Twente, Enschede, Netherlands;University of Amsterdam, Amsterdam, Netherlands;Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands;Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands",
                "InternalReferences": "0.1109/tvcg.2011.232;10.1109/tvcg.2013.124;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2744199;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346919;10.1109/tvcg.2009.117;10.1109/vast.2016.7883511;10.1109/tvcg.2012.225;10.1109/tvcg.2012.189",
                "AuthorKeywords": "Visualization,time series data,data sequence",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 644,
                "i": [
                    644
                ]
            }
        },
        {
            "name": "Shihan Wang 0001",
            "value": 0,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1242,
            "x": -287.4783071677602,
            "y": 203.9760351314804,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "STBins: Visual Tracking and Comparison of Multiple Data Sequences Using Temporal Binning",
                "DOI": "10.1109/tvcg.2019.2934289",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934289",
                "FirstPage": 1054,
                "LastPage": 1063,
                "PaperType": "J",
                "Abstract": "While analyzing multiple data sequences, the following questions typically arise: how does a single sequence change over time, how do multiple sequences compare within a period, and how does such comparison change over time. This paper presents a visual technique named STBins to answer these questions. STBins is designed for visual tracking of individual data sequences and also for comparison of sequences. The latter is done by showing the similarity of sequences within temporal windows. A perception study is conducted to examine the readability of alternative visual designs based on sequence tracking and comparison tasks. Also, two case studies based on real-world datasets are presented in detail to demonstrate usage of our technique.",
                "AuthorNamesDeduped": "Ji Qi;Vincent Bloemen;Shihan Wang 0001;Jarke J. van Wijk;Huub van de Wetering",
                "AuthorNames": "Ji Qi;Vincent Bloemen;Shihan Wang;Jarke van Wijk;Huub van de Wetering",
                "AuthorAffiliation": "Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands;Faculty of Electrical Engineering, Mathematics Computer Science, University of Twente, Enschede, Netherlands;University of Amsterdam, Amsterdam, Netherlands;Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands;Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands",
                "InternalReferences": "0.1109/tvcg.2011.232;10.1109/tvcg.2013.124;10.1109/tvcg.2017.2745278;10.1109/tvcg.2017.2745083;10.1109/tvcg.2011.239;10.1109/tvcg.2014.2346433;10.1109/vast.2016.7883512;10.1109/tvcg.2017.2744199;10.1109/tvcg.2014.2346682;10.1109/tvcg.2018.2864885;10.1109/tvcg.2016.2598797;10.1109/tvcg.2013.200;10.1109/tvcg.2008.125;10.1109/tvcg.2014.2346919;10.1109/tvcg.2009.117;10.1109/vast.2016.7883511;10.1109/tvcg.2012.225;10.1109/tvcg.2012.189",
                "AuthorKeywords": "Visualization,time series data,data sequence",
                "AminerCitationCount": 3,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 732,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 644,
                "i": [
                    644
                ]
            }
        },
        {
            "name": "Yan Lyu",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1243,
            "x": 74.22357545964236,
            "y": -344.7330283654103,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Xu Liu 0014",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1244,
            "x": 178.2052717331632,
            "y": 304.45505600418176,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Hanyi Chen",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1245,
            "x": -337.19484909153005,
            "y": -104.16157519037576,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Arpan Mangal",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1246,
            "x": 319.125154814073,
            "y": -151.02693655402638,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Kai Liu 0001",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1247,
            "x": -133.349184839628,
            "y": 327.0596197982972,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Chao Chen 0004",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1248,
            "x": -122.64718765650707,
            "y": -331.3723998162031,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Brian Y. Lim",
            "value": 0,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1249,
            "x": 314.4008948882452,
            "y": 161.5613731480102,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2019,
                "Title": "OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling",
                "DOI": "10.1109/tvcg.2019.2934657",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934657",
                "FirstPage": 811,
                "LastPage": 821,
                "PaperType": "J",
                "Abstract": "OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.",
                "AuthorNamesDeduped": "Yan Lyu;Xu Liu 0014;Hanyi Chen;Arpan Mangal;Kai Liu 0001;Chao Chen 0004;Brian Y. Lim",
                "AuthorNames": "Yan Lyu;Xu Liu;Hanyi Chen;Arpan Mangal;Kai Liu;Chao Chen;Brian Lim",
                "AuthorAffiliation": "National University of Singapore;Southeast University, China;Zhejiang University, China;Indian Institute of Technology, Delhi;Chongqing University, China;Chongqing University, China;National University of Singapore",
                "InternalReferences": "0.1109/tvcg.2016.2598416;10.1109/tvcg.2017.2744322;10.1109/vast.2009.5332584;10.1109/tvcg.2016.2598958;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/tvcg.2014.2346271;10.1109/tvcg.2007.70539;10.1109/tvcg.2006.147;10.1109/tvcg.2015.2467771;10.1109/tvcg.2017.2744338;10.1109/tvcg.2011.223;10.1109/vast.2011.6102455;10.1109/tvcg.2011.190;10.1109/tvcg.2015.2467691;10.1109/infvis.2003.1249008;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864503",
                "AuthorKeywords": "OD Visualization,Edge Bundling,Trajectory",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 720,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 646,
                "i": [
                    646
                ]
            }
        },
        {
            "name": "Ronell Sicat",
            "value": 129,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 1250,
            "x": -341.0989869373784,
            "y": 93.28172977756213,
            "vy": 0,
            "vx": 0,
            "r": 1.14853195164076,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "DXR: A Toolkit for Building Immersive Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865152",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865152",
                "FirstPage": 715,
                "LastPage": 725,
                "PaperType": "J",
                "Abstract": "This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging, and often require complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.",
                "AuthorNamesDeduped": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorNames": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorAffiliation": "Harvard University, Cambridge, MA, US;Harvard University, Cambridge, MA, US;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;Monash University, Clayton, VIC, AU;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;The University of Edinburgh, Edinburgh, Edinburgh, GB;Harvard University, Cambridge, MA, US",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346322;10.1109/tvcg.2016.2599107;10.1109/infvis.2004.64;10.1109/tvcg.2010.144;10.1109/tvcg.2016.2598620;10.1109/tvcg.2015.2467449;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2017.2744079;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Augmented Reality,Virtual Reality,Immersive Visualization,Immersive Analytics,Visualization Toolkit",
                "AminerCitationCount": 137,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 4869,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 649,
                "i": [
                    649
                ]
            }
        },
        {
            "name": "Jiabao Li",
            "value": 79,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1251,
            "x": 188.58023095365212,
            "y": -299.3117045714505,
            "vy": 0,
            "vx": 0,
            "r": 1.0909614277489925,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "DXR: A Toolkit for Building Immersive Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865152",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865152",
                "FirstPage": 715,
                "LastPage": 725,
                "PaperType": "J",
                "Abstract": "This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging, and often require complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.",
                "AuthorNamesDeduped": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorNames": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorAffiliation": "Harvard University, Cambridge, MA, US;Harvard University, Cambridge, MA, US;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;Monash University, Clayton, VIC, AU;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;The University of Edinburgh, Edinburgh, Edinburgh, GB;Harvard University, Cambridge, MA, US",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346322;10.1109/tvcg.2016.2599107;10.1109/infvis.2004.64;10.1109/tvcg.2010.144;10.1109/tvcg.2016.2598620;10.1109/tvcg.2015.2467449;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2017.2744079;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Augmented Reality,Virtual Reality,Immersive Visualization,Immersive Analytics,Visualization Toolkit",
                "AminerCitationCount": 137,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 4869,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 649,
                "i": [
                    649
                ]
            }
        },
        {
            "name": "Junyoung Choi",
            "value": 81,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 1252,
            "x": 63.15417435228306,
            "y": 348.22629174414936,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "DXR: A Toolkit for Building Immersive Data Visualizations",
                "DOI": "10.1109/tvcg.2018.2865152",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865152",
                "FirstPage": 715,
                "LastPage": 725,
                "PaperType": "J",
                "Abstract": "This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging, and often require complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.",
                "AuthorNamesDeduped": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorNames": "Ronell Sicat;Jiabao Li;Junyoung Choi;Maxime Cordeil;Won-Ki Jeong;Benjamin Bach;Hanspeter Pfister",
                "AuthorAffiliation": "Harvard University, Cambridge, MA, US;Harvard University, Cambridge, MA, US;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;Monash University, Clayton, VIC, AU;Ulsan National Institute of Science and Technology, Ulsan, Ulsan, KR;The University of Edinburgh, Edinburgh, Edinburgh, GB;Harvard University, Cambridge, MA, US",
                "InternalReferences": "0.1109/tvcg.2017.2745941;10.1109/tvcg.2016.2598609;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346322;10.1109/tvcg.2016.2599107;10.1109/infvis.2004.64;10.1109/tvcg.2010.144;10.1109/tvcg.2016.2598620;10.1109/tvcg.2015.2467449;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2017.2744079;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Augmented Reality,Virtual Reality,Immersive Visualization,Immersive Analytics,Visualization Toolkit",
                "AminerCitationCount": 137,
                "CitationCountCrossRef": 109,
                "PubsCitedCrossRef": 72,
                "DownloadsXplore": 4869,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 649,
                "i": [
                    649
                ]
            }
        },
        {
            "name": "Richard Alligier",
            "value": 47,
            "numPapers": 14,
            "cluster": "3",
            "visible": 1,
            "index": 1253,
            "x": -281.90387210154114,
            "y": -214.1966547221454,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "FiberClay: Sculpting Three Dimensional Trajectories to Reveal Structural Insights",
                "DOI": "10.1109/tvcg.2018.2865191",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865191",
                "FirstPage": 704,
                "LastPage": 714,
                "PaperType": "J",
                "Abstract": "Visualizing 3D trajectories to extract insights about their similarities and spatial configuration is a critical task in several domains. Air traffic controllers for example deal with large quantities of aircrafts routes to optimize safety in airspace and neuroscientists attempt to understand neuronal pathways in the human brain by visualizing bundles of fibers from DTI images. Extracting insights from masses of 3D trajectories is challenging as the multiple three dimensional lines have complex geometries, may overlap, cross or even merge with each other, making it impossible to follow individual ones in dense areas. As trajectories are inherently spatial and three dimensional, we propose FiberClay: a system to display and interact with 3D trajectories in immersive environments. FiberClay renders a large quantity of trajectories in real time using GP-GPU techniques. FiberClay also introduces a new set of interactive techniques for composing complex queries in 3D space leveraging immersive environment controllers and user position. These techniques enable an analyst to select and compare sets of trajectories with specific geometries and data properties. We conclude by discussing insights found using FiberClay with domain experts in air traffic control and neurology.",
                "AuthorNamesDeduped": "Christophe Hurter;Nathalie Henry Riche;Steven Mark Drucker;Maxime Cordeil;Richard Alligier;Romain Vuillemot",
                "AuthorNames": "Christophe Hurter;Nathalie Henry Riche;Steven M. Drucker;Maxime Cordeil;Richard Alligier;Romain Vuillemot",
                "AuthorAffiliation": "ENAC, Toulouse University, France;Microsoft Research;Microsoft Research;Monash University;ENAC, Toulouse University, France;Universite de Lyon, Lyon, Auvergne-Rhône-Alpes, FR",
                "InternalReferences": "0.1109/tvcg.2016.2599217;10.1109/tvcg.2011.192;10.1109/visual.1991.175794;10.1109/tvcg.2008.153;10.1109/tvcg.2011.233;10.1109/tvcg.2013.226;10.1109/tvcg.2017.2744338;10.1109/tvcg.2009.145;10.1109/infvis.2004.27;10.1109/tvcg.2011.224;10.1109/tvcg.2015.2467112;10.1109/tvcg.2013.153;10.1109/tvcg.2012.265;10.1109/tvcg.2017.2744079;10.1109/tvcg.2012.217",
                "AuthorKeywords": "Immersive Analytics,3D Visualization,Dynamic Queries,Bimanual Interaction,Multidimensional Data",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1442,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 653,
                "i": [
                    653
                ]
            }
        },
        {
            "name": "Benjamin Tissoires",
            "value": 64,
            "numPapers": 4,
            "cluster": "3",
            "visible": 1,
            "index": 1254,
            "x": 352.6955033919407,
            "y": -32.49433623118925,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "FromDaDy: Spreading Aircraft Trajectories Across Views to Support Iterative Queries",
                "DOI": "10.1109/tvcg.2009.145",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.145",
                "FirstPage": 1017,
                "LastPage": 1024,
                "PaperType": "J",
                "Abstract": "When displaying thousands of aircraft trajectories on a screen, the visualization is spoiled by a tangle of trails. The visual analysis is therefore difficult, especially if a specific class of trajectories in an erroneous dataset has to be studied. We designed FromDaDy, a trajectory visualization tool that tackles the difficulties of exploring the visualization of multiple trails. This multidimensional data exploration is based on scatterplots, brushing, pick and drop, juxtaposed views and rapid visual design. Users can organize the workspace composed of multiple juxtaposed views. They can define the visual configuration of the views by connecting data dimensions from the dataset to Bertin's visual variables. They can then brush trajectories, and with a pick and drop operation they can spread the brushed information across views. They can then repeat these interactions, until they extract a set of relevant data, thus formulating complex queries. Through two real-world scenarios, we show how FromDaDy supports iterative queries and the extraction of trajectories in a dataset that contains up to 5 million data.",
                "AuthorNamesDeduped": "Christophe Hurter;Benjamin Tissoires;Stéphane Conversy",
                "AuthorNames": "Christophe Hurter;Benjamin Tissoires;Stéphane Conversy",
                "AuthorAffiliation": "DSNA DTI Research and Development, ENAC and IRIT IHCS, France;DSNA DTI Research and Development, ENAC and IRIT IHCS, France;ENAC and IRIT IHCS, France",
                "InternalReferences": "0.1109/infvis.2000.885086;10.1109/visual.1995.485139;10.1109/visual.1994.346302;10.1109/tvcg.2008.153;10.1109/infvis.2004.64",
                "AuthorKeywords": "visualization, iterative exploration, direct manipulation, trajectories",
                "AminerCitationCount": 204,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 1057,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1825,
                "i": [
                    1825
                ]
            }
        },
        {
            "name": "Stéphane Conversy",
            "value": 64,
            "numPapers": 4,
            "cluster": "3",
            "visible": 1,
            "index": 1255,
            "x": -238.21196516426758,
            "y": 262.30718566706815,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "FromDaDy: Spreading Aircraft Trajectories Across Views to Support Iterative Queries",
                "DOI": "10.1109/tvcg.2009.145",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.145",
                "FirstPage": 1017,
                "LastPage": 1024,
                "PaperType": "J",
                "Abstract": "When displaying thousands of aircraft trajectories on a screen, the visualization is spoiled by a tangle of trails. The visual analysis is therefore difficult, especially if a specific class of trajectories in an erroneous dataset has to be studied. We designed FromDaDy, a trajectory visualization tool that tackles the difficulties of exploring the visualization of multiple trails. This multidimensional data exploration is based on scatterplots, brushing, pick and drop, juxtaposed views and rapid visual design. Users can organize the workspace composed of multiple juxtaposed views. They can define the visual configuration of the views by connecting data dimensions from the dataset to Bertin's visual variables. They can then brush trajectories, and with a pick and drop operation they can spread the brushed information across views. They can then repeat these interactions, until they extract a set of relevant data, thus formulating complex queries. Through two real-world scenarios, we show how FromDaDy supports iterative queries and the extraction of trajectories in a dataset that contains up to 5 million data.",
                "AuthorNamesDeduped": "Christophe Hurter;Benjamin Tissoires;Stéphane Conversy",
                "AuthorNames": "Christophe Hurter;Benjamin Tissoires;Stéphane Conversy",
                "AuthorAffiliation": "DSNA DTI Research and Development, ENAC and IRIT IHCS, France;DSNA DTI Research and Development, ENAC and IRIT IHCS, France;ENAC and IRIT IHCS, France",
                "InternalReferences": "0.1109/infvis.2000.885086;10.1109/visual.1995.485139;10.1109/visual.1994.346302;10.1109/tvcg.2008.153;10.1109/infvis.2004.64",
                "AuthorKeywords": "visualization, iterative exploration, direct manipulation, trajectories",
                "AminerCitationCount": 204,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 1057,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1825,
                "i": [
                    1825
                ]
            }
        },
        {
            "name": "William Wright",
            "value": 182,
            "numPapers": 8,
            "cluster": "3",
            "visible": 1,
            "index": 1256,
            "x": -1.5364800695317242,
            "y": -354.46810749205054,
            "vy": 0,
            "vx": 0,
            "r": 1.2095567069660333,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "GeoTime Information Visualization",
                "DOI": "10.1109/infvis.2004.27",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.27",
                "FirstPage": 25,
                "LastPage": 32,
                "PaperType": "C",
                "Abstract": "Analyzing observations over time and geography is a common task but typically requires multiple, separate tools. The objective of our research has been to develop a method to visualize, and work with, the spatial interconnectedness of information over time and geography within a single, highly interactive 3D view. A novel visualization technique for displaying and tracking events, objects and activities within a combined temporal and geospatial display has been developed. This technique has been implemented as a demonstratable prototype called GeoTime in order to determine potential utility. Initial evaluations have been with military users. However, we believe the concept is applicable to a variety of government and business analysis tasks",
                "AuthorNamesDeduped": "Thomas Kapler;William Wright",
                "AuthorNames": "T. Kapler;W. Wright",
                "AuthorAffiliation": "Oculus Info, Inc.;Oculus Info, Inc.",
                "InternalReferences": "0.1109/infvis.2003.1249006",
                "AuthorKeywords": "3-D visualization, spatiotemporal, geospatial, interactive visualization, visual data analysis, link analysis",
                "AminerCitationCount": 359,
                "CitationCountCrossRef": 66,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 827,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2458,
                "i": [
                    2458
                ]
            }
        },
        {
            "name": "Kartik Chanana",
            "value": 82,
            "numPapers": 5,
            "cluster": "5",
            "visible": 1,
            "index": 1257,
            "x": 240.66843106640195,
            "y": 260.4394484098685,
            "vy": 0,
            "vx": 0,
            "r": 1.0944156591824985,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "IDMVis: Temporal Event Sequence Visualization for Type 1 Diabetes Treatment Decision Support",
                "DOI": "10.1109/tvcg.2018.2865076",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865076",
                "FirstPage": 512,
                "LastPage": 522,
                "PaperType": "J",
                "Abstract": "Type 1 diabetes is a chronic, incurable autoimmune disease affecting millions of Americans in which the body stops producing insulin and blood glucose levels rise. The goal of intensive diabetes management is to lower average blood glucose through frequent adjustments to insulin protocol, diet, and behavior. Manual logs and medical device data are collected by patients, but these multiple sources are presented in disparate visualization designs to the clinician-making temporal inference difficult. We conducted a design study over 18 months with clinicians performing intensive diabetes management. We present a data abstraction and novel hierarchical task abstraction for this domain. We also contribute IDMVis: a visualization tool for temporal event sequences with multidimensional, interrelated data. IDMVis includes a novel technique for folding and aligning records by dual sentinel events and scaling the intermediate timeline. We validate our design decisions based on our domain abstractions, best practices, and through a qualitative evaluation with six clinicians. The results of this study indicate that IDMVis accurately reflects the workflow of clinicians. Using IDMVis, clinicians are able to identify issues of data quality such as missing or conflicting data, reconstruct patient records when data is missing, differentiate between days with different patterns, and promote educational interventions after identifying discrepancies.",
                "AuthorNamesDeduped": "Yixuan Zhang 0001;Kartik Chanana;Cody Dunne",
                "AuthorNames": "Yixuan Zhang;Kartik Chanana;Cody Dunne",
                "AuthorAffiliation": "Northeastern University, Boston, MA, US;Northeastern University, Boston, MA, US;Northeastern University, Boston, MA, US",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2013.124;10.1109/tvcg.2017.2744319;10.1109/tvcg.2009.111;10.1109/tvcg.2012.213;10.1109/visual.1992.235203",
                "AuthorKeywords": "Design study,task analysis,event sequence visualization,time series data,qualitative evaluation,health applications",
                "AminerCitationCount": 62,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 2132,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 657,
                "i": [
                    657
                ]
            }
        },
        {
            "name": "Mingwei Li",
            "value": 30,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1258,
            "x": -353.52621394936745,
            "y": -29.48247022598475,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Looks Good To Me: Visualizations As Sanity Checks",
                "DOI": "10.1109/tvcg.2018.2864907",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864907",
                "FirstPage": 830,
                "LastPage": 839,
                "PaperType": "J",
                "Abstract": "Famous examples such as Anscombe's Quartet highlight that one of the core benefits of visualizations is allowing people to discover visual patterns that might otherwise be hidden by summary statistics. This visual inspection is particularly important in exploratory data analysis, where analysts can use visualizations such as histograms and dot plots to identify data quality issues. Yet, these visualizations are driven by parameters such as histogram bin size or mark opacity that have a great deal of impact on the final visual appearance of the chart, but are rarely optimized to make important features visible. In this paper, we show that data flaws have varying impact on the visual features of visualizations, and that the adversarial or merely uncritical setting of design parameters of visualizations can obscure the visual signatures of these flaws. Drawing on the framework of Algebraic Visualization Design, we present the results of a crowdsourced study showing that common visualization types can appear to reasonably summarize distributional data while hiding large and important flaws such as missing data and extraneous modes. We make use of these results to propose additional best practices for visualizations of distributions for data quality tasks.",
                "AuthorNamesDeduped": "Michael Correll;Mingwei Li;Gordon L. Kindlmann;Carlos Scheidegger",
                "AuthorNames": "Michael Correll;Mingwei Li;Gordon Kindlmann;Carlos Scheidegger",
                "AuthorAffiliation": "Tableau Research;University of Arizona;University of Chicago;University of Arizona",
                "InternalReferences": "0.1109/tvcg.2016.2598862;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346298;10.1109/vast.2016.7883519;10.1109/tvcg.2016.2598618;10.1109/tvcg.2014.2346978;10.1109/tvcg.2014.2346979;10.1109/tvcg.2012.230;10.1109/tvcg.2014.2346325;10.1109/tvcg.2016.2599030;10.1109/tvcg.2017.2744359;10.1109/tvcg.2015.2469125;10.1109/tvcg.2010.161;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Graphical perception,data quality,univariate visualizations",
                "AminerCitationCount": 52,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 1235,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 662,
                "i": [
                    662
                ]
            }
        },
        {
            "name": "Heike Hofmann",
            "value": 104,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1259,
            "x": 280.70580778135877,
            "y": -217.15029237331188,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Product Plots",
                "DOI": "10.1109/tvcg.2011.227",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.227",
                "FirstPage": 2223,
                "LastPage": 2230,
                "PaperType": "J",
                "Abstract": "We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams.",
                "AuthorNamesDeduped": "Hadley Wickham;Heike Hofmann",
                "AuthorNames": "Hadley Wickham;Heike Hofmann",
                "AuthorAffiliation": "Rice University, USA;Iowa State University, USA",
                "InternalReferences": "0.1109/tvcg.2007.70594;10.1109/tvcg.2006.200;10.1109/infvis.2002.1173141;10.1109/infvis.2000.885091;10.1109/visual.1990.146386;10.1109/tvcg.2010.186;10.1109/infvis.2005.1532128;10.1109/infvis.2005.1532142;10.1109/tvcg.2010.209;10.1109/tvcg.2009.128;10.1109/infvis.2005.1532145",
                "AuthorKeywords": "Statistics, joint distribution, conditional distribution, treemap, bar chart, mosaic plot",
                "AminerCitationCount": 65,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 986,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1567,
                "i": [
                    1567
                ]
            }
        },
        {
            "name": "Dianne Cook",
            "value": 60,
            "numPapers": 1,
            "cluster": "11",
            "visible": 1,
            "index": 1260,
            "x": -60.324745302782034,
            "y": 349.8727270082,
            "vy": 0,
            "vx": 0,
            "r": 1.0690846286701208,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "Graphical inference for infovis",
                "DOI": "10.1109/tvcg.2010.161",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.161",
                "FirstPage": 973,
                "LastPage": 979,
                "PaperType": "J",
                "Abstract": "How do we know if what we see is really there? When visualizing data, how do we avoid falling into the trap of apophenia where we see patterns in random noise? Traditionally, infovis has been concerned with discovering new relationships, and statistics with preventing spurious relationships from being reported. We pull these opposing poles closer with two new techniques for rigorous statistical inference of visual discoveries. The \"Rorschach\" helps the analyst calibrate their understanding of uncertainty and \"line-up\" provides a protocol for assessing the significance of visual discoveries, protecting against the discovery of spurious structure.",
                "AuthorNamesDeduped": "Hadley Wickham;Dianne Cook;Heike Hofmann;Andreas Buja",
                "AuthorNames": "Hadley Wickham;Dianne Cook;Heike Hofmann;Andreas Buja",
                "AuthorAffiliation": "Rice University, USA;Iowa State University, USA;Iowa State University, USA;Wharton School, University of Pennsylvania, USA",
                "InternalReferences": "0.1109/tvcg.2007.70577",
                "AuthorKeywords": "Statistics, visual testing, permutation tests, null hypotheses, data plots",
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 82,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 1830,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1694,
                "i": [
                    1694
                ]
            }
        },
        {
            "name": "Marcus A. Magnor",
            "value": 228,
            "numPapers": 22,
            "cluster": "2",
            "visible": 1,
            "index": 1261,
            "x": -191.9301288257025,
            "y": -298.85251487807363,
            "vy": 0,
            "vx": 0,
            "r": 1.2625215889464594,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Perception-based visual quality measures",
                "DOI": "10.1109/vast.2011.6102437",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102437",
                "FirstPage": 13,
                "LastPage": 20,
                "PaperType": "C",
                "Abstract": "In recent years diverse quality measures to support the exploration of high-dimensional data sets have been proposed. Such measures can be very useful to rank and select information-bearing projections of very high dimensional data, when the visual exploration of all possible projections becomes unfeasible. But even though a ranking of the low dimensional projections may support the user in the visual exploration task, different measures deliver different distances between the views that do not necessarily match the expectations of human perception. As an alternative solution, we propose a perception-based approach that, similar to the existing measures, can be used to select information bearing projections of the data. Specifically, we construct a perceptual embedding for the different projections based on the data from a psychophysics study and multi-dimensional scaling. This embedding together with a ranking function is then used to estimate the value of the projections for a specific user task in a perceptual sense.",
                "AuthorNamesDeduped": "Georgia Albuquerque;Martin Eisemann;Marcus A. Magnor",
                "AuthorNames": "Georgia Albuquerque;Martin Eisemann;Marcus Magnor",
                "AuthorAffiliation": "Technical University of Braunschweig, Germany;Technical University of Braunschweig, Germany;Technical University of Braunschweig, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/vast.2010.5652433;10.1109/vast.2006.261423;10.1109/vast.2009.5332628;10.1109/tvcg.2010.184;10.1109/tvcg.2009.153",
                "AuthorKeywords": null,
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 634,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1590,
                "i": [
                    1590
                ]
            }
        },
        {
            "name": "Ricardo Langner",
            "value": 57,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 1262,
            "x": 343.5313559512502,
            "y": 90.75355352985038,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "VisTiles: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration",
                "DOI": "10.1109/tvcg.2017.2744019",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744019",
                "FirstPage": 626,
                "LastPage": 636,
                "PaperType": "J",
                "Abstract": "We present VisTiles, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated &amp; multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.",
                "AuthorNamesDeduped": "Ricardo Langner;Tom Horak;Raimund Dachselt",
                "AuthorNames": "Ricardo Langner;Tom Horak;Raimund Dachselt",
                "AuthorAffiliation": "Interactive Media Lab, Technische Universität Dresden, Germany;Interactive Media Lab, Technische Universität Dresden, Germany;Interactive Media Lab, Technische Universität Dresden, Germany",
                "InternalReferences": "0.1109/vast.2015.7347628;10.1109/tvcg.2007.70568;10.1109/tvcg.2012.204;10.1109/tvcg.2016.2598586;10.1109/tvcg.2014.2346573;10.1109/tvcg.2009.162;10.1109/tvcg.2012.237;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Mobile devices,coordinated & multiple views,multi-display environment,cross-device interaction",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1247,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 790,
                "i": [
                    790
                ]
            }
        },
        {
            "name": "Ulrike Kister",
            "value": 18,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1263,
            "x": -314.7370494383929,
            "y": 165.19863713364782,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Multiple Coordinated Views at Large Displays for Multiple Users: Empirical Findings on User Behavior, Movements, and Distances",
                "DOI": "10.1109/tvcg.2018.2865235",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865235",
                "FirstPage": 608,
                "LastPage": 618,
                "PaperType": "J",
                "Abstract": "Interactive wall-sized displays benefit data visualization. Due to their sheer display size, they make it possible to show large amounts of data in multiple coordinated views (MCV) and facilitate collaborative data analysis. In this work, we propose a set of important design considerations and contribute a fundamental input vocabulary and interaction mapping for MCV functionality. We also developed a fully functional application with more than 45 coordinated views visualizing a real-world, multivariate data set of crime activities, which we used in a comprehensive qualitative user study investigating how pairs of users behave. Most importantly, we found that flexible movement is essential and-depending on user goals-is connected to collaboration, perception, and interaction. Therefore, we argue that for future systems interaction from the distance is required and needs good support. We show that our consistent design for both direct touch at the large display and distant interaction using mobile phones enables the seamless exploration of large-scale MCV at wall-sized displays. Our MCV application builds on design aspects such as simplicity, flexibility, and visual consistency and, therefore, supports realistic workflows. We believe that in the future, many visual data analysis scenarios will benefit from wall-sized displays presenting numerous coordinated visualizations, for which our findings provide a valuable foundation.",
                "AuthorNamesDeduped": "Ricardo Langner;Ulrike Kister;Raimund Dachselt",
                "AuthorNames": "Ricardo Langner;Ulrike Kister;Raimund Dachselt",
                "AuthorAffiliation": "Technische Universitat Dresden, Dresden, Sachsen, DE;Technische Universitat Dresden, Dresden, Sachsen, DE;Technische Universitat Dresden, Dresden, Sachsen, DE",
                "InternalReferences": "0.1109/vast.2016.7883506;10.1109/tvcg.2012.251;10.1109/vast.2010.5652880;10.1109/tvcg.2013.166;10.1109/tvcg.2013.134;10.1109/tvcg.2017.2743859;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2017.2744198;10.1109/tvcg.2017.2745219;10.1109/tvcg.2009.162;10.1109/tvcg.2012.237;10.1109/tvcg.2012.275;10.1109/infvis.1996.559216;10.1109/tvcg.2006.184",
                "AuthorKeywords": "Multiple coordinated views,wall-sized displays,mobile devices,distant interaction,physical navigation,user behavior,user movements,multi-user,collaborative data analysis",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 1515,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 668,
                "i": [
                    668
                ]
            }
        },
        {
            "name": "Mathieu Le Goc",
            "value": 12,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1264,
            "x": 120.53489954285858,
            "y": -334.54646611822545,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Dynamic Composite Data Physicalization Using Wheeled Micro-Robots",
                "DOI": "10.1109/tvcg.2018.2865159",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865159",
                "FirstPage": 737,
                "LastPage": 747,
                "PaperType": "J",
                "Abstract": "This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques, and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations based on previous work in the fields of Information Visualization and Human Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.",
                "AuthorNamesDeduped": "Mathieu Le Goc;Charles Perin;Sean Follmer;Jean-Daniel Fekete;Pierre Dragicevic",
                "AuthorNames": "Mathieu Le Goc;Charles Perin;Sean Follmer;Jean-Daniel Fekete;Pierre Dragicevic",
                "AuthorAffiliation": "Stanford University, Stanford, CA, US;University of Victoria, Victoria, BC, CA;Stanford University, Stanford, CA, US;Inria, Le Chesnay, ÃŽle-de-France, FR;Inria, Le Chesnay, ÃŽle-de-France, FR",
                "InternalReferences": "0.1109/tvcg.2014.2346984;10.1109/tvcg.2014.2346424;10.1109/tvcg.2008.153;10.1109/tvcg.2007.70539;10.1109/tvcg.2014.2346292;10.1109/tvcg.2013.227;10.1109/tvcg.2013.134;10.1109/tvcg.2014.2346250;10.1109/tvcg.2017.2743859;10.1109/tvcg.2016.2598920;10.1109/tvcg.2012.199;10.1109/tvcg.2014.2346279;10.1109/tvcg.2007.70541;10.1109/tvcg.2016.2598498",
                "AuthorKeywords": "information visualization,data physicalization,tangible user interfaces",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1497,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 670,
                "i": [
                    670
                ]
            }
        },
        {
            "name": "Sean Follmer",
            "value": 12,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1265,
            "x": 137.15840929739014,
            "y": 328.234018284226,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Dynamic Composite Data Physicalization Using Wheeled Micro-Robots",
                "DOI": "10.1109/tvcg.2018.2865159",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865159",
                "FirstPage": 737,
                "LastPage": 747,
                "PaperType": "J",
                "Abstract": "This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques, and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations based on previous work in the fields of Information Visualization and Human Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.",
                "AuthorNamesDeduped": "Mathieu Le Goc;Charles Perin;Sean Follmer;Jean-Daniel Fekete;Pierre Dragicevic",
                "AuthorNames": "Mathieu Le Goc;Charles Perin;Sean Follmer;Jean-Daniel Fekete;Pierre Dragicevic",
                "AuthorAffiliation": "Stanford University, Stanford, CA, US;University of Victoria, Victoria, BC, CA;Stanford University, Stanford, CA, US;Inria, Le Chesnay, ÃŽle-de-France, FR;Inria, Le Chesnay, ÃŽle-de-France, FR",
                "InternalReferences": "0.1109/tvcg.2014.2346984;10.1109/tvcg.2014.2346424;10.1109/tvcg.2008.153;10.1109/tvcg.2007.70539;10.1109/tvcg.2014.2346292;10.1109/tvcg.2013.227;10.1109/tvcg.2013.134;10.1109/tvcg.2014.2346250;10.1109/tvcg.2017.2743859;10.1109/tvcg.2016.2598920;10.1109/tvcg.2012.199;10.1109/tvcg.2014.2346279;10.1109/tvcg.2007.70541;10.1109/tvcg.2016.2598498",
                "AuthorKeywords": "information visualization,data physicalization,tangible user interfaces",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 92,
                "DownloadsXplore": 1497,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 670,
                "i": [
                    670
                ]
            }
        },
        {
            "name": "Yarden Livnat",
            "value": 254,
            "numPapers": 25,
            "cluster": "6",
            "visible": 1,
            "index": 1266,
            "x": -322.982771136535,
            "y": -149.43938419628424,
            "vy": 0,
            "vx": 0,
            "r": 1.2924582613701785,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Visual Exploration of Semantic Relationships in Neural Word Embeddings",
                "DOI": "10.1109/tvcg.2017.2745141",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745141",
                "FirstPage": 553,
                "LastPage": 562,
                "PaperType": "J",
                "Abstract": "Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.",
                "AuthorNamesDeduped": "Shusen Liu 0001;Peer-Timo Bremer;Jayaraman J. Thiagarajan;Vivek Srikumar;Bei Wang 0001;Yarden Livnat;Valerio Pascucci",
                "AuthorNames": "Shusen Liu;Peer-Timo Bremer;Jayaraman J. Thiagarajan;Vivek Srikumar;Bei Wang;Yarden Livnat;Valerio Pascucci",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;Lawrence Livermore National Laboratory;School of Computing, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/visual.1990.146402;10.1109/tvcg.2013.196",
                "AuthorKeywords": "Natural Language Processing,Word Embedding,High-Dimensional Data",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 64,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2314,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 781,
                "i": [
                    781
                ]
            }
        },
        {
            "name": "Sriram Karthik Badam",
            "value": 29,
            "numPapers": 32,
            "cluster": "5",
            "visible": 1,
            "index": 1267,
            "x": 339.2361445659933,
            "y": -108.02239684436043,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Elastic Documents: Coupling Text and Tables through Contextual Visualizations for Enhanced Document Reading",
                "DOI": "10.1109/tvcg.2018.2865119",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865119",
                "FirstPage": 661,
                "LastPage": 671,
                "PaperType": "J",
                "Abstract": "Today's data-rich documents are often complex datasets in themselves, consisting of information in different formats such as text, figures, and data tables. These additional media augment the textual narrative in the document. However, the static layout of a traditional for-print document often impedes deep understanding of its content because of the need to navigate to access content scattered throughout the text. In this paper, we seek to facilitate enhanced comprehension of such documents through a contextual visualization technique that couples text content with data tables contained in the document. We parse the text content and data tables, cross-link the components using a keyword-based matching algorithm, and generate on-demand visualizations based on the reader's current focus within a document. We evaluate this technique in a user study comparing our approach to a traditional reading experience. Results from our study show that (1) participants comprehend the content better with tighter coupling of text and data, (2) the contextual visualizations enable participants to develop better summaries that capture the main data-rich insights within the document, and (3) overall, our method enables participants to develop a more detailed understanding of the document content.",
                "AuthorNamesDeduped": "Sriram Karthik Badam;Zhicheng Liu 0001;Niklas Elmqvist",
                "AuthorNames": "Sriram Karthik Badam;Zhicheng Liu;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland at College Park, College Park, MD, US;Adobe Research, Seattle, WA, USA;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2016.2598594;10.1109/tvcg.2014.2346435;10.1109/tvcg.2011.255;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346279;10.1109/infvis.2000.885091;10.1109/tvcg.2009.139;10.1109/tvcg.2009.165;10.1109/tvcg.2009.171",
                "AuthorKeywords": "Document reading,contextual visualizations,visual aids,comprehension,summarization",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1150,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 679,
                "i": [
                    679
                ]
            }
        },
        {
            "name": "Andreas Mathisen",
            "value": 0,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 1268,
            "x": -177.24397207281984,
            "y": 308.9248684775142,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Vistrates: A Component Model for Ubiquitous Analytics",
                "DOI": "10.1109/tvcg.2018.2865144",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865144",
                "FirstPage": 586,
                "LastPage": 596,
                "PaperType": "J",
                "Abstract": "Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components-the building blocks of this model-can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic “anytime” and “anywhere” motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices..",
                "AuthorNamesDeduped": "Sriram Karthik Badam;Andreas Mathisen;Roman Rädle;Clemens Nylandsted Klokmose;Niklas Elmqvist",
                "AuthorNames": "Sriram Karthik Badam;Andreas Mathisen;Roman Rädle;Clemens N. Klokmose;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland at College Park, College Park, MD, US;Aarhus Universitet, Aarhus, DK;Aarhus Universitet, Aarhus, DK;Aarhus Universitet, Aarhus, DK;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2017.2743990;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2745278;10.1109/infvis.2000.885092;10.1109/tvcg.2013.197;10.1109/vast.2007.4389011;10.1109/tvcg.2008.137;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2013.191;10.1109/tvcg.2014.2346573;10.1109/tvcg.2013.200;10.1109/tvcg.2014.2346291;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346574;10.1109/infvis.2000.885086;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2007.70589",
                "AuthorKeywords": "Components,literate computing,development,exploration,dissemination,collaboration,heterogeneous devices",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 812,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 680,
                "i": [
                    680
                ]
            }
        },
        {
            "name": "Roman Rädle",
            "value": 0,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 1269,
            "x": -78.01229313013462,
            "y": -347.6551195086561,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Vistrates: A Component Model for Ubiquitous Analytics",
                "DOI": "10.1109/tvcg.2018.2865144",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865144",
                "FirstPage": 586,
                "LastPage": 596,
                "PaperType": "J",
                "Abstract": "Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components-the building blocks of this model-can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic “anytime” and “anywhere” motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices..",
                "AuthorNamesDeduped": "Sriram Karthik Badam;Andreas Mathisen;Roman Rädle;Clemens Nylandsted Klokmose;Niklas Elmqvist",
                "AuthorNames": "Sriram Karthik Badam;Andreas Mathisen;Roman Rädle;Clemens N. Klokmose;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland at College Park, College Park, MD, US;Aarhus Universitet, Aarhus, DK;Aarhus Universitet, Aarhus, DK;Aarhus Universitet, Aarhus, DK;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2017.2743990;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2745278;10.1109/infvis.2000.885092;10.1109/tvcg.2013.197;10.1109/vast.2007.4389011;10.1109/tvcg.2008.137;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2013.191;10.1109/tvcg.2014.2346573;10.1109/tvcg.2013.200;10.1109/tvcg.2014.2346291;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346574;10.1109/infvis.2000.885086;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2007.70589",
                "AuthorKeywords": "Components,literate computing,development,exploration,dissemination,collaboration,heterogeneous devices",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 812,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 680,
                "i": [
                    680
                ]
            }
        },
        {
            "name": "Clemens Nylandsted Klokmose",
            "value": 0,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 1270,
            "x": 292.4766216582926,
            "y": 203.73371292781172,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Vistrates: A Component Model for Ubiquitous Analytics",
                "DOI": "10.1109/tvcg.2018.2865144",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865144",
                "FirstPage": 586,
                "LastPage": 596,
                "PaperType": "J",
                "Abstract": "Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components-the building blocks of this model-can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic “anytime” and “anywhere” motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices..",
                "AuthorNamesDeduped": "Sriram Karthik Badam;Andreas Mathisen;Roman Rädle;Clemens Nylandsted Klokmose;Niklas Elmqvist",
                "AuthorNames": "Sriram Karthik Badam;Andreas Mathisen;Roman Rädle;Clemens N. Klokmose;Niklas Elmqvist",
                "AuthorAffiliation": "University of Maryland at College Park, College Park, MD, US;Aarhus Universitet, Aarhus, DK;Aarhus Universitet, Aarhus, DK;Aarhus Universitet, Aarhus, DK;University of Maryland at College Park, College Park, MD, US",
                "InternalReferences": "0.1109/tvcg.2016.2598647;10.1109/tvcg.2017.2743990;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2017.2745278;10.1109/infvis.2000.885092;10.1109/tvcg.2013.197;10.1109/vast.2007.4389011;10.1109/tvcg.2008.137;10.1109/tvcg.2017.2744019;10.1109/tvcg.2012.204;10.1109/tvcg.2013.191;10.1109/tvcg.2014.2346573;10.1109/tvcg.2013.200;10.1109/tvcg.2014.2346291;10.1109/tvcg.2016.2599030;10.1109/tvcg.2014.2346574;10.1109/infvis.2000.885086;10.1109/tvcg.2009.162;10.1109/tvcg.2007.70577;10.1109/tvcg.2007.70589",
                "AuthorKeywords": "Components,literate computing,development,exploration,dissemination,collaboration,heterogeneous devices",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 812,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 680,
                "i": [
                    680
                ]
            }
        },
        {
            "name": "Heather Richter Lipford",
            "value": 65,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 1271,
            "x": -353.4223099032271,
            "y": 47.356845995772034,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "Helping users recall their reasoning process",
                "DOI": "10.1109/vast.2010.5653598",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5653598",
                "FirstPage": 187,
                "LastPage": 194,
                "PaperType": "C",
                "Abstract": "The final product of an analyst's investigation using a visualization is often a report of the discovered knowledge, as well as the methods employed and reasoning behind the discovery. We believe that analysts may have difficulty keeping track of their knowledge discovery process and will require tools to assist in accurately recovering their reasoning. We first report on a study examining analysts' recall of their strategies and methods, demonstrating their lack of memory of the path of knowledge discovery. We then explore whether a tool visualizing the steps of the visual analysis can aid users in recalling their reasoning process. The results of our second study indicate that visualizations of interaction logs can serve as an effective memory aid, allowing analysts to recall additional details of their strategies and decisions.",
                "AuthorNamesDeduped": "Heather Richter Lipford;Felesia Stukes;Wenwen Dou;Matthew E. Hawkins;Remco Chang",
                "AuthorNames": "Heather Richter Lipford;Felesia Stukes;Wenwen Dou;Matthew E. Hawkins;Remco Chang",
                "AuthorAffiliation": "University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA and Tufts University, USA",
                "InternalReferences": "0.1109/tvcg.2008.137;10.1109/vast.2007.4388992;10.1109/vast.2008.4677365;10.1109/vast.2008.4677360;10.1109/vast.2007.4389009",
                "AuthorKeywords": "Visual analytics, visualization, reasoning process ",
                "AminerCitationCount": 51,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1732,
                "i": [
                    1732
                ]
            }
        },
        {
            "name": "Felesia Stukes",
            "value": 65,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 1272,
            "x": 228.70340377900308,
            "y": -273.76039359245937,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "Helping users recall their reasoning process",
                "DOI": "10.1109/vast.2010.5653598",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5653598",
                "FirstPage": 187,
                "LastPage": 194,
                "PaperType": "C",
                "Abstract": "The final product of an analyst's investigation using a visualization is often a report of the discovered knowledge, as well as the methods employed and reasoning behind the discovery. We believe that analysts may have difficulty keeping track of their knowledge discovery process and will require tools to assist in accurately recovering their reasoning. We first report on a study examining analysts' recall of their strategies and methods, demonstrating their lack of memory of the path of knowledge discovery. We then explore whether a tool visualizing the steps of the visual analysis can aid users in recalling their reasoning process. The results of our second study indicate that visualizations of interaction logs can serve as an effective memory aid, allowing analysts to recall additional details of their strategies and decisions.",
                "AuthorNamesDeduped": "Heather Richter Lipford;Felesia Stukes;Wenwen Dou;Matthew E. Hawkins;Remco Chang",
                "AuthorNames": "Heather Richter Lipford;Felesia Stukes;Wenwen Dou;Matthew E. Hawkins;Remco Chang",
                "AuthorAffiliation": "University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA;University of North Carolina, Charlotte, Charlotte, NC, USA and Tufts University, USA",
                "InternalReferences": "0.1109/tvcg.2008.137;10.1109/vast.2007.4388992;10.1109/vast.2008.4677365;10.1109/vast.2008.4677360;10.1109/vast.2007.4389009",
                "AuthorKeywords": "Visual analytics, visualization, reasoning process ",
                "AminerCitationCount": 51,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 465,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1732,
                "i": [
                    1732
                ]
            }
        },
        {
            "name": "Timothy Major",
            "value": 85,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 1273,
            "x": 16.290113628083184,
            "y": 356.48931568559544,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "InfoVis",
                "Year": 2018,
                "Title": "Graphicle: Exploring Units, Networks, and Context in a Blended Visualization Approach",
                "DOI": "10.1109/tvcg.2018.2865151",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865151",
                "FirstPage": 576,
                "LastPage": 585,
                "PaperType": "J",
                "Abstract": "Many real-world datasets are large, multivariate, and relational in nature and relevant associated decisions frequently require a simultaneous consideration of both attributes and connections. Existing visualization systems and approaches, however, often make an explicit trade-off between either affording rich exploration of individual data units and their attributes or exploration of the underlying network structure. In doing so, important analysis opportunities and insights are potentially missed. In this study, we aim to address this gap by (1) considering visualizations and interaction techniques that blend the spectrum between unit and network visualizations, (2) discussing the nature of different forms of contexts and the challenges in implementing them, and (3) demonstrating the value of our approach for visual exploration of multivariate, relational data for a real-world use case. Specifically, we demonstrate through a system called Graphicle how network structure can be layered on top of unit visualization techniques to create new opportunities for visual exploration of physician characteristics and referral data. We report on the design, implementation, and evaluation of the system and effectiveness of our blended approach.",
                "AuthorNamesDeduped": "Timothy Major;Rahul C. Basole",
                "AuthorNames": "Timothy Major;Rahul C. Basole",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, US;Georgia Institute of Technology, Atlanta, GA, US",
                "InternalReferences": "0.1109/tvcg.2012.252;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70539;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346292;10.1109/tvcg.2013.227;10.1109/tvcg.2006.166;10.1109/tvcg.2014.2346441;10.1109/tvcg.2009.108;10.1109/infvis.2003.1249004;10.1109/tvcg.2010.205;10.1109/tvcg.2007.70589;10.1109/vast.2009.5333880;10.1109/tvcg.2013.167",
                "AuthorKeywords": "Unit visualization,network visualization,context",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 722,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 689,
                "i": [
                    689
                ]
            }
        },
        {
            "name": "Rahul C. Basole",
            "value": 99,
            "numPapers": 67,
            "cluster": "1",
            "visible": 1,
            "index": 1274,
            "x": -252.9161367429654,
            "y": -251.9591787869885,
            "vy": 0,
            "vx": 0,
            "r": 1.1139896373056994,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Duet: Helping Data Analysis Novices Conduct Pairwise Comparisons by Minimal Specification",
                "DOI": "10.1109/tvcg.2018.2864526",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864526",
                "FirstPage": 427,
                "LastPage": 437,
                "PaperType": "J",
                "Abstract": "Data analysis novices often encounter barriers in executing low-level operations for pairwise comparisons. They may also run into barriers in interpreting the artifacts (e.g., visualizations) created as a result of the operations. We developed Duet, a visual analysis system designed to help data analysis novices conduct pairwise comparisons by addressing execution and interpretation barriers. To reduce the barriers in executing low-level operations during pairwise comparison, Duet employs minimal specification: when one object group (i.e. a group of records in a data table) is specified, Duet recommends object groups that are similar to or different from the specified one; when two object groups are specified, Duet recommends similar and different attributes between them. To lower the barriers in interpreting its recommendations, Duet explains the recommended groups and attributes using both visualizations and textual descriptions. We conducted a qualitative evaluation with eight participants to understand the effectiveness of Duet. The results suggest that minimal specification is easy to use and Duet's explanations are helpful for interpreting the recommendations despite some usability issues.",
                "AuthorNamesDeduped": "Po-Ming Law;Rahul C. Basole;Yanhong Wu",
                "AuthorNames": "Po-Ming Law;Rahul C. Basole;Yanhong Wu",
                "AuthorAffiliation": "Georgia Institute of Technology;Georgia Institute of Technology;Visa Research",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2016.2598468;10.1109/vast.2011.6102435;10.1109/tvcg.2017.2744199;10.1109/tvcg.2010.164;10.1109/tvcg.2017.2744684;10.1109/tvcg.2008.109;10.1109/tvcg.2015.2467195;10.1109/tvcg.2017.2745219;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Pairwise comparison,novices,data analysis,automatic insight generation",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 614,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 756,
                "i": [
                    756
                ]
            }
        },
        {
            "name": "Lars Linsen",
            "value": 32,
            "numPapers": 15,
            "cluster": "11",
            "visible": 1,
            "index": 1275,
            "x": 356.8283730870796,
            "y": 14.950323074365627,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Skeleton-Based Scagnostics",
                "DOI": "10.1109/tvcg.2017.2744339",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744339",
                "FirstPage": 542,
                "LastPage": 552,
                "PaperType": "J",
                "Abstract": "Scatterplot matrices (SPLOMs) are widely used for exploring multidimensional data. Scatterplot diagnostics (scagnostics) approaches measure characteristics of scatterplots to automatically find potentially interesting plots, thereby making SPLOMs more scalable with the dimension count. While statistical measures such as regression lines can capture orientation, and graph-theoretic scagnostics measures can capture shape, there is no scatterplot characterization measure that uses both descriptors. Based on well-known results in shape analysis, we propose a scagnostics approach that captures both scatterplot shape and orientation using skeletons (or medial axes). Our representation can handle complex spatial distributions, helps discovery of principal trends in a multiscale way, scales visually well with the number of samples, is robust to noise, and is automatic and fast to compute. We define skeleton-based similarity metrics for the visual exploration and analysis of SPLOMs. We perform a user study to measure the human perception of scatterplot similarity and compare the outcome to our results as well as to graph-based scagnostics and other visual quality metrics. Our skeleton-based metrics outperform previously defined measures both in terms of closeness to perceptually-based similarity and computation time efficiency.",
                "AuthorNamesDeduped": "José Matute;Alexandru C. Telea;Lars Linsen",
                "AuthorNames": "José Matute;Alexandru C. Telea;Lars Linsen",
                "AuthorAffiliation": "Institute of Computer Science, University of Münster, Germany;Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, Groningen, The Netherlands;Institute of Computer Science, University of Münster, Germany",
                "InternalReferences": "0.1109/vast.2011.6102437;10.1109/tvcg.2011.233;10.1109/tvcg.2010.213;10.1109/tvcg.2011.223;10.1109/tvcg.2011.220;10.1109/vast.2008.4677367;10.1109/vast.2009.5332628",
                "AuthorKeywords": "Multidimensional Data (primary keyword),High-Dimensional Data",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 702,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 806,
                "i": [
                    806
                ]
            }
        },
        {
            "name": "Brian Duffy",
            "value": 47,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1276,
            "x": -273.3200146063588,
            "y": 230.10034683932957,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "On Histograms and Isosurface Statistics",
                "DOI": "10.1109/tvcg.2006.168",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.168",
                "FirstPage": 1259,
                "LastPage": 1266,
                "PaperType": "J",
                "Abstract": "In this paper, we show that histograms represent spatial function distributions with a nearest neighbour interpolation. We confirm that this results in systematic underrepresentation of transitional features of the data, and provide new insight why this occurs. We further show that isosurface statistics, which use higher quality interpolation, give better representations of the function distribution. We also use our experimentally collected isosurface statistics to resolve some questions as to the formal complexity of isosurfaces",
                "AuthorNamesDeduped": "Hamish A. Carr;Brian Duffy;Brian Denby",
                "AuthorNames": "Carr Hamish;Brian Duffy;Brain Denby",
                "AuthorAffiliation": "University College Dublin, Ireland;University College Dublin, Ireland;University College Dublin, Ireland",
                "InternalReferences": "0.1109/visual.1994.346334;10.1109/visual.2001.964519;10.1109/visual.1994.346331;10.1109/visual.1991.175782;10.1109/visual.2001.964515;10.1109/visual.2001.964516;10.1109/visual.1997.663875",
                "AuthorKeywords": "histograms, isosurfaces, isosurface statistics",
                "AminerCitationCount": 76,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 461,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2288,
                "i": [
                    2288
                ]
            }
        },
        {
            "name": "Julian Heinrich",
            "value": 102,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 1277,
            "x": 46.125177971935805,
            "y": -354.4326000201692,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Continuous Parallel Coordinates",
                "DOI": "10.1109/tvcg.2009.131",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.131",
                "FirstPage": 1531,
                "LastPage": 1538,
                "PaperType": "J",
                "Abstract": "Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes,defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.",
                "AuthorNamesDeduped": "Julian Heinrich;Daniel Weiskopf",
                "AuthorNames": "Julian Heinrich;Daniel Weiskopf",
                "AuthorAffiliation": "VISUS (Visualization Research Center), Universität Stuttgart, Stuttgart, Germany;VISUS (Visualization Research Center), Universität Stuttgart, Stuttgart, Germany",
                "InternalReferences": "0.1109/tvcg.2006.168;10.1109/tvcg.2008.119;10.1109/tvcg.2008.131;10.1109/infvis.2005.1532139;10.1109/tvcg.2009.179;10.1109/tvcg.2006.138;10.1109/visual.1990.146402;10.1109/infvis.2005.1532138;10.1109/tvcg.2008.160;10.1109/infvis.2002.1173157;10.1109/visual.1999.809866;10.1109/tvcg.2006.170;10.1109/infvis.2004.68",
                "AuthorKeywords": "Parallel coordinates, integrating spatial and non-spatial data visualization, multi-variate visualization, interpolation",
                "AminerCitationCount": 128,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 1332,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1911,
                "i": [
                    1911
                ]
            }
        },
        {
            "name": "John M. Schreiner",
            "value": 43,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1278,
            "x": 205.48488797962477,
            "y": 292.61913951756657,
            "vy": 0,
            "vx": 0,
            "r": 1.04951065054692,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Revisiting Histograms and Isosurface Statistics",
                "DOI": "10.1109/tvcg.2008.160",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.160",
                "FirstPage": 1659,
                "LastPage": 1666,
                "PaperType": "J",
                "Abstract": "Recent results have shown a link between geometric properties of isosurfaces and statistical properties of the underlying sampled data. However, this has two defects: not all of the properties described converge to the same solution, and the statistics computed are not always invariant under isosurface-preserving transformations. We apply Federer's Coarea Formula from geometric measure theory to explain these discrepancies. We describe an improved substitute for histograms based on weighting with the inverse gradient magnitude, develop a statistical model that is invariant under isosurface-preserving transformations, and argue that this provides a consistent method for algorithm evaluation across multiple datasets based on histogram equalization. We use our corrected formulation to reevaluate recent results on average isosurface complexity, and show evidence that noise is one cause of the discrepancy between the expected figure and the observed one.",
                "AuthorNamesDeduped": "Carlos Eduardo Scheidegger;John M. Schreiner;Brian Duffy;Hamish A. Carr;Cláudio T. Silva",
                "AuthorNames": "Carlos E. Scheidegger;John M. Schreiner;Brian Duffy;Hamish Carr;Cláudio T. Silva",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;UCD School of Computer Science & Informatics, University College Dublin, Ireland;UCD School of Computer Science & Informatics, University College Dublin, Ireland;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/tvcg.2006.168;10.1109/tvcg.2008.119;10.1109/visual.2001.964519;10.1109/visual.2001.964515;10.1109/visual.1997.663875;10.1109/visual.2001.964516",
                "AuthorKeywords": "Isosurfaces, Histograms, Coarea Formula",
                "AminerCitationCount": 75,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 500,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2055,
                "i": [
                    2055
                ]
            }
        },
        {
            "name": "Noura Faraj",
            "value": 27,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 1279,
            "x": -349.31608156941877,
            "y": -76.99529308332534,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Persistence Atlas for Critical Point Variability in Ensembles",
                "DOI": "10.1109/tvcg.2018.2864432",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864432",
                "FirstPage": 1152,
                "LastPage": 1162,
                "PaperType": "J",
                "Abstract": "This paper presents a new approach for the visualization and analysis of the spatial variability of features of interest represented by critical points in ensemble data. Our framework, called Persistence Atlas, enables the visualization of the dominant spatial patterns of critical points, along with statistics regarding their occurrence in the ensemble. The persistence atlas represents in the geometrical domain each dominant pattern in the form of a confidence map for the appearance of critical points. As a by-product, our method also provides 2-dimensional layouts of the entire ensemble, highlighting the main trends at a global level. Our approach is based on the new notion of Persistence Map, a measure of the geometrical density in critical points which leverages the robustness to noise of topological persistence to better emphasize salient features. We show how to leverage spectral embedding to represent the ensemble members as points in a low-dimensional Euclidean space, where distances between points measure the dissimilarities between critical point layouts and where statistical tasks, such as clustering, can be easily carried out. Further, we show how the notion of mandatory critical point can be leveraged to evaluate for each cluster confidence regions for the appearance of critical points. Most of the steps of this framework can be trivially parallelized and we show how to efficiently implement them. Extensive experiments demonstrate the relevance of our approach. The accuracy of the confidence regions provided by the persistence atlas is quantitatively evaluated and compared to a baseline strategy using an off-the-shelf clustering approach. We illustrate the importance of the persistence atlas in a variety of real-life datasets, where clear trends in feature layouts are identified and analyzed. We provide a lightweight VTK-based C++ implementation of our approach that can be used for reproduction purposes.",
                "AuthorNamesDeduped": "Guillaume Favelier;Noura Faraj;Brian Summa;Julien Tierny",
                "AuthorNames": "Guillaume Favelier;Noura Faraj;Brian Summa;Julien Tierny",
                "AuthorAffiliation": "Sorbonne Université, CNRS (LIP6);Tulane University;Tulane University;Sorbonne Université, CNRS (LIP6)",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2015.2467958;10.1109/tvcg.2015.2467204;10.1109/tvcg.2014.2346403;10.1109/tvcg.2008.110;10.1109/tvcg.2015.2467432;10.1109/tvcg.2013.141;10.1109/tvcg.2011.249;10.1109/tvcg.2006.186;10.1109/tvcg.2014.2346455;10.1109/tvcg.2015.2467754;10.1109/tvcg.2010.181;10.1109/visual.1999.809897;10.1109/tvcg.2012.249;10.1109/tvcg.2014.2346332;10.1109/tvcg.2013.143;10.1109/tvcg.2007.70603",
                "AuthorKeywords": "Topological data analysis,scalar data,ensemble data",
                "AminerCitationCount": 27,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 87,
                "DownloadsXplore": 417,
                "Award": null,
                "GraphicsReplicabilityStamp": "X",
                "cluster": 1,
                "selected": true,
                "seqId": 696,
                "i": [
                    696
                ]
            }
        },
        {
            "name": "Johannes Weissenbock",
            "value": 33,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1280,
            "x": 309.70533537037966,
            "y": -179.25569793766863,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "PorosityAnalyzer: Visual Analysis and Evaluation of Segmentation Pipelines to Determine the Porosity in Fiber-Reinforced Polymers",
                "DOI": "10.1109/vast.2016.7883516",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883516",
                "FirstPage": 101,
                "LastPage": 110,
                "PaperType": "C",
                "Abstract": "In this paper we present PorosityAnalyzer, a novel tool for detailed evaluation and visual analysis of pore segmentation pipelines to determine the porosity in fiber-reinforced polymers (FRPs). The presented tool consists of two modules: the computation module and the analysis module. The computation module enables a convenient setup and execution of distributed off-line-computations on industrial 3D X-ray computed tomography datasets. It allows the user to assemble individual segmentation pipelines in the form of single pipeline steps, and to specify the parameter ranges as well as the sampling of the parameter-space of each pipeline segment. The result of a single segmentation run consists of the input parameters, the calculated 3D binary-segmentation mask, the resulting porosity value, and other derived results (e.g., segmentation pipeline run-time). The analysis module presents the data at different levels of detail by drill-down filtering in order to determine accurate and robust segmentation pipelines. Overview visualizations allow to initially compare and evaluate the segmentation pipelines. With a scatter plot matrix (SPLOM), the segmentation pipelines are examined in more detail based on their input and output parameters. Individual segmentation-pipeline runs are selected in the SPLOM and visually examined and compared in 2D slice views and 3D renderings by using aggregated segmentation masks and statistical contour renderings. PorosityAnalyzer has been thoroughly evaluated with the help of twelve domain experts. Two case studies demonstrate the applicability of our proposed concepts and visualization techniques, and show that our tool helps domain experts to gain new insights and improve their workflow efficiency.",
                "AuthorNamesDeduped": "Johannes Weissenbock;Artem Amirkhanov;M. Eduard Gröller;Johann Kastner;Christoph Heinzl",
                "AuthorNames": "Johannes Weissenböck;Artem Amirkhanov;Eduard Gröller;Johann Kastner;Christoph Heinzl",
                "AuthorAffiliation": "University of Applied Sciences, Wels, Upper Austria, Austria;University of Applied Sciences, Wels, Upper Austria, Austria;VrVis Research Center, Austria and TU Wien, Vienna, Austria;University of Applied Sciences, Wels, Upper Austria, Austria;University of Applied Sciences, Wels, Upper Austria, Austria",
                "InternalReferences": "0.1109/tvcg.2013.147;10.1109/tvcg.2008.153;10.1109/visual.1993.398859;10.1109/tvcg.2012.200;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2013.177;10.1109/tvcg.2011.248",
                "AuthorKeywords": null,
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 495,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 996,
                "i": [
                    996
                ]
            }
        },
        {
            "name": "Bernhard Fröhler",
            "value": 21,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1281,
            "x": -107.32347386181475,
            "y": 341.5137946821949,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Dynamic Volume Lines: Visual Comparison of 3D Volumes through Space-filling Curves",
                "DOI": "10.1109/tvcg.2018.2864510",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864510",
                "FirstPage": 1040,
                "LastPage": 1049,
                "PaperType": "J",
                "Abstract": "The comparison of many members of an ensemble is difficult, tedious, and error-prone, which is aggravated by often just subtle differences. In this paper, we introduce Dynamic Volume Lines for the interactive visual analysis and comparison of sets of 3D volumes. Each volume is linearized along a Hilbert space-filling curve into a 1D Hilbert line plot, which depicts the intensities over the Hilbert indices. We present a nonlinear scaling of these 1D Hilbert line plots based on the intensity variations in the ensemble of 3D volumes, which enables a more effective use of the available screen space. The nonlinear scaling builds the basis for our interactive visualization techniques. An interactive histogram heatmap of the intensity frequencies serves as overview visualization. When zooming in, the frequencies are replaced by detailed 1D Hilbert line plots and optional functional boxplots. To focus on important regions of the volume ensemble, nonlinear scaling is incorporated into the plots. An interactive scaling widget depicts the local ensemble variations. Our brushing and linking interface reveals, for example, regions with a high ensemble variation by showing the affected voxels in a 3D spatial view. We show the applicability of our concepts using two case studies on ensembles of 3D volumes resulting from tomographic reconstruction. In the first case study, we evaluate an artificial specimen from simulated industrial 3D X-ray computed tomography (XCT). In the second case study, a real-world XCT foam specimen is investigated. Our results show that Dynamic Volume Lines can identify regions with high local intensity variations, allowing the user to draw conclusions, for example, about the choice of reconstruction parameters. Furthermore, it is possible to detect ring artifacts in reconstructions volumes.",
                "AuthorNamesDeduped": "Johannes Weissenbock;Bernhard Fröhler;M. Eduard Gröller;Johann Kastner;Christoph Heinzl",
                "AuthorNames": "Johannes Weissenböck;Bernhard Fröhler;Eduard Gröller;Johann Kastner;Christoph Heinzl",
                "AuthorAffiliation": "University of Applied Sciences Upper Austria, Wels, Austria;University of Applied Sciences Upper Austria, Wels, Austria;Technische Universitat Wien, Wien, Wien, AT;University of Applied Sciences Upper Austria, Wels, Austria;University of Applied Sciences Upper Austria, Wels, Austria",
                "InternalReferences": "0.1109/tvcg.2014.2346448;10.1109/vast.2015.7347634;10.1109/tvcg.2009.155;10.1109/tvcg.2014.2346455;10.1109/visual.2005.1532847;10.1109/tvcg.2013.213;10.1109/vast.2014.7042491;10.1109/tvcg.2014.2346321;10.1109/vast.2016.7883516;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Ensemble data,comparative visualization,visual analysis,Hilbert curve,nonlinear scaling,X-ray computed tomography",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 861,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 698,
                "i": [
                    698
                ]
            }
        },
        {
            "name": "Johann Kastner",
            "value": 57,
            "numPapers": 27,
            "cluster": "6",
            "visible": 1,
            "index": 1282,
            "x": -151.61138342958802,
            "y": -324.4441221760173,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "PorosityAnalyzer: Visual Analysis and Evaluation of Segmentation Pipelines to Determine the Porosity in Fiber-Reinforced Polymers",
                "DOI": "10.1109/vast.2016.7883516",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883516",
                "FirstPage": 101,
                "LastPage": 110,
                "PaperType": "C",
                "Abstract": "In this paper we present PorosityAnalyzer, a novel tool for detailed evaluation and visual analysis of pore segmentation pipelines to determine the porosity in fiber-reinforced polymers (FRPs). The presented tool consists of two modules: the computation module and the analysis module. The computation module enables a convenient setup and execution of distributed off-line-computations on industrial 3D X-ray computed tomography datasets. It allows the user to assemble individual segmentation pipelines in the form of single pipeline steps, and to specify the parameter ranges as well as the sampling of the parameter-space of each pipeline segment. The result of a single segmentation run consists of the input parameters, the calculated 3D binary-segmentation mask, the resulting porosity value, and other derived results (e.g., segmentation pipeline run-time). The analysis module presents the data at different levels of detail by drill-down filtering in order to determine accurate and robust segmentation pipelines. Overview visualizations allow to initially compare and evaluate the segmentation pipelines. With a scatter plot matrix (SPLOM), the segmentation pipelines are examined in more detail based on their input and output parameters. Individual segmentation-pipeline runs are selected in the SPLOM and visually examined and compared in 2D slice views and 3D renderings by using aggregated segmentation masks and statistical contour renderings. PorosityAnalyzer has been thoroughly evaluated with the help of twelve domain experts. Two case studies demonstrate the applicability of our proposed concepts and visualization techniques, and show that our tool helps domain experts to gain new insights and improve their workflow efficiency.",
                "AuthorNamesDeduped": "Johannes Weissenbock;Artem Amirkhanov;M. Eduard Gröller;Johann Kastner;Christoph Heinzl",
                "AuthorNames": "Johannes Weissenböck;Artem Amirkhanov;Eduard Gröller;Johann Kastner;Christoph Heinzl",
                "AuthorAffiliation": "University of Applied Sciences, Wels, Upper Austria, Austria;University of Applied Sciences, Wels, Upper Austria, Austria;VrVis Research Center, Austria and TU Wien, Vienna, Austria;University of Applied Sciences, Wels, Upper Austria, Austria;University of Applied Sciences, Wels, Upper Austria, Austria",
                "InternalReferences": "0.1109/tvcg.2013.147;10.1109/tvcg.2008.153;10.1109/visual.1993.398859;10.1109/tvcg.2012.200;10.1109/tvcg.2011.253;10.1109/tvcg.2014.2346321;10.1109/tvcg.2013.177;10.1109/tvcg.2011.248",
                "AuthorKeywords": null,
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 495,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 996,
                "i": [
                    996
                ]
            }
        },
        {
            "name": "Jian Huang 0007",
            "value": 100,
            "numPapers": 55,
            "cluster": "6",
            "visible": 1,
            "index": 1283,
            "x": 331.0813723875585,
            "y": 136.87631226026963,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Scalable Data Servers for Large Multivariate Volume Visualization",
                "DOI": "10.1109/tvcg.2006.175",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.175",
                "FirstPage": 1291,
                "LastPage": 1298,
                "PaperType": "J",
                "Abstract": "Volumetric datasets with multiple variables on each voxel over multiple time steps are often complex, especially when considering the exponentially large attribute space formed by the variables in combination with the spatial and temporal dimensions. It is intuitive, practical, and thus often desirable, to interactively select a subset of the data from within that high-dimensional value space for efficient visualization. This approach is straightforward to implement if the dataset is small enough to be stored entirely in-core. However, to handle datasets sized at hundreds of gigabytes and beyond, this simplistic approach becomes infeasible and thus, more sophisticated solutions are needed. In this work, we developed a system that supports efficient visualization of an arbitrary subset, selected by range-queries, of a large multivariate time-varying dataset. By employing specialized data structures and schemes of data distribution, our system can leverage a large number of networked computers as parallel data servers, and guarantees a near optimal load-balance. We demonstrate our system of scalable data servers using two large time-varying simulation datasets",
                "AuthorNamesDeduped": "Markus Glatter;Jian Huang 0007;Jinzhu Gao;Colin Mollenhour",
                "AuthorNames": "Markus Glatter;Jian Huang;Jinzhu Gao;Colin Mollenhour",
                "AuthorAffiliation": "University of Tennessee, USA;University of Tennessee, USA;University of Tennessee, USA;Oak Ridge National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.2005.1532794;10.1109/visual.1999.809910;10.1109/visual.1996.568121;10.1109/visual.2003.1250412;10.1109/visual.2001.964519;10.1109/visual.1998.745311;10.1109/visual.2000.885698",
                "AuthorKeywords": "Parallel and distributed volume visualization, large Data Set Visualization, multi-variate Visualization, volume Visualization",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 248,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2313,
                "i": [
                    2313
                ]
            }
        },
        {
            "name": "Seung Hyun Kim",
            "value": 7,
            "numPapers": 23,
            "cluster": "6",
            "visible": 1,
            "index": 1284,
            "x": -336.718816070604,
            "y": 122.76171595416359,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Exploring Time-Varying Multivariate Volume Data Using Matrix of Isosurface Similarity Maps",
                "DOI": "10.1109/tvcg.2018.2864808",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864808",
                "FirstPage": 1236,
                "LastPage": 1245,
                "PaperType": "J",
                "Abstract": "We present a novel visual representation and interface named the matrix of isosurface similarity maps (MISM) for effective exploration of large time-varying multivariate volumetric data sets. MISM synthesizes three types of similarity maps (i.e., self, temporal, and variable similarity maps) to capture the essential relationships among isosurfaces of different variables and time steps. Additionally, it serves as the main visual mapping and navigation tool for examining the vast number of isosurfaces and exploring the underlying time-varying multivariate data set. We present temporal clustering, variable grouping, and interactive filtering to reduce the huge exploration space of MISM. In conjunction with the isovalue and isosurface views, MISM allows users to identify important isosurfaces or isosurface pairs and compare them over space, time, and value range. More importantly, we introduce path recommendation that suggests, animates, and compares traversal paths for effectively exploring MISM under varied criteria and at different levels-of-detail. A silhouette-based method is applied to render multiple surfaces of interest in a visually succinct manner. We demonstrate the effectiveness of our approach with case studies of several time-varying multivariate data sets and an ensemble data set, and evaluate our work with two domain experts.",
                "AuthorNamesDeduped": "Jun Tao 0002;Martin Imre;Chaoli Wang 0001;Nitesh V. Chawla;Hanqi Guo 0001;Gokhan Sever;Seung Hyun Kim",
                "AuthorNames": "Jun Tao;Martin Imre;Chaoli Wang;Nitesh V. Chawla;Hanqi Guo;Gökhan Sever;Seung Hyun Kim",
                "AuthorAffiliation": "University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;Argonne National Laboratory, Argonne, IL, US;Argonne National Laboratory, Argonne, IL, US;Ohio State University, Columbus, OH, US",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2012.284;10.1109/tvcg.2008.184;10.1109/tvcg.2011.246;10.1109/tvcg.2011.258;10.1109/tvcg.2008.116;10.1109/visual.2005.1532857;10.1109/tvcg.2009.136;10.1109/tvcg.2015.2467431;10.1109/tvcg.2006.165;10.1109/tvcg.2013.213;10.1109/tvcg.2008.143;10.1109/visual.1999.809910;10.1109/visual.2005.1532792;10.1109/tvcg.2008.140;10.1109/tvcg.2006.164;10.1109/visual.2003.1250402",
                "AuthorKeywords": "Time-varying multivariate data visualization,isosurface,similarity map,visual interface,path recommendation",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 733,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 704,
                "i": [
                    704
                ]
            }
        },
        {
            "name": "Martin Imre",
            "value": 7,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1285,
            "x": 165.42598356880347,
            "y": -318.09470910452745,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Exploring Time-Varying Multivariate Volume Data Using Matrix of Isosurface Similarity Maps",
                "DOI": "10.1109/tvcg.2018.2864808",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864808",
                "FirstPage": 1236,
                "LastPage": 1245,
                "PaperType": "J",
                "Abstract": "We present a novel visual representation and interface named the matrix of isosurface similarity maps (MISM) for effective exploration of large time-varying multivariate volumetric data sets. MISM synthesizes three types of similarity maps (i.e., self, temporal, and variable similarity maps) to capture the essential relationships among isosurfaces of different variables and time steps. Additionally, it serves as the main visual mapping and navigation tool for examining the vast number of isosurfaces and exploring the underlying time-varying multivariate data set. We present temporal clustering, variable grouping, and interactive filtering to reduce the huge exploration space of MISM. In conjunction with the isovalue and isosurface views, MISM allows users to identify important isosurfaces or isosurface pairs and compare them over space, time, and value range. More importantly, we introduce path recommendation that suggests, animates, and compares traversal paths for effectively exploring MISM under varied criteria and at different levels-of-detail. A silhouette-based method is applied to render multiple surfaces of interest in a visually succinct manner. We demonstrate the effectiveness of our approach with case studies of several time-varying multivariate data sets and an ensemble data set, and evaluate our work with two domain experts.",
                "AuthorNamesDeduped": "Jun Tao 0002;Martin Imre;Chaoli Wang 0001;Nitesh V. Chawla;Hanqi Guo 0001;Gokhan Sever;Seung Hyun Kim",
                "AuthorNames": "Jun Tao;Martin Imre;Chaoli Wang;Nitesh V. Chawla;Hanqi Guo;Gökhan Sever;Seung Hyun Kim",
                "AuthorAffiliation": "University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;Argonne National Laboratory, Argonne, IL, US;Argonne National Laboratory, Argonne, IL, US;Ohio State University, Columbus, OH, US",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2012.284;10.1109/tvcg.2008.184;10.1109/tvcg.2011.246;10.1109/tvcg.2011.258;10.1109/tvcg.2008.116;10.1109/visual.2005.1532857;10.1109/tvcg.2009.136;10.1109/tvcg.2015.2467431;10.1109/tvcg.2006.165;10.1109/tvcg.2013.213;10.1109/tvcg.2008.143;10.1109/visual.1999.809910;10.1109/visual.2005.1532792;10.1109/tvcg.2008.140;10.1109/tvcg.2006.164;10.1109/visual.2003.1250402",
                "AuthorKeywords": "Time-varying multivariate data visualization,isosurface,similarity map,visual interface,path recommendation",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 733,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 704,
                "i": [
                    704
                ]
            }
        },
        {
            "name": "Nitesh V. Chawla",
            "value": 7,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1286,
            "x": 92.92603960138469,
            "y": 346.43145233076325,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Exploring Time-Varying Multivariate Volume Data Using Matrix of Isosurface Similarity Maps",
                "DOI": "10.1109/tvcg.2018.2864808",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864808",
                "FirstPage": 1236,
                "LastPage": 1245,
                "PaperType": "J",
                "Abstract": "We present a novel visual representation and interface named the matrix of isosurface similarity maps (MISM) for effective exploration of large time-varying multivariate volumetric data sets. MISM synthesizes three types of similarity maps (i.e., self, temporal, and variable similarity maps) to capture the essential relationships among isosurfaces of different variables and time steps. Additionally, it serves as the main visual mapping and navigation tool for examining the vast number of isosurfaces and exploring the underlying time-varying multivariate data set. We present temporal clustering, variable grouping, and interactive filtering to reduce the huge exploration space of MISM. In conjunction with the isovalue and isosurface views, MISM allows users to identify important isosurfaces or isosurface pairs and compare them over space, time, and value range. More importantly, we introduce path recommendation that suggests, animates, and compares traversal paths for effectively exploring MISM under varied criteria and at different levels-of-detail. A silhouette-based method is applied to render multiple surfaces of interest in a visually succinct manner. We demonstrate the effectiveness of our approach with case studies of several time-varying multivariate data sets and an ensemble data set, and evaluate our work with two domain experts.",
                "AuthorNamesDeduped": "Jun Tao 0002;Martin Imre;Chaoli Wang 0001;Nitesh V. Chawla;Hanqi Guo 0001;Gokhan Sever;Seung Hyun Kim",
                "AuthorNames": "Jun Tao;Martin Imre;Chaoli Wang;Nitesh V. Chawla;Hanqi Guo;Gökhan Sever;Seung Hyun Kim",
                "AuthorAffiliation": "University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;Argonne National Laboratory, Argonne, IL, US;Argonne National Laboratory, Argonne, IL, US;Ohio State University, Columbus, OH, US",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2012.284;10.1109/tvcg.2008.184;10.1109/tvcg.2011.246;10.1109/tvcg.2011.258;10.1109/tvcg.2008.116;10.1109/visual.2005.1532857;10.1109/tvcg.2009.136;10.1109/tvcg.2015.2467431;10.1109/tvcg.2006.165;10.1109/tvcg.2013.213;10.1109/tvcg.2008.143;10.1109/visual.1999.809910;10.1109/visual.2005.1532792;10.1109/tvcg.2008.140;10.1109/tvcg.2006.164;10.1109/visual.2003.1250402",
                "AuthorKeywords": "Time-varying multivariate data visualization,isosurface,similarity map,visual interface,path recommendation",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 733,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 704,
                "i": [
                    704
                ]
            }
        },
        {
            "name": "Gokhan Sever",
            "value": 7,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1287,
            "x": -302.64940982383126,
            "y": -192.75200318877793,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Exploring Time-Varying Multivariate Volume Data Using Matrix of Isosurface Similarity Maps",
                "DOI": "10.1109/tvcg.2018.2864808",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864808",
                "FirstPage": 1236,
                "LastPage": 1245,
                "PaperType": "J",
                "Abstract": "We present a novel visual representation and interface named the matrix of isosurface similarity maps (MISM) for effective exploration of large time-varying multivariate volumetric data sets. MISM synthesizes three types of similarity maps (i.e., self, temporal, and variable similarity maps) to capture the essential relationships among isosurfaces of different variables and time steps. Additionally, it serves as the main visual mapping and navigation tool for examining the vast number of isosurfaces and exploring the underlying time-varying multivariate data set. We present temporal clustering, variable grouping, and interactive filtering to reduce the huge exploration space of MISM. In conjunction with the isovalue and isosurface views, MISM allows users to identify important isosurfaces or isosurface pairs and compare them over space, time, and value range. More importantly, we introduce path recommendation that suggests, animates, and compares traversal paths for effectively exploring MISM under varied criteria and at different levels-of-detail. A silhouette-based method is applied to render multiple surfaces of interest in a visually succinct manner. We demonstrate the effectiveness of our approach with case studies of several time-varying multivariate data sets and an ensemble data set, and evaluate our work with two domain experts.",
                "AuthorNamesDeduped": "Jun Tao 0002;Martin Imre;Chaoli Wang 0001;Nitesh V. Chawla;Hanqi Guo 0001;Gokhan Sever;Seung Hyun Kim",
                "AuthorNames": "Jun Tao;Martin Imre;Chaoli Wang;Nitesh V. Chawla;Hanqi Guo;Gökhan Sever;Seung Hyun Kim",
                "AuthorAffiliation": "University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;University of Notre Dame, Notre Dame, IN, US;Argonne National Laboratory, Argonne, IL, US;Argonne National Laboratory, Argonne, IL, US;Ohio State University, Columbus, OH, US",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2012.284;10.1109/tvcg.2008.184;10.1109/tvcg.2011.246;10.1109/tvcg.2011.258;10.1109/tvcg.2008.116;10.1109/visual.2005.1532857;10.1109/tvcg.2009.136;10.1109/tvcg.2015.2467431;10.1109/tvcg.2006.165;10.1109/tvcg.2013.213;10.1109/tvcg.2008.143;10.1109/visual.1999.809910;10.1109/visual.2005.1532792;10.1109/tvcg.2008.140;10.1109/tvcg.2006.164;10.1109/visual.2003.1250402",
                "AuthorKeywords": "Time-varying multivariate data visualization,isosurface,similarity map,visual interface,path recommendation",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 733,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 704,
                "i": [
                    704
                ]
            }
        },
        {
            "name": "Wolfram von Funck",
            "value": 40,
            "numPapers": 6,
            "cluster": "11",
            "visible": 1,
            "index": 1288,
            "x": 353.5035661457411,
            "y": -62.33160291732974,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Smoke Surfaces: An Interactive Flow Visualization Technique Inspired by Real-World Flow Experiments",
                "DOI": "10.1109/tvcg.2008.163",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.163",
                "FirstPage": 1396,
                "LastPage": 1403,
                "PaperType": "J",
                "Abstract": "Smoke rendering is a standard technique for flow visualization. Most approaches are based on a volumetric, particle based, or image based representation of the smoke. This paper introduces an alternative representation of smoke structures: as semi-transparent streak surfaces. In order to make streak surface integration fast enough for interactive applications, we avoid expensive adaptive retriangulations by coupling the opacity of the triangles to their shapes. This way, the surface shows a smoke-like look even in rather turbulent areas. Furthermore, we show modifications of the approach to mimic smoke nozzles, wool tufts, and time surfaces. The technique is applied to a number of test data sets.",
                "AuthorNamesDeduped": "Wolfram von Funck;Tino Weinkauf;Holger Theisel;Hans-Peter Seidel",
                "AuthorNames": "Wolfram von Funck;Tino Weinkauf;Holger Theisel;Hans-Peter Seidel",
                "AuthorAffiliation": "MPI Informatik Saarbrücken;Zuse Institute Berlin;University of Magdeburg, Germany;MPI Informatik Saarbrücken",
                "InternalReferences": "0.1109/visual.1995.485141;10.1109/visual.1993.398846;10.1109/visual.1992.235211;10.1109/visual.2001.964506;10.1109/visual.1993.398877;10.1109/visual.1993.398875;10.1109/visual.1992.235226",
                "AuthorKeywords": "Unsteady flow visualization, streak surfaces, smoke visualization",
                "AminerCitationCount": 115,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1644,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2050,
                "i": [
                    2050
                ]
            }
        },
        {
            "name": "Luke J. Gosink",
            "value": 49,
            "numPapers": 15,
            "cluster": "6",
            "visible": 1,
            "index": 1289,
            "x": -218.64292979316372,
            "y": 284.86008715062496,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Characterizing and Visualizing Predictive Uncertainty in Numerical Ensembles Through Bayesian Model Averaging",
                "DOI": "10.1109/tvcg.2013.138",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.138",
                "FirstPage": 2703,
                "LastPage": 2712,
                "PaperType": "J",
                "Abstract": "Numerical ensemble forecasting is a powerful tool that drives many risk analysis efforts and decision making tasks. These ensembles are composed of individual simulations that each uniquely model a possible outcome for a common event of interest: e.g., the direction and force of a hurricane, or the path of travel and mortality rate of a pandemic. This paper presents a new visual strategy to help quantify and characterize a numerical ensemble's predictive uncertainty: i.e., the ability for ensemble constituents to accurately and consistently predict an event of interest based on ground truth observations. Our strategy employs a Bayesian framework to first construct a statistical aggregate from the ensemble. We extend the information obtained from the aggregate with a visualization strategy that characterizes predictive uncertainty at two levels: at a global level, which assesses the ensemble as a whole, as well as a local level, which examines each of the ensemble's constituents. Through this approach, modelers are able to better assess the predictive strengths and weaknesses of the ensemble as a whole, as well as individual models. We apply our method to two datasets to demonstrate its broad applicability.",
                "AuthorNamesDeduped": "Luke J. Gosink;Kevin Bensema;Trenton Pulsipher;Harald Obermaier;Michael J. Henry;Hank Childs;Kenneth I. Joy",
                "AuthorNames": "Luke Gosink;Kevin Bensema;Trenton Pulsipher;Harald Obermaier;Michael Henry;Hank Childs;Kenneth I. Joy",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;University of California, Davis, USA;University of California, Davis, USA;University of California, Davis, USA;University of Oregon, USA",
                "InternalReferences": "0.1109/visual.2002.1183769;10.1109/visual.2005.1532853;10.1109/visual.1996.568116;10.1109/tvcg.2010.208;10.1109/tvcg.2010.181",
                "AuthorKeywords": "Uncertainty visualization, numerical ensembles, statistical visualization",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 840,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1352,
                "i": [
                    1352
                ]
            }
        },
        {
            "name": "Wolfgang Kollmann",
            "value": 81,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 1290,
            "x": -31.211827515371215,
            "y": -357.87682493163857,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Multifield Visualization Using Local Statistical Complexity",
                "DOI": "10.1109/tvcg.2007.70615",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70615",
                "FirstPage": 1384,
                "LastPage": 1391,
                "PaperType": "J",
                "Abstract": "Modern unsteady (multi-)field visualizations require an effective reduction of the data to be displayed. From a huge amount of information the most informative parts have to be extracted. Instead of the fuzzy application dependent notion of feature, a new approach based on information theoretic concepts is introduced in this paper to detect important regions. This is accomplished by extending the concept of local statistical complexity from finite state cellular automata to discretized (multi-)fields. Thus, informative parts of the data can be highlighted in an application-independent, purely mathematical sense. The new measure can be applied to unsteady multifields on regular grids in any application domain. The ability to detect and visualize important parts is demonstrated using diffusion, flow, and weather simulations.",
                "AuthorNamesDeduped": "Heike Jänicke;Alexander Wiebel;Gerik Scheuermann;Wolfgang Kollmann",
                "AuthorNames": "Heike Janicke;Alexander Wiebel;Gerik Scheuermann;Wolfgang Kollmann",
                "AuthorAffiliation": "Image and Signal Processing Group, University of Leipzig, Germany;Image and Signal Processing Group, University of Leipzig, Germany;Image and Signal Processing Group, University of Leipzig, Germany;Department of Mechanical and Aeronautical Engineering, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.1999.809865;10.1109/visual.2003.1250372;10.1109/tvcg.2006.165;10.1109/visual.1999.809905;10.1109/tvcg.2006.183;10.1109/visual.2003.1250383",
                "AuthorKeywords": "Local statistical complexity, multifield visualization, time-dependent, coherent structures, feature detection, information theroy, flow visualization",
                "AminerCitationCount": 135,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 867,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2181,
                "i": [
                    2181
                ]
            }
        },
        {
            "name": "Johannes Kehrer",
            "value": 140,
            "numPapers": 38,
            "cluster": "6",
            "visible": 1,
            "index": 1291,
            "x": 264.8595113702063,
            "y": 242.89800171416715,
            "vy": 0,
            "vx": 0,
            "r": 1.1611974668969487,
            "node": {
                "Conference": "InfoVis",
                "Year": 2019,
                "Title": "The Impact of Immersion on Cluster Identification Tasks",
                "DOI": "10.1109/tvcg.2019.2934395",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934395",
                "FirstPage": 525,
                "LastPage": 535,
                "PaperType": "J",
                "Abstract": "Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual realities (VRs). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia treatment effectiveness. Our work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations, (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse compared to the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.",
                "AuthorNamesDeduped": "Matthias Kraus 0002;Niklas Weiler;Daniela Oelke;Johannes Kehrer;Daniel A. Keim;Johannes Fuchs 0001",
                "AuthorNames": "M. Kraus;N. Weiler;D. Oelke;J. Kehrer;D. A. Keim;J. Fuchs",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;Siemens Corporate Technology, Munich, Germany;Siemens Corporate Technology, Munich, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/tvcg.2018.2864477;10.1109/infvis.1998.729555;10.1109/tvcg.2008.153;10.1109/vast.2008.4677350;10.1109/tvcg.2013.153;10.1109/visual.2002.1183816;10.1109/infvis.1999.801851;10.1109/vast.2007.4389000;10.1109/tvcg.2015.2467202;10.1109/tvcg.2017.2745941",
                "AuthorKeywords": "Virtual reality,evaluation,visual analytics,clustering",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 66,
                "DownloadsXplore": 1309,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 523,
                "i": [
                    523
                ]
            }
        },
        {
            "name": "Bo Ma 0002",
            "value": 7,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 1292,
            "x": -359.5135071039213,
            "y": -0.19547337097530068,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "SciVis",
                "Year": 2020,
                "Title": "Direct Volume Rendering with Nonparametric Models of Uncertainty",
                "DOI": "10.1109/tvcg.2020.3030394",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3030394",
                "FirstPage": 1797,
                "LastPage": 1807,
                "PaperType": "J",
                "Abstract": "We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending the DVR framework for nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art rendering parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.",
                "AuthorNamesDeduped": "Tushar M. Athawale;Bo Ma 0002;Elham Sakhaee;Chris R. Johnson 0001;Alireza Entezari",
                "AuthorNames": "Tushar M. Athawale;Bo Ma;Elham Sakhaee;Chris R. Johnson;Alireza Entezari",
                "AuthorAffiliation": "University of Utah, Scientific Computing & Imaging (SCI) Institute, Salt Lake City;Department of CISE, Gainesville, University of Florida;Department of CISE, Gainesville, University of Florida;University of Utah, Scientific Computing & Imaging (SCI) Institute, Salt Lake City;Department of CISE, Gainesville, University of Florida",
                "InternalReferences": "0.1109/tvcg.2013.208;10.1109/tvcg.2018.2864505;10.1109/tvcg.2015.2467958;10.1109/vast.2009.5332611;10.1109/tvcg.2012.227;10.1109/tvcg.2018.2864432;10.1109/tvcg.2012.227;10.1109/visual.2001.964519;10.1109/visual.2005.1532807;10.1109/tvcg.2007.70518;10.1109/tvcg.2014.2346455;10.1109/visual.1997.663848;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Volumes,uncertainty,nonparametric,2D transfer function",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 581,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 442,
                "i": [
                    442
                ]
            }
        },
        {
            "name": "Haneen Mohammed",
            "value": 31,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1293,
            "x": 265.3286938915982,
            "y": -242.7976198354888,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Culling for Extreme-Scale Segmentation Volumes: A Hybrid Deterministic and Probabilistic Approach",
                "DOI": "10.1109/tvcg.2018.2864847",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864847",
                "FirstPage": 1132,
                "LastPage": 1141,
                "PaperType": "J",
                "Abstract": "With the rapid increase in raw volume data sizes, such as terabyte-sized microscopy volumes, the corresponding segmentation label volumes have become extremely large as well. We focus on integer label data, whose efficient representation in memory, as well as fast random data access, pose an even greater challenge than the raw image data. Often, it is crucial to be able to rapidly identify which segments are located where, whether for empty space skipping for fast rendering, or for spatial proximity queries. We refer to this process as<i>culling</i>. In order to enable efficient culling of millions of labeled segments, we present a novel hybrid approach that combines deterministic and probabilistic representations of label data in a data-adaptive hierarchical data structure that we call the label list tree. In each node, we adaptively encode label data using either a probabilistic constant-time access representation for fast conservative culling, or a deterministic logarithmic-time access representation for exact queries. We choose the best data structures for representing the labels of each spatial region while building the label list tree. At run time, we further employ a novel<i>query-adaptive</i>culling strategy. While filtering a query down the tree, we prune it successively, and in each node adaptively select the representation that is best suited for evaluating the pruned query, depending on its size. We show an analysis of the efficiency of our approach with several large data sets from connectomics, including a brain scan with more than 13 million labeled segments, and compare our method to conventional culling approaches. Our approach achieves significant reductions in storage size as well as faster query times.",
                "AuthorNamesDeduped": "Johanna Beyer;Haneen Mohammed;Marco Agus;Ali K. Al-Awami;Hanspeter Pfister;Markus Hadwiger",
                "AuthorNames": "Johanna Beyer;Haneen Mohammed;Marco Agus;Ali K. Al-Awami;Hanspeter Pfister;Markus Hadwiger",
                "AuthorAffiliation": "Harvard University, Cambridge, MA, US;Harvard University, Cambridge, MA, US;King Abdullah University of Science and Technology, Thuwal, SA;King Abdullah University of Science and Technology, Thuwal, SA;Harvard University, Cambridge, MA, US;King Abdullah University of Science and Technology, Thuwal, SA",
                "InternalReferences": "0.1109/visual.1992.235231;10.1109/tvcg.2013.142;10.1109/tvcg.2009.121;10.1109/visual.2003.1250386;10.1109/tvcg.2012.240;10.1109/visual.2003.1250384;10.1109/visual.1999.809908;10.1109/visual.1995.480792;10.1109/visual.2005.1532792;10.1109/visual.1990.146377;10.1109/visual.2001.964521;10.1109/tvcg.2017.2744238",
                "AuthorKeywords": "Hierarchical Culling,Segmented Volume Data,Bloom Filter,Volume Rendering,Spatial Queries",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 388,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 715,
                "i": [
                    715
                ]
            }
        },
        {
            "name": "Veronika Soltészová",
            "value": 42,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1294,
            "x": -31.64991279351498,
            "y": 358.39682339574796,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "BrainGazer - Visual Queries for Neurobiology Research",
                "DOI": "10.1109/tvcg.2009.121",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.121",
                "FirstPage": 1497,
                "LastPage": 1504,
                "PaperType": "J",
                "Abstract": "Neurobiology investigates how anatomical and physiological relationships in the nervous system mediate behavior. Molecular genetic techniques, applied to species such as the common fruit fly Drosophila melanogaster, have proven to be an important tool in this research. Large databases of transgenic specimens are being built and need to be analyzed to establish models of neural information processing. In this paper we present an approach for the exploration and analysis of neural circuits based on such a database. We have designed and implemented \\emph{BrainGazer}, a system which integrates visualization techniques for volume data acquired through confocal microscopy as well as annotated anatomical structures with an intuitive approach for accessing the available information. We focus on the ability to visually query the data based on semantic as well as spatial relationships. Additionally, we present visualization techniques for the concurrent depiction of neurobiological volume data and geometric objects which aim to reduce visual clutter. The described system is the result of an ongoing interdisciplinary collaboration between neurobiologists and visualization researchers.",
                "AuthorNamesDeduped": "Stefan Bruckner;Veronika Soltészová;M. Eduard Gröller;Jirí Hladuvka;Katja Bühler;Jai Y. Yu;Barry J. Dickson",
                "AuthorNames": "Stefan Bruckner;Veronika Solteszova;Eduard Groller;Jiri Hladuvka;Katja Buhler;Jai Y. Yu;Barry J. Dickson",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universitat Wien, Wien, Wien, AT;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2004.104;10.1109/visual.1990.146378;10.1109/visual.2003.1250412;10.1109/tvcg.2006.197;10.1109/visual.1995.485139;10.1109/visual.1996.568136;10.1109/tvcg.2006.195;10.1109/vast.2008.4677354",
                "AuthorKeywords": "Biomedical visualization, neurobiology, visual queries, volume visualization",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1926,
                "i": [
                    1926
                ]
            }
        },
        {
            "name": "Jirí Hladuvka",
            "value": 44,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1295,
            "x": -218.84039306505798,
            "y": -285.7601833060913,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "BrainGazer - Visual Queries for Neurobiology Research",
                "DOI": "10.1109/tvcg.2009.121",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.121",
                "FirstPage": 1497,
                "LastPage": 1504,
                "PaperType": "J",
                "Abstract": "Neurobiology investigates how anatomical and physiological relationships in the nervous system mediate behavior. Molecular genetic techniques, applied to species such as the common fruit fly Drosophila melanogaster, have proven to be an important tool in this research. Large databases of transgenic specimens are being built and need to be analyzed to establish models of neural information processing. In this paper we present an approach for the exploration and analysis of neural circuits based on such a database. We have designed and implemented \\emph{BrainGazer}, a system which integrates visualization techniques for volume data acquired through confocal microscopy as well as annotated anatomical structures with an intuitive approach for accessing the available information. We focus on the ability to visually query the data based on semantic as well as spatial relationships. Additionally, we present visualization techniques for the concurrent depiction of neurobiological volume data and geometric objects which aim to reduce visual clutter. The described system is the result of an ongoing interdisciplinary collaboration between neurobiologists and visualization researchers.",
                "AuthorNamesDeduped": "Stefan Bruckner;Veronika Soltészová;M. Eduard Gröller;Jirí Hladuvka;Katja Bühler;Jai Y. Yu;Barry J. Dickson",
                "AuthorNames": "Stefan Bruckner;Veronika Solteszova;Eduard Groller;Jiri Hladuvka;Katja Buhler;Jai Y. Yu;Barry J. Dickson",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universitat Wien, Wien, Wien, AT;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2004.104;10.1109/visual.1990.146378;10.1109/visual.2003.1250412;10.1109/tvcg.2006.197;10.1109/visual.1995.485139;10.1109/visual.1996.568136;10.1109/tvcg.2006.195;10.1109/vast.2008.4677354",
                "AuthorKeywords": "Biomedical visualization, neurobiology, visual queries, volume visualization",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1926,
                "i": [
                    1926
                ]
            }
        },
        {
            "name": "Katja Bühler",
            "value": 75,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1296,
            "x": 354.53107801288667,
            "y": 62.91037055224251,
            "vy": 0,
            "vx": 0,
            "r": 1.0863557858376511,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "An Integrated Visual Analysis System for Fusing MR Spectroscopy and Multi-Modal Radiology Imaging",
                "DOI": "10.1109/vast.2014.7042481",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042481",
                "FirstPage": 53,
                "LastPage": 62,
                "PaperType": "C",
                "Abstract": "For cancers such as glioblastoma multiforme, there is an increasing interest in defining \"biological target volumes\" (BTV), high tumour-burden regions which may be targeted with dose boosts in radiotherapy. The definition of a BTV requires insight into tumour characteristics going beyond conventionally defined radiological abnormalities and anatomical features. Molecular and biochemical imaging techniques, like positron emission tomography, the use of Magnetic Resonance (MR) Imaging contrast agents or MR Spectroscopy deliver this information and support BTV delineation. MR Spectroscopy Imaging (MRSI) is the only non-invasive technique in this list. Studies with MRSI have shown that voxels with certain metabolic signatures are more susceptible to predict the site of relapse. Nevertheless, the discovery of complex relationships between a high number of different metabolites, anatomical, molecular and functional features is an ongoing topic of research - still lacking appropriate tools supporting a smooth workflow by providing data integration and fusion of MRSI data with other imaging modalities. We present a solution bridging this gap which gives fast and flexible access to all data at once. By integrating a customized visualization of the multi-modal and multi-variate image data with a highly flexible visual analytics (VA) framework, it is for the first time possible to interactively fuse, visualize and explore user defined metabolite relations derived from MRSI in combination with markers delivered by other imaging modalities. Real-world medical cases demonstrate the utility of our solution. By making MRSI data available both in a VA tool and in a multi-modal visualization renderer we can combine insights from each side to arrive at a superior BTV delineation. We also report feedback from domain experts indicating significant positive impact in how this work can improve the understanding of MRSI data and its integration into radiotherapy planning.",
                "AuthorNamesDeduped": "Miguel Nunes;Benjamin Rowland;Matthias Schlachter;Soléakhéna Ken;Kresimir Matkovic;Anne Laprie;Katja Bühler",
                "AuthorNames": "Miguel Nunes;Benjamin Rowland;Matthias Schlachter;Soléakhéna Ken;Kresimir Matkovic;Anne Laprie;Katja Bühler",
                "AuthorAffiliation": "VRVis Research Center, Vienna, Austria;Institut Claudius Regaud, Toulouse, France;VRVis Research Center, Vienna, Austria;Institut Claudius Regaud, Toulouse, France;VRVis Research Center, Vienna, Austria;Institut Claudius Regaud, Toulouse, France;VRVis Research Center, Vienna, Austria",
                "InternalReferences": "0.1109/tvcg.2007.70569;10.1109/tvcg.2013.180;10.1109/tvcg.2010.176",
                "AuthorKeywords": "MR spectroscopy, cancer, brain, visualization, multi-modality data, radiotherapy planning, medical decision support systems",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 272,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1294,
                "i": [
                    1294
                ]
            }
        },
        {
            "name": "Jai Y. Yu",
            "value": 42,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 1297,
            "x": -304.0327115055916,
            "y": 193.1686059756029,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "BrainGazer - Visual Queries for Neurobiology Research",
                "DOI": "10.1109/tvcg.2009.121",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.121",
                "FirstPage": 1497,
                "LastPage": 1504,
                "PaperType": "J",
                "Abstract": "Neurobiology investigates how anatomical and physiological relationships in the nervous system mediate behavior. Molecular genetic techniques, applied to species such as the common fruit fly Drosophila melanogaster, have proven to be an important tool in this research. Large databases of transgenic specimens are being built and need to be analyzed to establish models of neural information processing. In this paper we present an approach for the exploration and analysis of neural circuits based on such a database. We have designed and implemented \\emph{BrainGazer}, a system which integrates visualization techniques for volume data acquired through confocal microscopy as well as annotated anatomical structures with an intuitive approach for accessing the available information. We focus on the ability to visually query the data based on semantic as well as spatial relationships. Additionally, we present visualization techniques for the concurrent depiction of neurobiological volume data and geometric objects which aim to reduce visual clutter. The described system is the result of an ongoing interdisciplinary collaboration between neurobiologists and visualization researchers.",
                "AuthorNamesDeduped": "Stefan Bruckner;Veronika Soltészová;M. Eduard Gröller;Jirí Hladuvka;Katja Bühler;Jai Y. Yu;Barry J. Dickson",
                "AuthorNames": "Stefan Bruckner;Veronika Solteszova;Eduard Groller;Jiri Hladuvka;Katja Buhler;Jai Y. Yu;Barry J. Dickson",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universitat Wien, Wien, Wien, AT;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2004.104;10.1109/visual.1990.146378;10.1109/visual.2003.1250412;10.1109/tvcg.2006.197;10.1109/visual.1995.485139;10.1109/visual.1996.568136;10.1109/tvcg.2006.195;10.1109/vast.2008.4677354",
                "AuthorKeywords": "Biomedical visualization, neurobiology, visual queries, volume visualization",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1926,
                "i": [
                    1926
                ]
            }
        },
        {
            "name": "Barry J. Dickson",
            "value": 42,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 1298,
            "x": 93.7368421201724,
            "y": -347.94166814185667,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "BrainGazer - Visual Queries for Neurobiology Research",
                "DOI": "10.1109/tvcg.2009.121",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.121",
                "FirstPage": 1497,
                "LastPage": 1504,
                "PaperType": "J",
                "Abstract": "Neurobiology investigates how anatomical and physiological relationships in the nervous system mediate behavior. Molecular genetic techniques, applied to species such as the common fruit fly Drosophila melanogaster, have proven to be an important tool in this research. Large databases of transgenic specimens are being built and need to be analyzed to establish models of neural information processing. In this paper we present an approach for the exploration and analysis of neural circuits based on such a database. We have designed and implemented \\emph{BrainGazer}, a system which integrates visualization techniques for volume data acquired through confocal microscopy as well as annotated anatomical structures with an intuitive approach for accessing the available information. We focus on the ability to visually query the data based on semantic as well as spatial relationships. Additionally, we present visualization techniques for the concurrent depiction of neurobiological volume data and geometric objects which aim to reduce visual clutter. The described system is the result of an ongoing interdisciplinary collaboration between neurobiologists and visualization researchers.",
                "AuthorNamesDeduped": "Stefan Bruckner;Veronika Soltészová;M. Eduard Gröller;Jirí Hladuvka;Katja Bühler;Jai Y. Yu;Barry J. Dickson",
                "AuthorNames": "Stefan Bruckner;Veronika Solteszova;Eduard Groller;Jiri Hladuvka;Katja Buhler;Jai Y. Yu;Barry J. Dickson",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universitat Wien, Wien, Wien, AT;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria;Research Institute of Molecular Pathology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2004.104;10.1109/visual.1990.146378;10.1109/visual.2003.1250412;10.1109/tvcg.2006.197;10.1109/visual.1995.485139;10.1109/visual.1996.568136;10.1109/tvcg.2006.195;10.1109/vast.2008.4677354",
                "AuthorKeywords": "Biomedical visualization, neurobiology, visual queries, volume visualization",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 405,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1926,
                "i": [
                    1926
                ]
            }
        },
        {
            "name": "Eric LaMar",
            "value": 77,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1299,
            "x": 165.97646364435525,
            "y": 320.00283360638235,
            "vy": 0,
            "vx": 0,
            "r": 1.0886586067933217,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multiresolution Techniques for Interactive Texture-based Volume Visualization",
                "DOI": "10.1109/visual.1999.809908",
                "Link": "http://doi.ieeecomputersociety.org/10.1109/VISUAL.1999.809908",
                "FirstPage": 355,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "We present a multiresolution technique for interactive texture-based volume visualization of very large data sets. This method uses an adaptive scheme that renders the volume in a region-of-interest at a high resolution and the volume away from this region at progressively lower resolutions. The algorithm is based on the segmentation of texture space into an octree, where the leaves of the tree define the original data and the internal nodes define lower-resolution versions. Rendering is done adaptively by selecting high-resolution cells close to a center of attention and low-resolution cells away from this area. We limit the artifacts introduced by this method by modifying the transfer functions in the lower-resolution data sets and utilizing spherical shells as a proxy geometry. It is possible to use this technique to produce viewpoint-dependent renderings of very large data sets.",
                "AuthorNamesDeduped": "Eric LaMar;Bernd Hamann;Kenneth I. Joy",
                "AuthorNames": "E. LaMar;B. Hamann;K.I. Joy",
                "AuthorAffiliation": "University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US",
                "InternalReferences": null,
                "AuthorKeywords": "multiresolution rendering, volume visualization, hardware texture",
                "AminerCitationCount": 421,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 85,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3054,
                "i": [
                    3054
                ]
            }
        },
        {
            "name": "Thomas Hoffmann 0002",
            "value": 22,
            "numPapers": 3,
            "cluster": "6",
            "visible": 1,
            "index": 1300,
            "x": -338.6749215670713,
            "y": -123.89228184813653,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Combined Visualization of Wall Thickness and Wall Shear Stress for the Evaluation of Aneurysms",
                "DOI": "10.1109/tvcg.2014.2346406",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346406",
                "FirstPage": 2506,
                "LastPage": 2515,
                "PaperType": "J",
                "Abstract": "For an individual rupture risk assessment of aneurysms, the aneurysm's wall morphology and hemodynamics provide valuable information. Hemodynamic information is usually extracted via computational fluid dynamic (CFD) simulation on a previously extracted 3D aneurysm surface mesh or directly measured with 4D phase-contrast magnetic resonance imaging. In contrast, a noninvasive imaging technique that depicts the aneurysm wall in vivo is still not available. Our approach comprises an experiment, where intravascular ultrasound (IVUS) is employed to probe a dissected saccular aneurysm phantom, which we modeled from a porcine kidney artery. Then, we extracted a 3D surface mesh to gain the vessel wall thickness and hemodynamic information from a CFD simulation. Building on this, we developed a framework that depicts the inner and outer aneurysm wall with dedicated information about local thickness via distance ribbons. For both walls, a shading is adapted such that the inner wall as well as its distance to the outer wall is always perceivable. The exploration of the wall is further improved by combining it with hemodynamic information from the CFD simulation. Hence, the visual analysis comprises a brushing and linking concept for individual highlighting of pathologic areas. Also, a surface clustering is integrated to provide an automatic division of different aneurysm parts combined with a risk score depending on wall thickness and hemodynamic information. In general, our approach can be employed for vessel visualization purposes where an inner and outer wall has to be adequately represented.",
                "AuthorNamesDeduped": "Sylvia Glaßer;Kai Lawonn;Thomas Hoffmann 0002;Martin Skalej;Bernhard Preim",
                "AuthorNames": "Sylvia Glaßer;Kai Lawonn;Thomas Hoffmann;Martin Skalej;Bernhard Preim",
                "AuthorAffiliation": "Department for Simulation and Graphics, Otto von Guericke Universitat Magdeburg, Magdeburg, Sachsen-Anhalt, DE;Department for Simulation and Graphics, University of Magdeburg, Germany;Neuroradiology Department, University hospital of Magdeburg, Germany;Neuroradiology Department, University hospital of Magdeburg, Germany;Department for Simulation and Graphics, University of Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2012.202;10.1109/tvcg.2007.70550;10.1109/visual.1995.480795;10.1109/tvcg.2011.189",
                "AuthorKeywords": "Aneurysm, IVUS, Wall Thickness, Wall Shear Stress, Brushing and Linking, Focus + Context",
                "AminerCitationCount": 27,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1222,
                "i": [
                    1222
                ]
            }
        },
        {
            "name": "Martin Skalej",
            "value": 33,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1301,
            "x": 333.54454400553647,
            "y": -137.47013189831006,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Combined Visualization of Wall Thickness and Wall Shear Stress for the Evaluation of Aneurysms",
                "DOI": "10.1109/tvcg.2014.2346406",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346406",
                "FirstPage": 2506,
                "LastPage": 2515,
                "PaperType": "J",
                "Abstract": "For an individual rupture risk assessment of aneurysms, the aneurysm's wall morphology and hemodynamics provide valuable information. Hemodynamic information is usually extracted via computational fluid dynamic (CFD) simulation on a previously extracted 3D aneurysm surface mesh or directly measured with 4D phase-contrast magnetic resonance imaging. In contrast, a noninvasive imaging technique that depicts the aneurysm wall in vivo is still not available. Our approach comprises an experiment, where intravascular ultrasound (IVUS) is employed to probe a dissected saccular aneurysm phantom, which we modeled from a porcine kidney artery. Then, we extracted a 3D surface mesh to gain the vessel wall thickness and hemodynamic information from a CFD simulation. Building on this, we developed a framework that depicts the inner and outer aneurysm wall with dedicated information about local thickness via distance ribbons. For both walls, a shading is adapted such that the inner wall as well as its distance to the outer wall is always perceivable. The exploration of the wall is further improved by combining it with hemodynamic information from the CFD simulation. Hence, the visual analysis comprises a brushing and linking concept for individual highlighting of pathologic areas. Also, a surface clustering is integrated to provide an automatic division of different aneurysm parts combined with a risk score depending on wall thickness and hemodynamic information. In general, our approach can be employed for vessel visualization purposes where an inner and outer wall has to be adequately represented.",
                "AuthorNamesDeduped": "Sylvia Glaßer;Kai Lawonn;Thomas Hoffmann 0002;Martin Skalej;Bernhard Preim",
                "AuthorNames": "Sylvia Glaßer;Kai Lawonn;Thomas Hoffmann;Martin Skalej;Bernhard Preim",
                "AuthorAffiliation": "Department for Simulation and Graphics, Otto von Guericke Universitat Magdeburg, Magdeburg, Sachsen-Anhalt, DE;Department for Simulation and Graphics, University of Magdeburg, Germany;Neuroradiology Department, University hospital of Magdeburg, Germany;Neuroradiology Department, University hospital of Magdeburg, Germany;Department for Simulation and Graphics, University of Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2012.202;10.1109/tvcg.2007.70550;10.1109/visual.1995.480795;10.1109/tvcg.2011.189",
                "AuthorKeywords": "Aneurysm, IVUS, Wall Thickness, Wall Shear Stress, Brushing and Linking, Focus + Context",
                "AminerCitationCount": 27,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1222,
                "i": [
                    1222
                ]
            }
        },
        {
            "name": "Roy van Pelt",
            "value": 57,
            "numPapers": 15,
            "cluster": "6",
            "visible": 1,
            "index": 1302,
            "x": -153.14442628981922,
            "y": 326.7977733956615,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Exploration of 4D MRI Blood Flow using Stylistic Visualization",
                "DOI": "10.1109/tvcg.2010.153",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.153",
                "FirstPage": 1339,
                "LastPage": 1347,
                "PaperType": "J",
                "Abstract": "Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters.",
                "AuthorNamesDeduped": "Roy van Pelt;Javier Oliván Bescós;Marcel Breeuwer;Rachel E. Clough;M. Eduard Gröller;Bart M. ter Haar Romeny;Anna Vilanova",
                "AuthorNames": "Roy van Pelt;Javier Olivan Bescos;Marcel Breeuwer;Rachel E. Clough;M. Eduard Groller;Bart ter Haar Romenij;Anna Vilanova",
                "AuthorAffiliation": "Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Division of Imaging Sciences and the Department of Vascular Surgery, NIHR Comprehensive Biomedical Research Centre of Guy s and St Thomas NHS Foundation Trust, King''s College, UK;Department of Computer Science, Group of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL;Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2006.140;10.1109/tvcg.2009.138",
                "AuthorKeywords": "4D MRI blood-flow, Probing, Flow visualization, Illustrative visualization, Phase-contrast cine MRI",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1001,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1782,
                "i": [
                    1782
                ]
            }
        },
        {
            "name": "Javier Oliván Bescós",
            "value": 100,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1303,
            "x": -107.8661738504113,
            "y": -344.5502699735887,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Exploration of 4D MRI Blood Flow using Stylistic Visualization",
                "DOI": "10.1109/tvcg.2010.153",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.153",
                "FirstPage": 1339,
                "LastPage": 1347,
                "PaperType": "J",
                "Abstract": "Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters.",
                "AuthorNamesDeduped": "Roy van Pelt;Javier Oliván Bescós;Marcel Breeuwer;Rachel E. Clough;M. Eduard Gröller;Bart M. ter Haar Romeny;Anna Vilanova",
                "AuthorNames": "Roy van Pelt;Javier Olivan Bescos;Marcel Breeuwer;Rachel E. Clough;M. Eduard Groller;Bart ter Haar Romenij;Anna Vilanova",
                "AuthorAffiliation": "Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Division of Imaging Sciences and the Department of Vascular Surgery, NIHR Comprehensive Biomedical Research Centre of Guy s and St Thomas NHS Foundation Trust, King''s College, UK;Department of Computer Science, Group of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL;Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2006.140;10.1109/tvcg.2009.138",
                "AuthorKeywords": "4D MRI blood-flow, Probing, Flow visualization, Illustrative visualization, Phase-contrast cine MRI",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1001,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1782,
                "i": [
                    1782
                ]
            }
        },
        {
            "name": "Marcel Breeuwer",
            "value": 87,
            "numPapers": 32,
            "cluster": "6",
            "visible": 1,
            "index": 1304,
            "x": 312.3972841257701,
            "y": 181.26758361837028,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Exploration of 4D MRI Blood Flow using Stylistic Visualization",
                "DOI": "10.1109/tvcg.2010.153",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.153",
                "FirstPage": 1339,
                "LastPage": 1347,
                "PaperType": "J",
                "Abstract": "Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters.",
                "AuthorNamesDeduped": "Roy van Pelt;Javier Oliván Bescós;Marcel Breeuwer;Rachel E. Clough;M. Eduard Gröller;Bart M. ter Haar Romeny;Anna Vilanova",
                "AuthorNames": "Roy van Pelt;Javier Olivan Bescos;Marcel Breeuwer;Rachel E. Clough;M. Eduard Groller;Bart ter Haar Romenij;Anna Vilanova",
                "AuthorAffiliation": "Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Division of Imaging Sciences and the Department of Vascular Surgery, NIHR Comprehensive Biomedical Research Centre of Guy s and St Thomas NHS Foundation Trust, King''s College, UK;Department of Computer Science, Group of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL;Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2006.140;10.1109/tvcg.2009.138",
                "AuthorKeywords": "4D MRI blood-flow, Probing, Flow visualization, Illustrative visualization, Phase-contrast cine MRI",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1001,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1782,
                "i": [
                    1782
                ]
            }
        },
        {
            "name": "Rachel E. Clough",
            "value": 48,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1305,
            "x": -352.9317252973928,
            "y": 77.38990424212743,
            "vy": 0,
            "vx": 0,
            "r": 1.0552677029360968,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Exploration of 4D MRI Blood Flow using Stylistic Visualization",
                "DOI": "10.1109/tvcg.2010.153",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.153",
                "FirstPage": 1339,
                "LastPage": 1347,
                "PaperType": "J",
                "Abstract": "Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters.",
                "AuthorNamesDeduped": "Roy van Pelt;Javier Oliván Bescós;Marcel Breeuwer;Rachel E. Clough;M. Eduard Gröller;Bart M. ter Haar Romeny;Anna Vilanova",
                "AuthorNames": "Roy van Pelt;Javier Olivan Bescos;Marcel Breeuwer;Rachel E. Clough;M. Eduard Groller;Bart ter Haar Romenij;Anna Vilanova",
                "AuthorAffiliation": "Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Department of Clinical Sciences and Advanced Development, Philips Healthcare, Netherlands;Division of Imaging Sciences and the Department of Vascular Surgery, NIHR Comprehensive Biomedical Research Centre of Guy s and St Thomas NHS Foundation Trust, King''s College, UK;Department of Computer Science, Group of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Technische Universiteit Eindhoven, Eindhoven, Noord-Brabant, NL;Department of Biomedical Engineering, Group of Biomedical Image Analysis, Eindhovan University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2006.140;10.1109/tvcg.2009.138",
                "AuthorKeywords": "4D MRI blood-flow, Probing, Flow visualization, Illustrative visualization, Phase-contrast cine MRI",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 40,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 1001,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1782,
                "i": [
                    1782
                ]
            }
        },
        {
            "name": "Bart M. ter Haar Romeny",
            "value": 61,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1306,
            "x": 208.044375344975,
            "y": -295.58000251593336,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Parameter Sensitivity Visualization for DTI fiber Tracking",
                "DOI": "10.1109/tvcg.2009.170",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.170",
                "FirstPage": 1441,
                "LastPage": 1448,
                "PaperType": "J",
                "Abstract": "Fiber tracking of diffusion tensor imaging (DTI) data offers a unique insight into the three-dimensional organisation of white matter structures in the living brain. However, fiber tracking algorithms require a number of user-defined input parameters that strongly affect the output results. Usually the fiber tracking parameters are set once and are then re-used for several patient datasets. However, the stability of the chosen parameters is not evaluated and a small change in the parameter values can give very different results. The user remains completely unaware of such effects. Furthermore, it is difficult to reproduce output results between different users. We propose a visualization tool that allows the user to visually explore how small variations in parameter values affect the output of fiber tracking. With this knowledge the user cannot only assess the stability of commonly used parameter values but also evaluate in a more reliable way the output results between different patients. Existing tools do not provide such information. A small user evaluation of our tool has been done to show the potential of the technique.",
                "AuthorNamesDeduped": "Ralph Brecheisen;Anna Vilanova;Bram Platel;Bart M. ter Haar Romeny",
                "AuthorNames": "Ralph Brecheisen;Anna Vilanova;Bram Platel;Bart ter Haar Romeny",
                "AuthorAffiliation": "Technical University Eindhoven, Netherlands;Maastricht Medical Center, Netherlands;Technical University Eindhoven, Netherlands;Technical University Eindhoven, Netherlands",
                "InternalReferences": "0.1109/tvcg.2008.147;10.1109/visual.2005.1532853;10.1109/tvcg.2007.70518;10.1109/visual.2005.1532778;10.1109/visual.2005.1532779;10.1109/visual.1996.568116;10.1109/visual.1999.809894;10.1109/visual.2004.30;10.1109/visual.2001.964552",
                "AuthorKeywords": "fiber Tracking, Parameter Sensitivity, Stopping Criteria, Diffusion Tensor Imaging, Uncertainty Visualization",
                "AminerCitationCount": 78,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 624,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1932,
                "i": [
                    1932
                ]
            }
        },
        {
            "name": "Min Shih",
            "value": 17,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1307,
            "x": 46.27367410109868,
            "y": 358.6206172059623,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "A Declarative Grammar of Flexible Volume Visualization Pipelines",
                "DOI": "10.1109/tvcg.2018.2864841",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864841",
                "FirstPage": 1050,
                "LastPage": 1059,
                "PaperType": "J",
                "Abstract": "This paper presents a declarative grammar for conveniently and effectively specifying advanced volume visualizations. Existing methods for creating volume visualizations either lack the flexibility to specify sophisticated visualizations or are difficult to use for those unfamiliar with volume rendering implementation and parameterization. Our design provides the ability to quickly create expressive visualizations without knowledge of the volume rendering implementation. It attempts to capture aspects of those difficult but powerful methods while remaining flexible and easy to use. As a proof of concept, our current implementation of the grammar allows users to combine multiple data variables in various ways and define transfer functions for diverse input data. The grammar also has the ability to describe advanced shading effects and create animations. We demonstrate the power and flexibility of our approach using multiple practical volume visualizations.",
                "AuthorNamesDeduped": "Min Shih;Charles Rozhon;Kwan-Liu Ma",
                "AuthorNames": "Min Shih;Charles Rozhon;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US",
                "InternalReferences": "0.1109/visual.2005.1532788;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70555;10.1109/tvcg.2014.2346322;10.1109/tvcg.2009.189;10.1109/tvcg.2015.2467449;10.1109/visual.1992.235219;10.1109/visual.2004.95;10.1109/tvcg.2007.70534;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/scivis.2015.7429514;10.1109/tvcg.2016.2599041",
                "AuthorKeywords": "Volume visualization,direct volume rendering,declarative specification,multivariate/multimodal volume data,animation",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 715,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 721,
                "i": [
                    721
                ]
            }
        },
        {
            "name": "Charles Rozhon",
            "value": 17,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1308,
            "x": -276.4711779102733,
            "y": -233.2674168950863,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "A Declarative Grammar of Flexible Volume Visualization Pipelines",
                "DOI": "10.1109/tvcg.2018.2864841",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864841",
                "FirstPage": 1050,
                "LastPage": 1059,
                "PaperType": "J",
                "Abstract": "This paper presents a declarative grammar for conveniently and effectively specifying advanced volume visualizations. Existing methods for creating volume visualizations either lack the flexibility to specify sophisticated visualizations or are difficult to use for those unfamiliar with volume rendering implementation and parameterization. Our design provides the ability to quickly create expressive visualizations without knowledge of the volume rendering implementation. It attempts to capture aspects of those difficult but powerful methods while remaining flexible and easy to use. As a proof of concept, our current implementation of the grammar allows users to combine multiple data variables in various ways and define transfer functions for diverse input data. The grammar also has the ability to describe advanced shading effects and create animations. We demonstrate the power and flexibility of our approach using multiple practical volume visualizations.",
                "AuthorNamesDeduped": "Min Shih;Charles Rozhon;Kwan-Liu Ma",
                "AuthorNames": "Min Shih;Charles Rozhon;Kwan-Liu Ma",
                "AuthorAffiliation": "University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US;University of California Davis, Davis, CA, US",
                "InternalReferences": "0.1109/visual.2005.1532788;10.1109/tvcg.2009.174;10.1109/tvcg.2011.185;10.1109/tvcg.2007.70555;10.1109/tvcg.2014.2346322;10.1109/tvcg.2009.189;10.1109/tvcg.2015.2467449;10.1109/visual.1992.235219;10.1109/visual.2004.95;10.1109/tvcg.2007.70534;10.1109/tvcg.2014.2346318;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/scivis.2015.7429514;10.1109/tvcg.2016.2599041",
                "AuthorKeywords": "Volume visualization,direct volume rendering,declarative specification,multivariate/multimodal volume data,animation",
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 715,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 721,
                "i": [
                    721
                ]
            }
        },
        {
            "name": "Marzieh Berenjkoub",
            "value": 3,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1309,
            "x": 361.56920092852226,
            "y": -14.755098776690676,
            "vy": 0,
            "vx": 0,
            "r": 1.003454231433506,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Visual Analysis of Spatio-temporal Relations of Pairwise Attributes in Unsteady Flow",
                "DOI": "10.1109/tvcg.2018.2864817",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864817",
                "FirstPage": 1246,
                "LastPage": 1256,
                "PaperType": "J",
                "Abstract": "Despite significant advances in the analysis and visualization of unsteady flow, the interpretation of it's behavior still remains a challenge. In this work, we focus on the linear correlation and non-linear dependency of different physical attributes of unsteady flows to aid their study from a new perspective. Specifically, we extend the existing spatial correlation quantification, i.e. the Local Correlation Coefficient (LCC), to the spatio-temporal domain to study the correlation of attribute-pairs from both the Eulerian and Lagrangian views. To study the dependency among attributes, which need not be linear, we extend and compute the mutual information (MI) among attributes over time. To help visualize and interpret the derived correlation and dependency among attributes associated with a particle, we encode the correlation and dependency values on individual pathlines. Finally, to utilize the correlation and MI computation results to identify regions with interesting flow behavior, we propose a segmentation strategy of the flow domain based on the ranking of the strength of the attributes relations. We have applied our correlation and dependency metrics to a number of 2D and 3D unsteady flows with varying spatio-temporal kernel sizes to demonstrate and assess their effectiveness.",
                "AuthorNamesDeduped": "Marzieh Berenjkoub;Rodolfo Ostilla Monico;Robert S. Laramee;Guoning Chen",
                "AuthorNames": "Marzieh Berenjkoub;Rodolfo Ostilla Monico;Robert S. Laramee;Guoning Chen",
                "AuthorAffiliation": "University of Houston, Houston, TX, US;University of Houston, Houston, TX, US;Swansea University, Swansea, West Glamorgan, GB;University of Houston, Houston, TX, US",
                "InternalReferences": "0.1109/tvcg.2010.131;10.1109/visual.2004.99;10.1109/tvcg.2010.198;10.1109/tvcg.2015.2467200;10.1109/tvcg.2009.200;10.1109/tvcg.2010.131;10.1109/tvcg.2013.133;10.1109/tvcg.2011.249;10.1109/tvcg.2010.132;10.1109/tvcg.2006.165",
                "AuthorKeywords": "Unsteady flow,correlation study,mutual information",
                "AminerCitationCount": 8,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": null,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 722,
                "i": [
                    722
                ]
            }
        },
        {
            "name": "Miquel Feixas",
            "value": 71,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1310,
            "x": -256.7409240471521,
            "y": 255.2138278373931,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Importance-Driven Focus of Attention",
                "DOI": "10.1109/tvcg.2006.152",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.152",
                "FirstPage": 933,
                "LastPage": 940,
                "PaperType": "J",
                "Abstract": "This paper introduces a concept for automatic focusing on features within a volumetric data set. The user selects a focus, i.e., object of interest, from a set of pre-defined features. Our system automatically determines the most expressive view on this feature. A characteristic viewpoint is estimated by a novel information-theoretic framework which is based on the mutual information measure. Viewpoints change smoothly by switching the focus from one feature to another one. This mechanism is controlled by changes in the importance distribution among features in the volume. The highest importance is assigned to the feature in focus. Apart from viewpoint selection, the focusing mechanism also steers visual emphasis by assigning a visually more prominent representation. To allow a clear view on features that are normally occluded by other parts of the volume, the focusing for example incorporates cut-away views",
                "AuthorNamesDeduped": "Ivan Viola;Miquel Feixas;Mateu Sbert;M. Eduard Gröller",
                "AuthorNames": "Ivan Viola;Miquel Feixas;Mateu Sbert;Meister Eduard Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Informatics and Applications, University of Girona, Spain;Institute of Informatics and Applications, University of Girona, Spain;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2005.1532856;10.1109/visual.2005.1532834;10.1109/infvis.2001.963286;10.1109/visual.2005.1532833",
                "AuthorKeywords": "Illustrative visualization, volume visualization, interacting with volumetric datasets, characteristic viewpoint estimation, focus+context techniques",
                "AminerCitationCount": 259,
                "CitationCountCrossRef": 130,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 1068,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2270,
                "i": [
                    2270
                ]
            }
        },
        {
            "name": "Mateu Sbert",
            "value": 71,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1311,
            "x": 16.92475712906498,
            "y": -361.75067739552634,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Importance-Driven Focus of Attention",
                "DOI": "10.1109/tvcg.2006.152",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.152",
                "FirstPage": 933,
                "LastPage": 940,
                "PaperType": "J",
                "Abstract": "This paper introduces a concept for automatic focusing on features within a volumetric data set. The user selects a focus, i.e., object of interest, from a set of pre-defined features. Our system automatically determines the most expressive view on this feature. A characteristic viewpoint is estimated by a novel information-theoretic framework which is based on the mutual information measure. Viewpoints change smoothly by switching the focus from one feature to another one. This mechanism is controlled by changes in the importance distribution among features in the volume. The highest importance is assigned to the feature in focus. Apart from viewpoint selection, the focusing mechanism also steers visual emphasis by assigning a visually more prominent representation. To allow a clear view on features that are normally occluded by other parts of the volume, the focusing for example incorporates cut-away views",
                "AuthorNamesDeduped": "Ivan Viola;Miquel Feixas;Mateu Sbert;M. Eduard Gröller",
                "AuthorNames": "Ivan Viola;Miquel Feixas;Mateu Sbert;Meister Eduard Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Informatics and Applications, University of Girona, Spain;Institute of Informatics and Applications, University of Girona, Spain;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2005.1532856;10.1109/visual.2005.1532834;10.1109/infvis.2001.963286;10.1109/visual.2005.1532833",
                "AuthorKeywords": "Illustrative visualization, volume visualization, interacting with volumetric datasets, characteristic viewpoint estimation, focus+context techniques",
                "AminerCitationCount": 259,
                "CitationCountCrossRef": 130,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 1068,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2270,
                "i": [
                    2270
                ]
            }
        },
        {
            "name": "Thomas Schultz 0001",
            "value": 134,
            "numPapers": 53,
            "cluster": "11",
            "visible": 1,
            "index": 1312,
            "x": 231.9676678210991,
            "y": 278.28223278829756,
            "vy": 0,
            "vx": 0,
            "r": 1.1542890040299367,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Superquadric Glyphs for Symmetric Second-Order Tensors",
                "DOI": "10.1109/tvcg.2010.199",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.199",
                "FirstPage": 1595,
                "LastPage": 1604,
                "PaperType": "J",
                "Abstract": "Symmetric second-order tensor fields play a central role in scientific and biomedical studies as well as in image analysis and feature-extraction methods. The utility of displaying tensor field samples has driven the development of visualization techniques that encode the tensor shape and orientation into the geometry of a tensor glyph. With some exceptions, these methods work only for positive-definite tensors (i.e. having positive eigenvalues, such as diffusion tensors). We expand the scope of tensor glyphs to all symmetric second-order tensors in two and three dimensions, gracefully and unambiguously depicting any combination of positive and negative eigenvalues. We generalize a previous method of superquadric glyphs for positive-definite tensors by drawing upon a larger portion of the superquadric shape space, supplemented with a coloring that indicates the tensor's quadratic form. We show that encoding arbitrary eigenvalue sign combinations requires design choices that differ fundamentally from those in previous work on traceless tensors (arising in the study of liquid crystals). Our method starts with a design of 2-D tensor glyphs guided by principles of symmetry and continuity, and creates 3-D glyphs that include the 2-D glyphs in their axis-aligned cross-sections. A key ingredient of our method is a novel way of mapping from the shape space of three-dimensional symmetric second-order tensors to the unit square. We apply our new glyphs to stress tensors from mechanics, geometry tensors and Hessians from image analysis, and rate-of-deformation tensors in computational fluid dynamics.",
                "AuthorNamesDeduped": "Thomas Schultz 0001;Gordon L. Kindlmann",
                "AuthorNames": "Thomas Schultz;Gordon L. Kindlmann",
                "AuthorAffiliation": "Computer Science Department, Computation Institute, University of Chicago, USA;Computer Science Department, Computation Institute, University of Chicago, USA",
                "InternalReferences": "0.1109/visual.1999.809905;10.1109/tvcg.2006.134;10.1109/visual.1998.745294;10.1109/tvcg.2006.181;10.1109/visual.1991.175773;10.1109/tvcg.2009.184;10.1109/tvcg.2006.182;10.1109/tvcg.2009.177;10.1109/tvcg.2010.166;10.1109/visual.1993.398849;10.1109/visual.2003.1250414;10.1109/visual.1994.346326;10.1109/visual.1997.663929;10.1109/visual.2002.1183797;10.1109/visual.2003.1250376;10.1109/visual.2005.1532774;10.1109/visual.2004.80;10.1109/tvcg.2006.115",
                "AuthorKeywords": "Tensor Glyphs, Stress Tensors, Rate-of-Deformation Tensors, Geometry Tensors, Glyph Design",
                "AminerCitationCount": 109,
                "CitationCountCrossRef": 91,
                "PubsCitedCrossRef": 61,
                "DownloadsXplore": 872,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1770,
                "i": [
                    1770
                ]
            }
        },
        {
            "name": "Brian D. Fisher",
            "value": 106,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 1313,
            "x": -359.1594307911523,
            "y": -48.52322406616683,
            "vy": 0,
            "vx": 0,
            "r": 1.122049510650547,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Visual analytic roadblocks for novice investigators",
                "DOI": "10.1109/vast.2011.6102435",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102435",
                "FirstPage": 3,
                "LastPage": 11,
                "PaperType": "C",
                "Abstract": "We have observed increasing interest in visual analytics tools and their applications in investigative analysis. Despite the growing interest and substantial studies regarding the topic, understanding the major roadblocks of using such tools from novice users' perspectives is still limited. Therefore, we attempted to identify such “visual analytic roadblocks” for novice users in an investigative analysis scenario. To achieve this goal, we reviewed the existing models, theories, and frameworks that could explain the cognitive processes of human-visualization interaction in investigative analysis. Then, we conducted a qualitative experiment with six novice participants, using a slightly modified version of pair analytics, and analyzed the results through the open-coding method. As a result, we came up with four visual analytic roadblocks and explained these roadblocks using existing cognitive models and theories. We also provided design suggestions to overcome these roadblocks.",
                "AuthorNamesDeduped": "Bum Chul Kwon;Brian D. Fisher;Ji Soo Yi",
                "AuthorNames": "Bum chul Kwon;Brian Fisher;Ji Soo Yi",
                "AuthorAffiliation": "Purdue University, USA;Simon Fraser University, Canada;Purdue University, USA",
                "InternalReferences": "0.1109/infvis.2004.10;10.1109/vast.2007.4389006;10.1109/tvcg.2010.164;10.1109/vast.2009.5333878;10.1109/tvcg.2010.179;10.1109/tvcg.2008.121;10.1109/infvis.2004.5;10.1109/tvcg.2007.70515;10.1109/tvcg.2010.177;10.1109/vast.2006.261416;10.1109/tvcg.2008.171;10.1109/tvcg.2008.109;10.1109/tvcg.2007.70535;10.1109/vast.2008.4677361;10.1109/tvcg.2007.70589;10.1109/tvcg.2007.70594",
                "AuthorKeywords": "Visual analytics, investigative analysis, cognitive model, framework, roadblock, qualitative experiment",
                "AminerCitationCount": 47,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 660,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1597,
                "i": [
                    1597
                ]
            }
        },
        {
            "name": "Fang-Xin Ou-Yang",
            "value": 55,
            "numPapers": 23,
            "cluster": "1",
            "visible": 1,
            "index": 1314,
            "x": 297.7232207874875,
            "y": -206.90791140970174,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "An Interactive Method to Improve Crowdsourced Annotations",
                "DOI": "10.1109/tvcg.2018.2864843",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864843",
                "FirstPage": 235,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "In order to effectively infer correct labels from noisy crowdsourced annotations, learning-from-crowds models have introduced expert validation. However, little research has been done on facilitating the validation procedure. In this paper, we propose an interactive method to assist experts in verifying uncertain instance labels and unreliable workers. Given the instance labels and worker reliability inferred from a learning-from-crowds model, candidate instances and workers are selected for expert validation. The influence of verified results is propagated to relevant instances and workers through the learning-from-crowds model. To facilitate the validation of annotations, we have developed a confusion visualization to indicate the confusing classes for further exploration, a constrained projection method to show the uncertain labels in context, and a scatter-plot-based visualization to illustrate worker reliability. The three visualizations are tightly integrated with the learning-from-crowds model to provide an iterative and progressive environment for data validation. Two case studies were conducted that demonstrate our approach offers an efficient method for validating and improving crowdsourced annotations.",
                "AuthorNamesDeduped": "Shixia Liu;Changjian Chen;Yafeng Lu;Fang-Xin Ou-Yang;Bin Wang 0021",
                "AuthorNames": "Shixia Liu;Changjian Chen;Yafeng Lu;Fangxin Ouyang;Bin Wang",
                "AuthorAffiliation": "Tsinghua University, Beijing, Beijing, CN;Tsinghua University, Beijing, Beijing, CN;Arizona State University, Tempe, AZ, US;Tsinghua University, Beijing, Beijing, CN;Tsinghua University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598592;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/vast.2016.7883520;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.212;10.1109/tvcg.2011.239;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598445;10.1109/tvcg.2015.2467622;10.1109/tvcg.2015.2467554;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/vast.2016.7883508;10.1109/tvcg.2009.139;10.1109/tvcg.2016.2598829;10.1109/tvcg.2017.2745078;10.1109/vast.2014.7042494;10.1109/tvcg.2017.2744685;10.1109/tvcg.2013.164;10.1109/vast.2016.7883514",
                "AuthorKeywords": "Crowdsourcing,learning-from-crowds,interactive visualization,focus + context",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 1538,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 741,
                "i": [
                    741
                ]
            }
        },
        {
            "name": "Bin Wang 0021",
            "value": 55,
            "numPapers": 23,
            "cluster": "1",
            "visible": 1,
            "index": 1315,
            "x": -79.79788696250976,
            "y": 353.8111038906474,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "An Interactive Method to Improve Crowdsourced Annotations",
                "DOI": "10.1109/tvcg.2018.2864843",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864843",
                "FirstPage": 235,
                "LastPage": 245,
                "PaperType": "J",
                "Abstract": "In order to effectively infer correct labels from noisy crowdsourced annotations, learning-from-crowds models have introduced expert validation. However, little research has been done on facilitating the validation procedure. In this paper, we propose an interactive method to assist experts in verifying uncertain instance labels and unreliable workers. Given the instance labels and worker reliability inferred from a learning-from-crowds model, candidate instances and workers are selected for expert validation. The influence of verified results is propagated to relevant instances and workers through the learning-from-crowds model. To facilitate the validation of annotations, we have developed a confusion visualization to indicate the confusing classes for further exploration, a constrained projection method to show the uncertain labels in context, and a scatter-plot-based visualization to illustrate worker reliability. The three visualizations are tightly integrated with the learning-from-crowds model to provide an iterative and progressive environment for data validation. Two case studies were conducted that demonstrate our approach offers an efficient method for validating and improving crowdsourced annotations.",
                "AuthorNamesDeduped": "Shixia Liu;Changjian Chen;Yafeng Lu;Fang-Xin Ou-Yang;Bin Wang 0021",
                "AuthorNames": "Shixia Liu;Changjian Chen;Yafeng Lu;Fangxin Ouyang;Bin Wang",
                "AuthorAffiliation": "Tsinghua University, Beijing, Beijing, CN;Tsinghua University, Beijing, Beijing, CN;Arizona State University, Tempe, AZ, US;Tsinghua University, Beijing, Beijing, CN;Tsinghua University, Beijing, Beijing, CN",
                "InternalReferences": "0.1109/tvcg.2016.2598592;10.1109/vast.2014.7042480;10.1109/tvcg.2017.2744818;10.1109/vast.2016.7883520;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346594;10.1109/tvcg.2013.212;10.1109/tvcg.2011.239;10.1109/tvcg.2012.277;10.1109/vast.2012.6400492;10.1109/tvcg.2016.2598445;10.1109/tvcg.2015.2467622;10.1109/tvcg.2015.2467554;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2017.2744378;10.1109/vast.2016.7883508;10.1109/tvcg.2009.139;10.1109/tvcg.2016.2598829;10.1109/tvcg.2017.2745078;10.1109/vast.2014.7042494;10.1109/tvcg.2017.2744685;10.1109/tvcg.2013.164;10.1109/vast.2016.7883514",
                "AuthorKeywords": "Crowdsourcing,learning-from-crowds,interactive visualization,focus + context",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 42,
                "PubsCitedCrossRef": 65,
                "DownloadsXplore": 1538,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 741,
                "i": [
                    741
                ]
            }
        },
        {
            "name": "Christian Rohrdantz",
            "value": 109,
            "numPapers": 1,
            "cluster": "1",
            "visible": 1,
            "index": 1316,
            "x": -180.22394943937925,
            "y": -314.91161942435855,
            "vy": 0,
            "vx": 0,
            "r": 1.125503742084053,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "Document Cards: A Top Trumps Visualization for Documents",
                "DOI": "10.1109/tvcg.2009.139",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.139",
                "FirstPage": 1145,
                "LastPage": 1152,
                "PaperType": "J",
                "Abstract": "Finding suitable, less space consuming views for a document's main content is crucial to provide convenient access to large document collections on display devices of different size. We present a novel compact visualization which represents the document's key semantic as a mixture of images and important key terms, similar to cards in a top trumps game. The key terms are extracted using an advanced text mining approach based on a fully automatic document structure extraction. The images and their captions are extracted using a graphical heuristic and the captions are used for a semi-semantic image weighting. Furthermore, we use the image color histogram for classification and show at least one representative from each non-empty image class. The approach is demonstrated for the IEEE InfoVis publications of a complete year. The method can easily be applied to other publication collections and sets of documents which contain images.",
                "AuthorNamesDeduped": "Hendrik Strobelt;Daniela Oelke;Christian Rohrdantz;Andreas Stoffel;Daniel A. Keim;Oliver Deussen",
                "AuthorNames": "Hendrik Strobelt;Daniela Oelke;Christian Rohrdantz;Andreas Stoffel;Daniel A. Keim;Oliver Deussen",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;Universitat Konstanz, Konstanz, Baden-Württemberg, DE;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": null,
                "AuthorKeywords": "document visualization, visual summary, content extraction, document collection browsing",
                "AminerCitationCount": 131,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 1733,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1828,
                "i": [
                    1828
                ]
            }
        },
        {
            "name": "Hang Su 0006",
            "value": 93,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 1317,
            "x": 345.7425103584615,
            "y": 110.50844551901511,
            "vy": 0,
            "vx": 0,
            "r": 1.1070811744386875,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Analyzing the Noise Robustness of Deep Neural Networks",
                "DOI": "10.1109/vast.2018.8802509",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802509",
                "FirstPage": 60,
                "LastPage": 71,
                "PaperType": "C",
                "Abstract": "Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples. These examples are intentionally designed by making imperceptible perturbations and often mislead a DNN into making an incorrect prediction. This phenomenon means that there is significant risk in applying DNNs to safety-critical applications, such as driverless cars. To address this issue, we present a visual analytics approach to explain the primary cause of the wrong predictions introduced by adversarial examples. The key is to analyze the datapaths of the adversarial examples and compare them with those of the normal examples. A datapath is a group of critical neurons and their connections. To this end, we formulate the datapath extraction as a subset selection problem and approximately solve it based on back-propagation. A multi-level visualization consisting of a segmented DAG (layer level), an Euler diagram (feature map level), and a heat map (neuron level), has been designed to help experts investigate datapaths from the high-level layers to the detailed neuron activations. Two case studies are conducted that demonstrate the promise of our approach in support of explaining the working mechanism of adversarial examples.",
                "AuthorNamesDeduped": "Mengchen Liu;Shixia Liu;Hang Su 0006;Kelei Cao;Jun Zhu 0001",
                "AuthorNames": "Mengchen Liu;Shixia Liu;Hang Su;Kelei Cao;Jun Zhu",
                "AuthorAffiliation": "School of Software, Tsinghua University;School of Software, Tsinghua University;Dept.of Comp.Sci.Tech., Tsinghua University;School of Software, Tsinghua University;Dept.of Comp.Sci.Tech., Tsinghua University",
                "InternalReferences": "0.1109/tvcg.2015.2467618;10.1109/tvcg.2011.186;10.1109/tvcg.2016.2598496;10.1109/tvcg.2017.2744683;10.1109/tvcg.2014.2346431;10.1109/tvcg.2014.2346433;10.1109/tvcg.2017.2744199;10.1109/tvcg.2017.2744718;10.1109/tvcg.2017.2744938;10.1109/tvcg.2016.2598831;10.1109/tvcg.2013.196;10.1109/tvcg.2011.209;10.1109/tvcg.2017.2744358;10.1109/tvcg.2016.2598838;10.1109/tvcg.2010.210;10.1109/tvcg.2017.2744018;10.1109/tvcg.2011.183;10.1109/tvcg.2017.2744158;10.1109/visual.2005.1532820;10.1109/vast.2014.7042494;10.1109/tvcg.2017.2744878;10.1109/tvcg.2018.2865041;10.1109/vast.2017.8585721",
                "AuthorKeywords": "Deep neural networks,robustness,adversarial examples,back propagation,multi-level visualization.",
                "AminerCitationCount": 55,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 64,
                "DownloadsXplore": 851,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 745,
                "i": [
                    745
                ]
            }
        },
        {
            "name": "Nan Cao",
            "value": 35,
            "numPapers": 19,
            "cluster": "1",
            "visible": 1,
            "index": 1318,
            "x": -329.7122061356715,
            "y": 152.1179184880875,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "EnsembleLens: Ensemble-based Visual Exploration of Anomaly Detection Algorithms with Multidimensional Data",
                "DOI": "10.1109/tvcg.2018.2864825",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864825",
                "FirstPage": 109,
                "LastPage": 119,
                "PaperType": "J",
                "Abstract": "The results of anomaly detection are sensitive to the choice of detection algorithms as they are specialized for different properties of data, especially for multidimensional data. Thus, it is vital to select the algorithm appropriately. To systematically select the algorithms, ensemble analysis techniques have been developed to support the assembly and comparison of heterogeneous algorithms. However, challenges remain due to the absence of the ground truth, interpretation, or evaluation of these anomaly detectors. In this paper, we present a visual analytics system named EnsembleLens that evaluates anomaly detection algorithms based on the ensemble analysis process. The system visualizes the ensemble processes and results by a set of novel visual designs and multiple coordinated contextual views to meet the requirements of correlation analysis, assessment and reasoning of anomaly detection algorithms. We also introduce an interactive analysis workflow that dynamically produces contextualized and interpretable data summaries that allow further refinements of exploration results based on user feedback. We demonstrate the effectiveness of EnsembleLens through a quantitative evaluation, three case studies with real-world data and interviews with two domain experts.",
                "AuthorNamesDeduped": "Ke Xu;Meng Xia;Xing Mu;Yun Wang 0012;Nan Cao",
                "AuthorNames": "Ke Xu;Meng Xia;Xing Mu;Yun Wang;Nan Cao",
                "AuthorAffiliation": "Hong Kong University of Science and Technology, Kowloon, HK;Hong Kong University of Science and Technology, Kowloon, HK;Hong Kong University of Science and Technology, Kowloon, HK;Hong Kong University of Science and Technology, Kowloon, HK;iDV Lab, Tongji University",
                "InternalReferences": "0.1109/scivis.2015.7429487;10.1109/tvcg.2017.2744419;10.1109/tvcg.2014.2346448;10.1109/tvcg.2015.2468093;10.1109/visual.1990.146402;10.1109/tvcg.2017.2745178;10.1109/tvcg.2010.181;10.1109/tvcg.2016.2598830;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "Algorithm Evaluation,Ensemble Analysis,Anomaly Detection,Visual Analysis,Multidimensional Data",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 30,
                "PubsCitedCrossRef": 80,
                "DownloadsXplore": 1398,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 750,
                "i": [
                    750
                ]
            }
        },
        {
            "name": "Michelle Dowling",
            "value": 20,
            "numPapers": 26,
            "cluster": "4",
            "visible": 1,
            "index": 1319,
            "x": 140.41856109394817,
            "y": -335.0113844338803,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SIRIUS: Dual, Symmetric, Interactive Dimension Reductions",
                "DOI": "10.1109/tvcg.2018.2865047",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865047",
                "FirstPage": 172,
                "LastPage": 182,
                "PaperType": "J",
                "Abstract": "Much research has been done regarding how to visualize and interact with observations and attributes of high-dimensional data for exploratory data analysis. From the analyst's perceptual and cognitive perspective, current visualization approaches typically treat the observations of the high-dimensional dataset very differently from the attributes. Often, the attributes are treated as inputs (e.g., sliders), and observations as outputs (e.g., projection plots), thus emphasizing investigation of the observations. However, there are many cases in which analysts wish to investigate both the observations and the attributes of the dataset, suggesting a symmetry between how analysts think about attributes and observations. To address this, we define SIRIUS (Symmetric Interactive Representations In a Unified System), a symmetric, dual projection technique to support exploratory data analysis of high-dimensional data. We provide an example implementation of SIRIUS and demonstrate how this symmetry affords additional insights.",
                "AuthorNamesDeduped": "Michelle Dowling;John E. Wenskovitch;J. T. Fry;Scotland Leman;Leanna House;Chris North 0001",
                "AuthorNames": "Michelle Dowling;John Wenskovitch;J.T. Fry;Scotland Leman;Leanna House;Chris North",
                "AuthorAffiliation": "Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US",
                "InternalReferences": "0.1109/vast.2012.6400493;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/vast.2012.6400486;10.1109/tvcg.2015.2467552;10.1109/vast.2010.5652443;10.1109/tvcg.2012.260;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2016.2598445;10.1109/tvcg.2016.2598446;10.1109/infvis.2003.1249020;10.1109/tvcg.2008.173;10.1109/tvcg.2011.178;10.1109/tvcg.2012.256;10.1109/tvcg.2016.2598479;10.1109/tvcg.2013.150",
                "AuthorKeywords": "Dimension reduction,semantic interaction,exploratory data analysis,observation projection,attribute projection",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1064,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 753,
                "i": [
                    753
                ]
            }
        },
        {
            "name": "J. T. Fry",
            "value": 18,
            "numPapers": 16,
            "cluster": "4",
            "visible": 1,
            "index": 1320,
            "x": 122.80316934242961,
            "y": 342.0078677449608,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SIRIUS: Dual, Symmetric, Interactive Dimension Reductions",
                "DOI": "10.1109/tvcg.2018.2865047",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865047",
                "FirstPage": 172,
                "LastPage": 182,
                "PaperType": "J",
                "Abstract": "Much research has been done regarding how to visualize and interact with observations and attributes of high-dimensional data for exploratory data analysis. From the analyst's perceptual and cognitive perspective, current visualization approaches typically treat the observations of the high-dimensional dataset very differently from the attributes. Often, the attributes are treated as inputs (e.g., sliders), and observations as outputs (e.g., projection plots), thus emphasizing investigation of the observations. However, there are many cases in which analysts wish to investigate both the observations and the attributes of the dataset, suggesting a symmetry between how analysts think about attributes and observations. To address this, we define SIRIUS (Symmetric Interactive Representations In a Unified System), a symmetric, dual projection technique to support exploratory data analysis of high-dimensional data. We provide an example implementation of SIRIUS and demonstrate how this symmetry affords additional insights.",
                "AuthorNamesDeduped": "Michelle Dowling;John E. Wenskovitch;J. T. Fry;Scotland Leman;Leanna House;Chris North 0001",
                "AuthorNames": "Michelle Dowling;John Wenskovitch;J.T. Fry;Scotland Leman;Leanna House;Chris North",
                "AuthorAffiliation": "Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US;Virginia Polytechnic Institute and State University, Blacksburg, VA, US",
                "InternalReferences": "0.1109/vast.2012.6400493;10.1109/infvis.2005.1532136;10.1109/vast.2014.7042492;10.1109/vast.2012.6400486;10.1109/tvcg.2015.2467552;10.1109/vast.2010.5652443;10.1109/tvcg.2012.260;10.1109/vast.2011.6102449;10.1109/tvcg.2011.220;10.1109/tvcg.2016.2598445;10.1109/tvcg.2016.2598446;10.1109/infvis.2003.1249020;10.1109/tvcg.2008.173;10.1109/tvcg.2011.178;10.1109/tvcg.2012.256;10.1109/tvcg.2016.2598479;10.1109/tvcg.2013.150",
                "AuthorKeywords": "Dimension reduction,semantic interaction,exploratory data analysis,observation projection,attribute projection",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1064,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 753,
                "i": [
                    753
                ]
            }
        },
        {
            "name": "Holger Stitz",
            "value": 56,
            "numPapers": 22,
            "cluster": "4",
            "visible": 1,
            "index": 1321,
            "x": -321.69596968023666,
            "y": -169.29767597782399,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "KnowledgePearls: Provenance-Based Visualization Retrieval",
                "DOI": "10.1109/tvcg.2018.2865024",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865024",
                "FirstPage": 120,
                "LastPage": 130,
                "PaperType": "J",
                "Abstract": "Storing analytical provenance generates a knowledge base with a large potential for recalling previous results and guiding users in future analyses. However, without extensive manual creation of meta information and annotations by the users, search and retrieval of analysis states can become tedious. We present KnowledgePearls, a solution for efficient retrieval of analysis states that are structured as provenance graphs containing automatically recorded user interactions and visualizations. As a core component, we describe a visual interface for querying and exploring analysis states based on their similarity to a partial definition of a requested analysis state. Depending on the use case, this definition may be provided explicitly by the user by formulating a search query or inferred from given reference states. We explain our approach using the example of efficient retrieval of demographic analyses by Hans Rosling and discuss our implementation for a fast look-up of previous states. Our approach is independent of the underlying visualization framework. We discuss the applicability for visualizations which are based on the declarative grammar Vega and we use a Vega-based implementation of Gapminder as guiding example. We additionally present a biomedical case study to illustrate how KnowledgePearls facilitates the exploration process by recalling states from earlier analyses.",
                "AuthorNamesDeduped": "Holger Stitz;Samuel Gratzl;Harald Piringer;Thomas Zichner;Marc Streit",
                "AuthorNames": "Holger Stitz;Samuel Gratzl;Harald Piringer;Thomas Zichner;Marc Streit",
                "AuthorAffiliation": "Johannes Kepler Universitat Linz, Linz, AT;Johannes Kepler Universitat Linz, Linz, AT;VRVis Research Center, Austria;Boehringer Ingelheim RCV GmbH & Co KG, Austria;Johannes Kepler Universitat Linz, Linz, AT",
                "InternalReferences": "0.1109/visual.2005.1532788;10.1109/tvcg.2011.229;10.1109/tvcg.2013.155;10.1109/tvcg.2009.176;10.1109/vast.2008.4677365;10.1109/infvis.2004.2;10.1109/tvcg.2012.271;10.1109/tvcg.2016.2598589;10.1109/tvcg.2017.2744320;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2006.142;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/infvis.2005.1532142;10.1109/tvcg.2013.173;10.1109/tvcg.2008.137;10.1109/tvcg.2010.184",
                "AuthorKeywords": "Visualization provenance,interaction provenance,retrieval",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1942,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 754,
                "i": [
                    754
                ]
            }
        },
        {
            "name": "Thomas Zichner",
            "value": 28,
            "numPapers": 18,
            "cluster": "5",
            "visible": 1,
            "index": 1322,
            "x": 351.70052641765096,
            "y": -92.50264708400064,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "KnowledgePearls: Provenance-Based Visualization Retrieval",
                "DOI": "10.1109/tvcg.2018.2865024",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865024",
                "FirstPage": 120,
                "LastPage": 130,
                "PaperType": "J",
                "Abstract": "Storing analytical provenance generates a knowledge base with a large potential for recalling previous results and guiding users in future analyses. However, without extensive manual creation of meta information and annotations by the users, search and retrieval of analysis states can become tedious. We present KnowledgePearls, a solution for efficient retrieval of analysis states that are structured as provenance graphs containing automatically recorded user interactions and visualizations. As a core component, we describe a visual interface for querying and exploring analysis states based on their similarity to a partial definition of a requested analysis state. Depending on the use case, this definition may be provided explicitly by the user by formulating a search query or inferred from given reference states. We explain our approach using the example of efficient retrieval of demographic analyses by Hans Rosling and discuss our implementation for a fast look-up of previous states. Our approach is independent of the underlying visualization framework. We discuss the applicability for visualizations which are based on the declarative grammar Vega and we use a Vega-based implementation of Gapminder as guiding example. We additionally present a biomedical case study to illustrate how KnowledgePearls facilitates the exploration process by recalling states from earlier analyses.",
                "AuthorNamesDeduped": "Holger Stitz;Samuel Gratzl;Harald Piringer;Thomas Zichner;Marc Streit",
                "AuthorNames": "Holger Stitz;Samuel Gratzl;Harald Piringer;Thomas Zichner;Marc Streit",
                "AuthorAffiliation": "Johannes Kepler Universitat Linz, Linz, AT;Johannes Kepler Universitat Linz, Linz, AT;VRVis Research Center, Austria;Boehringer Ingelheim RCV GmbH & Co KG, Austria;Johannes Kepler Universitat Linz, Linz, AT",
                "InternalReferences": "0.1109/visual.2005.1532788;10.1109/tvcg.2011.229;10.1109/tvcg.2013.155;10.1109/tvcg.2009.176;10.1109/vast.2008.4677365;10.1109/infvis.2004.2;10.1109/tvcg.2012.271;10.1109/tvcg.2016.2598589;10.1109/tvcg.2017.2744320;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2599030;10.1109/tvcg.2015.2467091;10.1109/tvcg.2006.142;10.1109/tvcg.2017.2745219;10.1109/infvis.2000.885086;10.1109/infvis.2005.1532142;10.1109/tvcg.2013.173;10.1109/tvcg.2008.137;10.1109/tvcg.2010.184",
                "AuthorKeywords": "Visualization provenance,interaction provenance,retrieval",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1942,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 754,
                "i": [
                    754
                ]
            }
        },
        {
            "name": "Alexander Kumpf",
            "value": 54,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1323,
            "x": -196.92279105096708,
            "y": 305.89444971214687,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Analysis of the Temporal Evolution of Ensemble Forecast Sensitivities",
                "DOI": "10.1109/tvcg.2018.2864901",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864901",
                "FirstPage": 98,
                "LastPage": 108,
                "PaperType": "J",
                "Abstract": "Ensemble sensitivity analysis (ESA) has been established in the atmospheric sciences as a correlation-based approach to determine the sensitivity of a scalar forecast quantity computed by a numerical weather prediction model to changes in another model variable at a different model state. Its applications include determining the origin of forecast errors and placing targeted observations to improve future forecasts. We - a team of visualization scientists and meteorologists - present a visual analysis framework to improve upon current practice of ESA. We support the user in selecting regions to compute a meaningful target forecast quantity by embedding correlation-based grid-point clustering to obtain statistically coherent regions. The evolution of sensitivity features computed via ESA are then traced through time, by integrating a quantitative measure of feature matching into optical-flow-based feature assignment, and displayed by means of a swipe-path showing the geo-spatial evolution of the sensitivities. Visualization of the internal correlation structure of computed features guides the user towards those features robustly predicting a certain weather event. We demonstrate the use of our method by application to real-world 2D and 3D cases that occurred during the 2016 NAWDEX field campaign, showing the interactive generation of hypothesis chains to explore how atmospheric processes sensitive to each other are interrelated.",
                "AuthorNamesDeduped": "Alexander Kumpf;Marc Rautenhaus;Michael Riemer;Rüdiger Westermann",
                "AuthorNames": "Alexander Kumpf;Marc Rautenhaus;Michael Riemer;Rüdiger Westermann",
                "AuthorAffiliation": "Technische Universitat Munchen, Munchen, Bayern, DE;Technische Universitat Munchen, Munchen, Bayern, DE;Universitat Hamburg, Hamburg, Hamburg, DE;Technische Universitat Munchen, Munchen, Bayern, DE",
                "InternalReferences": "0.1109/tvcg.2013.131;10.1109/visual.2004.46;10.1109/tvcg.2017.2743989;10.1109/tvcg.2017.2745178;10.1109/tvcg.2006.165",
                "AuthorKeywords": "Correlation,clustering,tracking,ensemble visualization",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 875,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 758,
                "i": [
                    758
                ]
            }
        },
        {
            "name": "Michael Riemer",
            "value": 54,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1324,
            "x": -61.4471950978418,
            "y": -358.7119209262605,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Visual Analysis of the Temporal Evolution of Ensemble Forecast Sensitivities",
                "DOI": "10.1109/tvcg.2018.2864901",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864901",
                "FirstPage": 98,
                "LastPage": 108,
                "PaperType": "J",
                "Abstract": "Ensemble sensitivity analysis (ESA) has been established in the atmospheric sciences as a correlation-based approach to determine the sensitivity of a scalar forecast quantity computed by a numerical weather prediction model to changes in another model variable at a different model state. Its applications include determining the origin of forecast errors and placing targeted observations to improve future forecasts. We - a team of visualization scientists and meteorologists - present a visual analysis framework to improve upon current practice of ESA. We support the user in selecting regions to compute a meaningful target forecast quantity by embedding correlation-based grid-point clustering to obtain statistically coherent regions. The evolution of sensitivity features computed via ESA are then traced through time, by integrating a quantitative measure of feature matching into optical-flow-based feature assignment, and displayed by means of a swipe-path showing the geo-spatial evolution of the sensitivities. Visualization of the internal correlation structure of computed features guides the user towards those features robustly predicting a certain weather event. We demonstrate the use of our method by application to real-world 2D and 3D cases that occurred during the 2016 NAWDEX field campaign, showing the interactive generation of hypothesis chains to explore how atmospheric processes sensitive to each other are interrelated.",
                "AuthorNamesDeduped": "Alexander Kumpf;Marc Rautenhaus;Michael Riemer;Rüdiger Westermann",
                "AuthorNames": "Alexander Kumpf;Marc Rautenhaus;Michael Riemer;Rüdiger Westermann",
                "AuthorAffiliation": "Technische Universitat Munchen, Munchen, Bayern, DE;Technische Universitat Munchen, Munchen, Bayern, DE;Universitat Hamburg, Hamburg, Hamburg, DE;Technische Universitat Munchen, Munchen, Bayern, DE",
                "InternalReferences": "0.1109/tvcg.2013.131;10.1109/visual.2004.46;10.1109/tvcg.2017.2743989;10.1109/tvcg.2017.2745178;10.1109/tvcg.2006.165",
                "AuthorKeywords": "Correlation,clustering,tracking,ensemble visualization",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 875,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 758,
                "i": [
                    758
                ]
            }
        },
        {
            "name": "Hong Wang",
            "value": 44,
            "numPapers": 40,
            "cluster": "1",
            "visible": 1,
            "index": 1325,
            "x": 287.7242250330541,
            "y": 223.08018811433797,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "A Visual Analytics Framework for Spatiotemporal Trade Network Analysis",
                "DOI": "10.1109/tvcg.2018.2864844",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864844",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Economic globalization is increasing connectedness among regions of the world, creating complex interdependencies within various supply chains. Recent studies have indicated that changes and disruptions within such networks can serve as indicators for increased risks of violence and armed conflicts. This is especially true of countries that may not be able to compete for scarce commodities during supply shocks. Thus, network-induced vulnerability to supply disruption is typically exported from wealthier populations to disadvantaged populations. As such, researchers and stakeholders concerned with supply chains, political science, environmental studies, etc. need tools to explore the complex dynamics within global trade networks and how the structure of these networks relates to regional instability. However, the multivariate, spatiotemporal nature of the network structure creates a bottleneck in the extraction and analysis of correlations and anomalies for exploratory data analysis and hypothesis generation. Working closely with experts in political science and sustainability, we have developed a highly coordinated, multi-view framework that utilizes anomaly detection, network analytics, and spatiotemporal visualization methods for exploring the relationship between global trade networks and regional instability. Requirements for analysis and initial research questions to be investigated are elicited from domain experts, and a variety of visual encoding techniques for rapid assessment of analysis and correlations between trade goods, network patterns, and time series signatures are explored. We demonstrate the application of our framework through case studies focusing on armed conflicts in Africa, regional instability measures, and their relationship to international global trade.",
                "AuthorNamesDeduped": "Hong Wang;Yafeng Lu;Shade T. Shutters;Michael Steptoe;Feng Wang 0012;Steven Landis;Ross Maciejewski",
                "AuthorNames": "Hong Wang;Yafeng Lu;Shade T. Shutters;Michael Steptoe;Feng Wang;Steven Landis;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University;Arizona State University;Arizona State University;Arizona State University;GE Global Research;University of Nevada;Arizona State University",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2011.202;10.1109/vast.2012.6400557;10.1109/tvcg.2008.135;10.1109/vast.2012.6400485;10.1109/tvcg.2014.2346682;10.1109/tvcg.2009.143;10.1109/tvcg.2014.2346271;10.1109/tvcg.2015.2467991;10.1109/vast.2012.6400491;10.1109/infvis.1996.559226;10.1109/infvis.2005.1532150;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864887",
                "AuthorKeywords": "Global trade network,anomaly detection,visual analytics",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 1441,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 759,
                "i": [
                    759
                ]
            }
        },
        {
            "name": "Michael Steptoe",
            "value": 44,
            "numPapers": 27,
            "cluster": "1",
            "visible": 1,
            "index": 1326,
            "x": -362.9842370104094,
            "y": 29.873795908302384,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "A Visual Analytics Framework for Spatiotemporal Trade Network Analysis",
                "DOI": "10.1109/tvcg.2018.2864844",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864844",
                "FirstPage": 331,
                "LastPage": 341,
                "PaperType": "J",
                "Abstract": "Economic globalization is increasing connectedness among regions of the world, creating complex interdependencies within various supply chains. Recent studies have indicated that changes and disruptions within such networks can serve as indicators for increased risks of violence and armed conflicts. This is especially true of countries that may not be able to compete for scarce commodities during supply shocks. Thus, network-induced vulnerability to supply disruption is typically exported from wealthier populations to disadvantaged populations. As such, researchers and stakeholders concerned with supply chains, political science, environmental studies, etc. need tools to explore the complex dynamics within global trade networks and how the structure of these networks relates to regional instability. However, the multivariate, spatiotemporal nature of the network structure creates a bottleneck in the extraction and analysis of correlations and anomalies for exploratory data analysis and hypothesis generation. Working closely with experts in political science and sustainability, we have developed a highly coordinated, multi-view framework that utilizes anomaly detection, network analytics, and spatiotemporal visualization methods for exploring the relationship between global trade networks and regional instability. Requirements for analysis and initial research questions to be investigated are elicited from domain experts, and a variety of visual encoding techniques for rapid assessment of analysis and correlations between trade goods, network patterns, and time series signatures are explored. We demonstrate the application of our framework through case studies focusing on armed conflicts in Africa, regional instability measures, and their relationship to international global trade.",
                "AuthorNamesDeduped": "Hong Wang;Yafeng Lu;Shade T. Shutters;Michael Steptoe;Feng Wang 0012;Steven Landis;Ross Maciejewski",
                "AuthorNames": "Hong Wang;Yafeng Lu;Shade T. Shutters;Michael Steptoe;Feng Wang;Steven Landis;Ross Maciejewski",
                "AuthorAffiliation": "Arizona State University;Arizona State University;Arizona State University;Arizona State University;GE Global Research;University of Nevada;Arizona State University",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2011.202;10.1109/vast.2012.6400557;10.1109/tvcg.2008.135;10.1109/vast.2012.6400485;10.1109/tvcg.2014.2346682;10.1109/tvcg.2009.143;10.1109/tvcg.2014.2346271;10.1109/tvcg.2015.2467991;10.1109/vast.2012.6400491;10.1109/infvis.1996.559226;10.1109/infvis.2005.1532150;10.1109/tvcg.2016.2598885;10.1109/tvcg.2018.2864887",
                "AuthorKeywords": "Global trade network,anomaly detection,visual analytics",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 1441,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 759,
                "i": [
                    759
                ]
            }
        },
        {
            "name": "Abish Malik",
            "value": 141,
            "numPapers": 26,
            "cluster": "6",
            "visible": 1,
            "index": 1327,
            "x": 247.56708358853956,
            "y": -267.32104130327093,
            "vy": 0,
            "vx": 0,
            "r": 1.162348877374784,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Proactive Spatiotemporal Resource Allocation and Predictive Visual Analytics for Community Policing and Law Enforcement",
                "DOI": "10.1109/tvcg.2014.2346926",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346926",
                "FirstPage": 1863,
                "LastPage": 1872,
                "PaperType": "J",
                "Abstract": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
                "AuthorNamesDeduped": "Abish Malik;Ross Maciejewski;Sherry Towers;Sean McCullough;David S. Ebert",
                "AuthorNames": "Abish Malik;Ross Maciejewski;Sherry Towers;Sean McCullough;David S. Ebert",
                "AuthorAffiliation": "Purdue University;Arizona State University;Arizona State University;Purdue University;Purdue University",
                "InternalReferences": "0.1109/tvcg.2013.125;10.1109/tvcg.2013.206;10.1109/vast.2012.6400491;10.1109/vast.2007.4389006;10.1109/tvcg.2013.200",
                "AuthorKeywords": "Visual Analytics, Natural Scales, Seasonal Trend decomposition based on Loess (STL), Law Enforcement",
                "AminerCitationCount": 97,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1909,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1257,
                "i": [
                    1257
                ]
            }
        },
        {
            "name": "Daniel Orban",
            "value": 38,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1328,
            "x": -1.9762377479577065,
            "y": 364.48058176583777,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Drag and Track: A Direct Manipulation Interface for Contextualizing Data Instances within a Continuous Parameter Space",
                "DOI": "10.1109/tvcg.2018.2865051",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865051",
                "FirstPage": 256,
                "LastPage": 266,
                "PaperType": "J",
                "Abstract": "We present a direct manipulation technique that allows material scientists to interactively highlight relevant parameterized simulation instances located in dimensionally reduced spaces, enabling a user-defined understanding of a continuous parameter space. Our goals are two-fold: first, to build a user-directed intuition of dimensionally reduced data, and second, to provide a mechanism for creatively exploring parameter relationships in parameterized simulation sets, called ensembles. We start by visualizing ensemble data instances in dimensionally reduced scatter plots. To understand these abstract views, we employ user-defined virtual data instances that, through direct manipulation, search an ensemble for similar instances. Users can create multiple of these direct manipulation queries to visually annotate the spaces with sets of highlighted ensemble data instances. User-defined goals are therefore translated into custom illustrations that are projected onto the dimensionally reduced spaces. Combined forward and inverse searches of the parameter space follow naturally allowing for continuous parameter space prediction and visual query comparison in the context of an ensemble. The potential for this visualization technique is confirmed via expert user feedback for a shock physics application and synthetic model analysis.",
                "AuthorNamesDeduped": "Daniel Orban;Daniel F. Keefe;Ayan Biswas;James P. Ahrens;David H. Rogers 0001",
                "AuthorNames": "Daniel Orban;Daniel F. Keefe;Ayan Biswas;James Ahrens;David Rogers",
                "AuthorAffiliation": "University of Minnesota, Minneapolis, MN, US;University of Minnesota, Minneapolis, MN, US;Los Alamos National Labs;Los Alamos National Labs;Los Alamos National Labs",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2016.2598869;10.1109/vast.2012.6400486;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/vast.2012.6400489;10.1109/tvcg.2015.2467436;10.1109/tvcg.2012.260;10.1109/vast.2011.6102449;10.1109/tvcg.2015.2467204;10.1109/tvcg.2013.141;10.1109/tvcg.2017.2745178;10.1109/tvcg.2014.2346455;10.1109/tvcg.2016.2598589;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598839;10.1109/tvcg.2014.2346321;10.1109/tvcg.2011.248;10.1109/tvcg.2016.2598830;10.1109/tvcg.2010.223",
                "AuthorKeywords": "Visual Parameter Space Analysis,Ensemble Visualization,Semantic Interaction,Direct Manipulation,Shock Physics",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 708,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 760,
                "i": [
                    760
                ]
            }
        },
        {
            "name": "David H. Rogers 0001",
            "value": 73,
            "numPapers": 38,
            "cluster": "6",
            "visible": 1,
            "index": 1329,
            "x": -244.8379755075375,
            "y": -270.1932007830146,
            "vy": 0,
            "vx": 0,
            "r": 1.0840529648819806,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Drag and Track: A Direct Manipulation Interface for Contextualizing Data Instances within a Continuous Parameter Space",
                "DOI": "10.1109/tvcg.2018.2865051",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865051",
                "FirstPage": 256,
                "LastPage": 266,
                "PaperType": "J",
                "Abstract": "We present a direct manipulation technique that allows material scientists to interactively highlight relevant parameterized simulation instances located in dimensionally reduced spaces, enabling a user-defined understanding of a continuous parameter space. Our goals are two-fold: first, to build a user-directed intuition of dimensionally reduced data, and second, to provide a mechanism for creatively exploring parameter relationships in parameterized simulation sets, called ensembles. We start by visualizing ensemble data instances in dimensionally reduced scatter plots. To understand these abstract views, we employ user-defined virtual data instances that, through direct manipulation, search an ensemble for similar instances. Users can create multiple of these direct manipulation queries to visually annotate the spaces with sets of highlighted ensemble data instances. User-defined goals are therefore translated into custom illustrations that are projected onto the dimensionally reduced spaces. Combined forward and inverse searches of the parameter space follow naturally allowing for continuous parameter space prediction and visual query comparison in the context of an ensemble. The potential for this visualization technique is confirmed via expert user feedback for a shock physics application and synthetic model analysis.",
                "AuthorNamesDeduped": "Daniel Orban;Daniel F. Keefe;Ayan Biswas;James P. Ahrens;David H. Rogers 0001",
                "AuthorNames": "Daniel Orban;Daniel F. Keefe;Ayan Biswas;James Ahrens;David Rogers",
                "AuthorAffiliation": "University of Minnesota, Minneapolis, MN, US;University of Minnesota, Minneapolis, MN, US;Los Alamos National Labs;Los Alamos National Labs;Los Alamos National Labs",
                "InternalReferences": "0.1109/tvcg.2013.133;10.1109/tvcg.2016.2598869;10.1109/vast.2012.6400486;10.1109/tvcg.2010.190;10.1109/tvcg.2013.147;10.1109/vast.2012.6400489;10.1109/tvcg.2015.2467436;10.1109/tvcg.2012.260;10.1109/vast.2011.6102449;10.1109/tvcg.2015.2467204;10.1109/tvcg.2013.141;10.1109/tvcg.2017.2745178;10.1109/tvcg.2014.2346455;10.1109/tvcg.2016.2598589;10.1109/tvcg.2016.2598495;10.1109/tvcg.2016.2598839;10.1109/tvcg.2014.2346321;10.1109/tvcg.2011.248;10.1109/tvcg.2016.2598830;10.1109/tvcg.2010.223",
                "AuthorKeywords": "Visual Parameter Space Analysis,Ensemble Visualization,Semantic Interaction,Direct Manipulation,Shock Physics",
                "AminerCitationCount": 23,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 708,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 760,
                "i": [
                    760
                ]
            }
        },
        {
            "name": "Po-Ming Law",
            "value": 63,
            "numPapers": 41,
            "cluster": "1",
            "visible": 1,
            "index": 1330,
            "x": 363.1852981867441,
            "y": 33.85910779990727,
            "vy": 0,
            "vx": 0,
            "r": 1.072538860103627,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Duet: Helping Data Analysis Novices Conduct Pairwise Comparisons by Minimal Specification",
                "DOI": "10.1109/tvcg.2018.2864526",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864526",
                "FirstPage": 427,
                "LastPage": 437,
                "PaperType": "J",
                "Abstract": "Data analysis novices often encounter barriers in executing low-level operations for pairwise comparisons. They may also run into barriers in interpreting the artifacts (e.g., visualizations) created as a result of the operations. We developed Duet, a visual analysis system designed to help data analysis novices conduct pairwise comparisons by addressing execution and interpretation barriers. To reduce the barriers in executing low-level operations during pairwise comparison, Duet employs minimal specification: when one object group (i.e. a group of records in a data table) is specified, Duet recommends object groups that are similar to or different from the specified one; when two object groups are specified, Duet recommends similar and different attributes between them. To lower the barriers in interpreting its recommendations, Duet explains the recommended groups and attributes using both visualizations and textual descriptions. We conducted a qualitative evaluation with eight participants to understand the effectiveness of Duet. The results suggest that minimal specification is easy to use and Duet's explanations are helpful for interpreting the recommendations despite some usability issues.",
                "AuthorNamesDeduped": "Po-Ming Law;Rahul C. Basole;Yanhong Wu",
                "AuthorNames": "Po-Ming Law;Rahul C. Basole;Yanhong Wu",
                "AuthorAffiliation": "Georgia Institute of Technology;Georgia Institute of Technology;Visa Research",
                "InternalReferences": "0.1109/tvcg.2011.188;10.1109/tvcg.2016.2598468;10.1109/vast.2011.6102435;10.1109/tvcg.2017.2744199;10.1109/tvcg.2010.164;10.1109/tvcg.2017.2744684;10.1109/tvcg.2008.109;10.1109/tvcg.2015.2467195;10.1109/tvcg.2017.2745219;10.1109/tvcg.2015.2467191",
                "AuthorKeywords": "Pairwise comparison,novices,data analysis,automatic insight generation",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 614,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 756,
                "i": [
                    756
                ]
            }
        },
        {
            "name": "Jose Manuel Cordero Garcia",
            "value": 28,
            "numPapers": 11,
            "cluster": "3",
            "visible": 1,
            "index": 1331,
            "x": -290.78224854360764,
            "y": 220.4442875919527,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Analysis of Flight Variability: a Systematic Approach",
                "DOI": "10.1109/tvcg.2018.2864811",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864811",
                "FirstPage": 54,
                "LastPage": 64,
                "PaperType": "J",
                "Abstract": "In movement data analysis, there exists a problem of comparing multiple trajectories of moving objects to common or distinct reference trajectories. We introduce a general conceptual framework for comparative analysis of trajectories and an analytical procedure, which consists of (1) finding corresponding points in pairs of trajectories, (2) computation of pairwise difference measures, and (3) interactive visual analysis of the distributions of the differences with respect to space, time, set of moving objects, trajectory structures, and spatio-temporal context. We propose a combination of visualisation, interaction, and data transformation techniques supporting the analysis and demonstrate the use of our approach for solving a challenging problem from the aviation domain.",
                "AuthorNamesDeduped": "Natalia V. Andrienko;Gennady L. Andrienko;Jose Manuel Cordero Garcia;David Scarlatti",
                "AuthorNames": "Natalia Andrienko;Gennady Andrienko;Jose Manuel Cordero Garcia;David Scarlatti",
                "AuthorAffiliation": "Fraunhofer IAIS, City, University of London;Fraunhofer IAIS, City, University of London;CRIDA (Reference Center for Research, Development and Innovation in ATM);Boeing Research & Development Europe",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/vast.2010.5653580;10.1109/tvcg.2017.2744322;10.1109/tvcg.2013.193;10.1109/tvcg.2015.2467851;10.1109/tvcg.2011.233;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468078",
                "AuthorKeywords": "Visual analytics,movement data,flight trajectories",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 881,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 763,
                "i": [
                    763
                ]
            }
        },
        {
            "name": "David Scarlatti",
            "value": 17,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 1332,
            "x": 65.53039756224938,
            "y": -359.1041172074382,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Analysis of Flight Variability: a Systematic Approach",
                "DOI": "10.1109/tvcg.2018.2864811",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864811",
                "FirstPage": 54,
                "LastPage": 64,
                "PaperType": "J",
                "Abstract": "In movement data analysis, there exists a problem of comparing multiple trajectories of moving objects to common or distinct reference trajectories. We introduce a general conceptual framework for comparative analysis of trajectories and an analytical procedure, which consists of (1) finding corresponding points in pairs of trajectories, (2) computation of pairwise difference measures, and (3) interactive visual analysis of the distributions of the differences with respect to space, time, set of moving objects, trajectory structures, and spatio-temporal context. We propose a combination of visualisation, interaction, and data transformation techniques supporting the analysis and demonstrate the use of our approach for solving a challenging problem from the aviation domain.",
                "AuthorNamesDeduped": "Natalia V. Andrienko;Gennady L. Andrienko;Jose Manuel Cordero Garcia;David Scarlatti",
                "AuthorNames": "Natalia Andrienko;Gennady Andrienko;Jose Manuel Cordero Garcia;David Scarlatti",
                "AuthorAffiliation": "Fraunhofer IAIS, City, University of London;Fraunhofer IAIS, City, University of London;CRIDA (Reference Center for Research, Development and Innovation in ATM);Boeing Research & Development Europe",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/vast.2010.5653580;10.1109/tvcg.2017.2744322;10.1109/tvcg.2013.193;10.1109/tvcg.2015.2467851;10.1109/tvcg.2011.233;10.1109/tvcg.2012.265;10.1109/tvcg.2015.2468078",
                "AuthorKeywords": "Visual analytics,movement data,flight trajectories",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 881,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 763,
                "i": [
                    763
                ]
            }
        },
        {
            "name": "Akhilesh Camisetty",
            "value": 0,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 1333,
            "x": 194.32414618461334,
            "y": 309.1732947872778,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Enhancing Web-based Analytics Applications through Provenance",
                "DOI": "10.1109/tvcg.2018.2865039",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865039",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Visual analytics systems continue to integrate new technologies and leverage modern environments for exploration and collaboration, making tools and techniques available to a wide audience through web browsers. Many of these systems have been developed with rich interactions, offering users the opportunity to examine details and explore hypotheses that have not been directly encoded by a designer. Understanding is enhanced when users can replay and revisit the steps in the sensemaking process, and in collaborative settings, it is especially important to be able to review not only the current state but also what decisions were made along the way. Unfortunately, many web-based systems lack the ability to capture such reasoning, and the path to a result is transient, forgotten when a user moves to a new view. This paper explores the requirements to augment existing client-side web applications with support for capturing, reviewing, sharing, and reusing steps in the reasoning process. Furthermore, it considers situations where decisions are made with streaming data, and the insights gained from revisiting those choices when more data is available. It presents a proof of concept, the Shareable Interactive Manipulation Provenance framework (SIMProv.js), that addresses these requirements in a modern, client-side JavaScript library, and describes how it can be integrated with existing frameworks.",
                "AuthorNamesDeduped": "Akhilesh Camisetty;Chaitanya Chandurkar;Maoyuan Sun;David Koop",
                "AuthorNames": "Akhilesh Camisetty;Chaitanya Chandurkar;Maoyuan Sun;David Koop",
                "AuthorAffiliation": "UMass Dartmouth;UMass Dartmouth;UMass Dartmouth;UMass Dartmouth",
                "InternalReferences": "0.1109/visual.1993.398857;10.1109/vast.2011.6102447;10.1109/vast.2010.5652932;10.1109/tvcg.2016.2598471;10.1109/tvcg.2016.2599058;10.1109/vast.2008.4677365;10.1109/tvcg.2013.197;10.1109/vast.2007.4389011;10.1109/visual.1999.809871;10.1109/tvcg.2014.2346573;10.1109/tvcg.2015.2467551;10.1109/tvcg.2015.2467191;10.1109/tvcg.2017.2745279;10.1109/tvcg.2008.137;10.1109/tvcg.2007.70589;10.1109/tvcg.2014.2346574;10.1109/tvcg.2015.2467611",
                "AuthorKeywords": "Collaboration,provenance,streaming data,history,web",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1032,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 764,
                "i": [
                    764
                ]
            }
        },
        {
            "name": "Chaitanya Chandurkar",
            "value": 0,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 1334,
            "x": -352.26414583295593,
            "y": -96.7469460012974,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "Enhancing Web-based Analytics Applications through Provenance",
                "DOI": "10.1109/tvcg.2018.2865039",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2865039",
                "FirstPage": 131,
                "LastPage": 141,
                "PaperType": "J",
                "Abstract": "Visual analytics systems continue to integrate new technologies and leverage modern environments for exploration and collaboration, making tools and techniques available to a wide audience through web browsers. Many of these systems have been developed with rich interactions, offering users the opportunity to examine details and explore hypotheses that have not been directly encoded by a designer. Understanding is enhanced when users can replay and revisit the steps in the sensemaking process, and in collaborative settings, it is especially important to be able to review not only the current state but also what decisions were made along the way. Unfortunately, many web-based systems lack the ability to capture such reasoning, and the path to a result is transient, forgotten when a user moves to a new view. This paper explores the requirements to augment existing client-side web applications with support for capturing, reviewing, sharing, and reusing steps in the reasoning process. Furthermore, it considers situations where decisions are made with streaming data, and the insights gained from revisiting those choices when more data is available. It presents a proof of concept, the Shareable Interactive Manipulation Provenance framework (SIMProv.js), that addresses these requirements in a modern, client-side JavaScript library, and describes how it can be integrated with existing frameworks.",
                "AuthorNamesDeduped": "Akhilesh Camisetty;Chaitanya Chandurkar;Maoyuan Sun;David Koop",
                "AuthorNames": "Akhilesh Camisetty;Chaitanya Chandurkar;Maoyuan Sun;David Koop",
                "AuthorAffiliation": "UMass Dartmouth;UMass Dartmouth;UMass Dartmouth;UMass Dartmouth",
                "InternalReferences": "0.1109/visual.1993.398857;10.1109/vast.2011.6102447;10.1109/vast.2010.5652932;10.1109/tvcg.2016.2598471;10.1109/tvcg.2016.2599058;10.1109/vast.2008.4677365;10.1109/tvcg.2013.197;10.1109/vast.2007.4389011;10.1109/visual.1999.809871;10.1109/tvcg.2014.2346573;10.1109/tvcg.2015.2467551;10.1109/tvcg.2015.2467191;10.1109/tvcg.2017.2745279;10.1109/tvcg.2008.137;10.1109/tvcg.2007.70589;10.1109/tvcg.2014.2346574;10.1109/tvcg.2015.2467611",
                "AuthorKeywords": "Collaboration,provenance,streaming data,history,web",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1032,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 764,
                "i": [
                    764
                ]
            }
        },
        {
            "name": "David Koop",
            "value": 85,
            "numPapers": 28,
            "cluster": "5",
            "visible": 1,
            "index": 1335,
            "x": 325.22202422339467,
            "y": -166.67523799304348,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Baseball4D: A Tool for Baseball Game Reconstruction & Visualization",
                "DOI": "10.1109/vast.2014.7042478",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042478",
                "FirstPage": 23,
                "LastPage": 32,
                "PaperType": "C",
                "Abstract": "While many sports use statistics and video to analyze and improve game play, baseball has led the charge throughout its history. With the advent of new technologies that allow all players and the ball to be tracked across the entire field, it is now possible to bring this understanding to another level. From discrete positions across time, we present techniques to reconstruct entire baseball games and visually explore each play. This provides opportunities to not only derive new metrics for the game, but also allow us to investigate existing measures with targeted visualizations. In addition, our techniques allow users to filter on demand so specific situations can be analyzed both in general and according to those situations. We show that gameplay can be accurately reconstructed from the raw position data and discuss how visualization and statistical methods can combine to better inform baseball analyses.",
                "AuthorNamesDeduped": "Carlos A. Dietrich;David Koop;Huy T. Vo;Cláudio T. Silva",
                "AuthorNames": "Carlos Dietrich;David Koop;Huy T. Vo;Cláudio T. Silva",
                "AuthorAffiliation": "Independent consultant;NYU;NYU;NYU",
                "InternalReferences": "0.1109/tvcg.2012.263;10.1109/tvcg.2013.192;10.1109/tvcg.2012.225;10.1109/visual.2001.964496",
                "AuthorKeywords": "sports visualization, sports analytics, baseball, game reconstruction, baseball metrics, event data",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 955,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1268,
                "i": [
                    1268
                ]
            }
        },
        {
            "name": "Mark A. Whiting",
            "value": 80,
            "numPapers": 7,
            "cluster": "4",
            "visible": 1,
            "index": 1336,
            "x": -127.26871516754886,
            "y": 342.71369120535775,
            "vy": 0,
            "vx": 0,
            "r": 1.092112838226828,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "IN-SPIRE InfoVis 2004 Contest Entry",
                "DOI": "10.1109/infvis.2004.37",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.37",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": "This is the first part (summary) of a three-part contest entry submitted to IEEE InfoVis 2004. The contest topic is visualizing InfoVis symposium papers from 1995 to 2002 and their references. The paper introduces the visualization tool IN-SPIRE, the visualization process and results, and presents lessons learned.",
                "AuthorNamesDeduped": "Pak Chung Wong;Elizabeth G. Hetzler;Christian Posse;Mark A. Whiting;Susan Havre;Nick Cramer;Anuj R. Shah;Mudita Singhal;Alan Turner;James J. Thomas",
                "AuthorNames": "Pak Chung Wong;B. Hetzler;C. Posse;M. Whiting;S. Havre;N. Cramer;Anuj Shah;M. Singhal;A. Turner;J. Thomas",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;;Pacific Northwest National Laboratory, USA",
                "InternalReferences": "10.1109/infvis.1995.528686",
                "AuthorKeywords": null,
                "AminerCitationCount": 72,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 5,
                "DownloadsXplore": 234,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2472,
                "i": [
                    2472
                ]
            }
        },
        {
            "name": "Georgia Albuquerque",
            "value": 181,
            "numPapers": 19,
            "cluster": "2",
            "visible": 1,
            "index": 1337,
            "x": -137.7072713818902,
            "y": -338.80187043249106,
            "vy": 0,
            "vx": 0,
            "r": 1.208405296488198,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Perception-based visual quality measures",
                "DOI": "10.1109/vast.2011.6102437",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102437",
                "FirstPage": 13,
                "LastPage": 20,
                "PaperType": "C",
                "Abstract": "In recent years diverse quality measures to support the exploration of high-dimensional data sets have been proposed. Such measures can be very useful to rank and select information-bearing projections of very high dimensional data, when the visual exploration of all possible projections becomes unfeasible. But even though a ranking of the low dimensional projections may support the user in the visual exploration task, different measures deliver different distances between the views that do not necessarily match the expectations of human perception. As an alternative solution, we propose a perception-based approach that, similar to the existing measures, can be used to select information bearing projections of the data. Specifically, we construct a perceptual embedding for the different projections based on the data from a psychophysics study and multi-dimensional scaling. This embedding together with a ranking function is then used to estimate the value of the projections for a specific user task in a perceptual sense.",
                "AuthorNamesDeduped": "Georgia Albuquerque;Martin Eisemann;Marcus A. Magnor",
                "AuthorNames": "Georgia Albuquerque;Martin Eisemann;Marcus Magnor",
                "AuthorAffiliation": "Technical University of Braunschweig, Germany;Technical University of Braunschweig, Germany;Technical University of Braunschweig, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/vast.2010.5652433;10.1109/vast.2006.261423;10.1109/vast.2009.5332628;10.1109/tvcg.2010.184;10.1109/tvcg.2009.153",
                "AuthorKeywords": null,
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 634,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1590,
                "i": [
                    1590
                ]
            }
        },
        {
            "name": "Martin Eisemann",
            "value": 178,
            "numPapers": 32,
            "cluster": "2",
            "visible": 1,
            "index": 1338,
            "x": 330.5219217111753,
            "y": 156.86063645271773,
            "vy": 0,
            "vx": 0,
            "r": 1.204951065054692,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Perception-based visual quality measures",
                "DOI": "10.1109/vast.2011.6102437",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102437",
                "FirstPage": 13,
                "LastPage": 20,
                "PaperType": "C",
                "Abstract": "In recent years diverse quality measures to support the exploration of high-dimensional data sets have been proposed. Such measures can be very useful to rank and select information-bearing projections of very high dimensional data, when the visual exploration of all possible projections becomes unfeasible. But even though a ranking of the low dimensional projections may support the user in the visual exploration task, different measures deliver different distances between the views that do not necessarily match the expectations of human perception. As an alternative solution, we propose a perception-based approach that, similar to the existing measures, can be used to select information bearing projections of the data. Specifically, we construct a perceptual embedding for the different projections based on the data from a psychophysics study and multi-dimensional scaling. This embedding together with a ranking function is then used to estimate the value of the projections for a specific user task in a perceptual sense.",
                "AuthorNamesDeduped": "Georgia Albuquerque;Martin Eisemann;Marcus A. Magnor",
                "AuthorNames": "Georgia Albuquerque;Martin Eisemann;Marcus Magnor",
                "AuthorAffiliation": "Technical University of Braunschweig, Germany;Technical University of Braunschweig, Germany;Technical University of Braunschweig, Germany",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/vast.2010.5652433;10.1109/vast.2006.261423;10.1109/vast.2009.5332628;10.1109/tvcg.2010.184;10.1109/tvcg.2009.153",
                "AuthorKeywords": null,
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 634,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1590,
                "i": [
                    1590
                ]
            }
        },
        {
            "name": "Mihael Ankerst",
            "value": 187,
            "numPapers": 7,
            "cluster": "3",
            "visible": 1,
            "index": 1339,
            "x": -349.8050134267832,
            "y": 107.64038545772699,
            "vy": 0,
            "vx": 0,
            "r": 1.2153137593552101,
            "node": {
                "Conference": "InfoVis",
                "Year": 1998,
                "Title": "Similarity clustering of dimensions for an enhanced visualization of multidimensional data",
                "DOI": "10.1109/infvis.1998.729559",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1998.729559",
                "FirstPage": 52,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "The order and arrangement of dimensions (variates) is crucial for the effectiveness of a large number of visualization techniques such as parallel coordinates, scatterplots, recursive pattern, and many others. We describe a systematic approach to arrange the dimensions according to their similarity. The basic idea is to rearrange the data dimensions such that dimensions showing a similar behavior are positioned next to each other. For the similarity clustering of dimensions, we need to define similarity measures which determine the partial or global similarity of dimensions. We then consider the problem of finding an optimal one- or two-dimensional arrangement of the dimensions based on their similarity. Theoretical considerations show that both, the one- and the two-dimensional arrangement problem are surprisingly hard problems, i.e. they are NP complete. Our solution of the problem is therefore based on heuristic algorithms. An empirical evaluation using a number of different visualization techniques shows the high impact of our similarity clustering of dimensions on the visualization results.",
                "AuthorNamesDeduped": "Mihael Ankerst;Stefan Berchtold;Daniel A. Keim",
                "AuthorNames": "M. Ankerst;S. Berchtold;D.A. Keim",
                "AuthorAffiliation": "University of Munich (LMU), Munich, Germany;AT and T Bell Laboratories, Inc., Florham Park, NJ, USA;Martin Luther University of Halle-Wittenberg, Halle, Germany",
                "InternalReferences": "0.1109/visual.1990.146402;10.1109/visual.1994.346302;10.1109/visual.1995.485140",
                "AuthorKeywords": null,
                "AminerCitationCount": 351,
                "CitationCountCrossRef": 128,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 975,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3122,
                "i": [
                    3122
                ]
            }
        },
        {
            "name": "Stefan Berchtold",
            "value": 130,
            "numPapers": 2,
            "cluster": "3",
            "visible": 1,
            "index": 1340,
            "x": 185.29442146452945,
            "y": -315.77836748917,
            "vy": 0,
            "vx": 0,
            "r": 1.1496833621185953,
            "node": {
                "Conference": "InfoVis",
                "Year": 1998,
                "Title": "Similarity clustering of dimensions for an enhanced visualization of multidimensional data",
                "DOI": "10.1109/infvis.1998.729559",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1998.729559",
                "FirstPage": 52,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "The order and arrangement of dimensions (variates) is crucial for the effectiveness of a large number of visualization techniques such as parallel coordinates, scatterplots, recursive pattern, and many others. We describe a systematic approach to arrange the dimensions according to their similarity. The basic idea is to rearrange the data dimensions such that dimensions showing a similar behavior are positioned next to each other. For the similarity clustering of dimensions, we need to define similarity measures which determine the partial or global similarity of dimensions. We then consider the problem of finding an optimal one- or two-dimensional arrangement of the dimensions based on their similarity. Theoretical considerations show that both, the one- and the two-dimensional arrangement problem are surprisingly hard problems, i.e. they are NP complete. Our solution of the problem is therefore based on heuristic algorithms. An empirical evaluation using a number of different visualization techniques shows the high impact of our similarity clustering of dimensions on the visualization results.",
                "AuthorNamesDeduped": "Mihael Ankerst;Stefan Berchtold;Daniel A. Keim",
                "AuthorNames": "M. Ankerst;S. Berchtold;D.A. Keim",
                "AuthorAffiliation": "University of Munich (LMU), Munich, Germany;AT and T Bell Laboratories, Inc., Florham Park, NJ, USA;Martin Luther University of Halle-Wittenberg, Halle, Germany",
                "InternalReferences": "0.1109/visual.1990.146402;10.1109/visual.1994.346302;10.1109/visual.1995.485140",
                "AuthorKeywords": null,
                "AminerCitationCount": 351,
                "CitationCountCrossRef": 128,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 975,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3122,
                "i": [
                    3122
                ]
            }
        },
        {
            "name": "Michael Blumenschein",
            "value": 9,
            "numPapers": 19,
            "cluster": "4",
            "visible": 1,
            "index": 1341,
            "x": 76.70347673494632,
            "y": 358.14323483317617,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach",
                "DOI": "10.1109/vast.2018.8802486",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802486",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and Parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.",
                "AuthorNamesDeduped": "Michael Blumenschein;Michael Behrisch 0001;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorNames": "Michael Blumenschein;Michael Behrisch;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Harvard University, USA;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/vast.2010.5652433;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2743978;10.1109/tvcg.2011.188;10.1109/vast.2009.5332611;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.138;10.1109/infvis.2004.15;10.1109/tvcg.2014.2346279;10.1109/infvis.2003.1249016;10.1109/vast.2009.5332628;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744098;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,visual exploration,pattern-driven analysis,tabular visualization,subspace,aggregation",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 658,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 767,
                "i": [
                    767
                ]
            }
        },
        {
            "name": "Stefanie Schmid",
            "value": 9,
            "numPapers": 19,
            "cluster": "4",
            "visible": 1,
            "index": 1342,
            "x": -298.5922639338354,
            "y": -212.3503235666637,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach",
                "DOI": "10.1109/vast.2018.8802486",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802486",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and Parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.",
                "AuthorNamesDeduped": "Michael Blumenschein;Michael Behrisch 0001;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorNames": "Michael Blumenschein;Michael Behrisch;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Harvard University, USA;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/vast.2010.5652433;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2743978;10.1109/tvcg.2011.188;10.1109/vast.2009.5332611;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.138;10.1109/infvis.2004.15;10.1109/tvcg.2014.2346279;10.1109/infvis.2003.1249016;10.1109/vast.2009.5332628;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744098;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,visual exploration,pattern-driven analysis,tabular visualization,subspace,aggregation",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 658,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 767,
                "i": [
                    767
                ]
            }
        },
        {
            "name": "Simon Butscher",
            "value": 9,
            "numPapers": 19,
            "cluster": "4",
            "visible": 1,
            "index": 1343,
            "x": 363.7486239231189,
            "y": -45.132456104641996,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach",
                "DOI": "10.1109/vast.2018.8802486",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802486",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and Parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.",
                "AuthorNamesDeduped": "Michael Blumenschein;Michael Behrisch 0001;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorNames": "Michael Blumenschein;Michael Behrisch;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Harvard University, USA;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/vast.2010.5652433;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2743978;10.1109/tvcg.2011.188;10.1109/vast.2009.5332611;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.138;10.1109/infvis.2004.15;10.1109/tvcg.2014.2346279;10.1109/infvis.2003.1249016;10.1109/vast.2009.5332628;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744098;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,visual exploration,pattern-driven analysis,tabular visualization,subspace,aggregation",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 658,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 767,
                "i": [
                    767
                ]
            }
        },
        {
            "name": "Deborah R. Wahl",
            "value": 9,
            "numPapers": 19,
            "cluster": "4",
            "visible": 1,
            "index": 1344,
            "x": -237.81883648823376,
            "y": 279.0917430010833,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach",
                "DOI": "10.1109/vast.2018.8802486",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802486",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and Parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.",
                "AuthorNamesDeduped": "Michael Blumenschein;Michael Behrisch 0001;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorNames": "Michael Blumenschein;Michael Behrisch;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Harvard University, USA;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/vast.2010.5652433;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2743978;10.1109/tvcg.2011.188;10.1109/vast.2009.5332611;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.138;10.1109/infvis.2004.15;10.1109/tvcg.2014.2346279;10.1109/infvis.2003.1249016;10.1109/vast.2009.5332628;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744098;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,visual exploration,pattern-driven analysis,tabular visualization,subspace,aggregation",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 658,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 767,
                "i": [
                    767
                ]
            }
        },
        {
            "name": "Karoline Villinger",
            "value": 9,
            "numPapers": 19,
            "cluster": "4",
            "visible": 1,
            "index": 1345,
            "x": -13.168449383310199,
            "y": -366.5741288482306,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach",
                "DOI": "10.1109/vast.2018.8802486",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802486",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and Parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.",
                "AuthorNamesDeduped": "Michael Blumenschein;Michael Behrisch 0001;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorNames": "Michael Blumenschein;Michael Behrisch;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Harvard University, USA;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/vast.2010.5652433;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2743978;10.1109/tvcg.2011.188;10.1109/vast.2009.5332611;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.138;10.1109/infvis.2004.15;10.1109/tvcg.2014.2346279;10.1109/infvis.2003.1249016;10.1109/vast.2009.5332628;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744098;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,visual exploration,pattern-driven analysis,tabular visualization,subspace,aggregation",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 658,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 767,
                "i": [
                    767
                ]
            }
        },
        {
            "name": "Britta Renner",
            "value": 9,
            "numPapers": 19,
            "cluster": "4",
            "visible": 1,
            "index": 1346,
            "x": 257.4228782982933,
            "y": 261.5023168704669,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2018,
                "Title": "SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach",
                "DOI": "10.1109/vast.2018.8802486",
                "Link": "http://dx.doi.org/10.1109/VAST.2018.8802486",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst's trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and Parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.",
                "AuthorNamesDeduped": "Michael Blumenschein;Michael Behrisch 0001;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorNames": "Michael Blumenschein;Michael Behrisch;Stefanie Schmid;Simon Butscher;Deborah R. Wahl;Karoline Villinger;Britta Renner;Harald Reiterer;Daniel A. Keim",
                "AuthorAffiliation": "University of Konstanz, Germany;Harvard University, USA;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.2004.46;10.1109/vast.2010.5652433;10.1109/infvis.1998.729559;10.1109/tvcg.2017.2743978;10.1109/tvcg.2011.188;10.1109/vast.2009.5332611;10.1109/tvcg.2010.184;10.1109/tvcg.2014.2346260;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467553;10.1109/tvcg.2014.2346248;10.1109/tvcg.2010.138;10.1109/infvis.2004.15;10.1109/tvcg.2014.2346279;10.1109/infvis.2003.1249016;10.1109/vast.2009.5332628;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745078;10.1109/tvcg.2017.2744098;10.1109/tvcg.2013.150",
                "AuthorKeywords": "High-dimensional data,visual exploration,pattern-driven analysis,tabular visualization,subspace,aggregation",
                "AminerCitationCount": 19,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 73,
                "DownloadsXplore": 658,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 767,
                "i": [
                    767
                ]
            }
        },
        {
            "name": "Harald Reiterer",
            "value": 67,
            "numPapers": 23,
            "cluster": "4",
            "visible": 1,
            "index": 1347,
            "x": -366.59394862096815,
            "y": -18.9440448291005,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "User Interaction with Scatterplots on Small Screens - A Comparative Evaluation of Geometric-Semantic Zoom and Fisheye Distortion",
                "DOI": "10.1109/tvcg.2006.187",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.187",
                "FirstPage": 829,
                "LastPage": 836,
                "PaperType": "J",
                "Abstract": "Existing information-visualization techniques that target small screens are usually limited to exploring a few hundred items. In this article we present a scatterplot tool for Personal Digital Assistants that allows the handling of many thousands of items. The application's scalability is achieved by incorporating two alternative interaction techniques: a geometric-semantic zoom that provides smooth transition between overview and detail, and a fisheye distortion that displays the focus and context regions of the scatterplot in a single view. A user study with 24 participants was conducted to compare the usability and efficiency of both techniques when searching a book database containing 7500 items. The study was run on a pen-driven Wacom board simulating a PDA interface. While the results showed no significant difference in task-completion times, a clear majority of 20 users preferred the fisheye view over the zoom interaction. In addition, other dependent variables such as user satisfaction and subjective rating of orientation and navigation support revealed a preference for the fisheye distortion. These findings partly contradict related research and indicate that, when using a small screen, users place higher value on the ability to preserve navigational context than they do on the ease of use of a simplistic, metaphor-based interaction style.",
                "AuthorNamesDeduped": "Thorsten Büring;Jens Gerken;Harald Reiterer",
                "AuthorNames": "Thorsten Buering;Jens Gerken;Harald Reiterer",
                "AuthorAffiliation": "University of Konstanz;University of Konstanz, Inc., Germany;University of Konstanz, Germany",
                "InternalReferences": "0.1109/infvis.1999.801854;10.1109/infvis.2002.1173156",
                "AuthorKeywords": "Small screen, PDA, scatterplot, zoom, fisheye, focus+context",
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 920,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2232,
                "i": [
                    2232
                ]
            }
        },
        {
            "name": "Cydney B. Nielsen",
            "value": 63,
            "numPapers": 8,
            "cluster": "5",
            "visible": 1,
            "index": 1348,
            "x": 283.2165181829053,
            "y": -233.74859107244274,
            "vy": 0,
            "vx": 0,
            "r": 1.072538860103627,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Variant View: Visualizing Sequence Variants in their Gene Context",
                "DOI": "10.1109/tvcg.2013.214",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.214",
                "FirstPage": 2546,
                "LastPage": 2555,
                "PaperType": "J",
                "Abstract": "Scientists use DNA sequence differences between an individual's genome and a standard reference genome to study the genetic basis of disease. Such differences are called sequence variants, and determining their impact in the cell is difficult because it requires reasoning about both the type and location of the variant across several levels of biological context. In this design study, we worked with four analysts to design a visualization tool supporting variant impact assessment for three different tasks. We contribute data and task abstractions for the problem of variant impact assessment, and the carefully justified design and implementation of the Variant View tool. Variant View features an information-dense visual encoding that provides maximal information at the overview level, in contrast to the extensive navigation required by currently-prevalent genome browsers. We provide initial evidence that the tool simplified and accelerated workflows for these three tasks through three case studies. Finally, we reflect on the lessons learned in creating and refining data and task abstractions that allow for concise overviews of sprawling information spaces that can reduce or remove the need for the memory-intensive use of navigation.",
                "AuthorNamesDeduped": "Joel A. Ferstay;Cydney B. Nielsen;Tamara Munzner",
                "AuthorNames": "Joel A. Ferstay;Cydney B. Nielsen;Tamara Munzner",
                "AuthorAffiliation": "University of British Columbia, Canada;University of British Columbia, Canada;University of British Columbia, Canada",
                "InternalReferences": "0.1109/tvcg.2009.111;10.1109/tvcg.2008.109;10.1109/tvcg.2012.213;10.1109/tvcg.2011.185;10.1109/infvis.2003.1249023;10.1109/tvcg.2009.116;10.1109/tvcg.2009.167;10.1109/tvcg.2011.209;10.1109/tvcg.2010.137",
                "AuthorKeywords": "Information visualization, design study, bioinformatics, genetic variants",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 814,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1328,
                "i": [
                    1328
                ]
            }
        },
        {
            "name": "Shaun D. Jackman",
            "value": 35,
            "numPapers": 0,
            "cluster": "5",
            "visible": 1,
            "index": 1349,
            "x": -50.95902590420782,
            "y": 363.8037625958454,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "ABySS-Explorer: Visualizing Genome Sequence Assemblies",
                "DOI": "10.1109/tvcg.2009.116",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.116",
                "FirstPage": 881,
                "LastPage": 888,
                "PaperType": "J",
                "Abstract": "One bottleneck in large-scale genome sequencing projects is reconstructing the full genome sequence from the short subsequences produced by current technologies. The final stages of the genome assembly process inevitably require manual inspection of data inconsistencies and could be greatly aided by visualization. This paper presents our design decisions in translating key data features identified through discussions with analysts into a concise visual encoding. Current visualization tools in this domain focus on local sequence errors making high-level inspection of the assembly difficult if not impossible. We present a novel interactive graph display, ABySS-Explorer, that emphasizes the global assembly structure while also integrating salient data features such as sequence length. Our tool replaces manual and in some cases pen-and-paper based analysis tasks, and we discuss how user feedback was incorporated into iterative design refinements. Finally, we touch on applications of this representation not initially considered in our design phase, suggesting the generality of this encoding for DNA sequence data.",
                "AuthorNamesDeduped": "Cydney B. Nielsen;Shaun D. Jackman;Inanç Birol;Steven J. M. Jones",
                "AuthorNames": "Cydney B. Nielsen;Shaun D. Jackman;Inanç Birol;Steven J.M. Jones",
                "AuthorAffiliation": "Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada",
                "InternalReferences": "0.1109/tvcg.2006.147",
                "AuthorKeywords": "Bioinformatics visualization, design study, DNA sequence, genome assembly",
                "AminerCitationCount": 95,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 1775,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1831,
                "i": [
                    1831
                ]
            }
        },
        {
            "name": "Inanç Birol",
            "value": 35,
            "numPapers": 0,
            "cluster": "5",
            "visible": 1,
            "index": 1350,
            "x": -208.2474252886,
            "y": -302.79202410345783,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "ABySS-Explorer: Visualizing Genome Sequence Assemblies",
                "DOI": "10.1109/tvcg.2009.116",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.116",
                "FirstPage": 881,
                "LastPage": 888,
                "PaperType": "J",
                "Abstract": "One bottleneck in large-scale genome sequencing projects is reconstructing the full genome sequence from the short subsequences produced by current technologies. The final stages of the genome assembly process inevitably require manual inspection of data inconsistencies and could be greatly aided by visualization. This paper presents our design decisions in translating key data features identified through discussions with analysts into a concise visual encoding. Current visualization tools in this domain focus on local sequence errors making high-level inspection of the assembly difficult if not impossible. We present a novel interactive graph display, ABySS-Explorer, that emphasizes the global assembly structure while also integrating salient data features such as sequence length. Our tool replaces manual and in some cases pen-and-paper based analysis tasks, and we discuss how user feedback was incorporated into iterative design refinements. Finally, we touch on applications of this representation not initially considered in our design phase, suggesting the generality of this encoding for DNA sequence data.",
                "AuthorNamesDeduped": "Cydney B. Nielsen;Shaun D. Jackman;Inanç Birol;Steven J. M. Jones",
                "AuthorNames": "Cydney B. Nielsen;Shaun D. Jackman;Inanç Birol;Steven J.M. Jones",
                "AuthorAffiliation": "Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada",
                "InternalReferences": "0.1109/tvcg.2006.147",
                "AuthorKeywords": "Bioinformatics visualization, design study, DNA sequence, genome assembly",
                "AminerCitationCount": 95,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 1775,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1831,
                "i": [
                    1831
                ]
            }
        },
        {
            "name": "Steven J. M. Jones",
            "value": 35,
            "numPapers": 0,
            "cluster": "5",
            "visible": 1,
            "index": 1351,
            "x": 358.22079544966783,
            "y": 82.63087623526256,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "ABySS-Explorer: Visualizing Genome Sequence Assemblies",
                "DOI": "10.1109/tvcg.2009.116",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.116",
                "FirstPage": 881,
                "LastPage": 888,
                "PaperType": "J",
                "Abstract": "One bottleneck in large-scale genome sequencing projects is reconstructing the full genome sequence from the short subsequences produced by current technologies. The final stages of the genome assembly process inevitably require manual inspection of data inconsistencies and could be greatly aided by visualization. This paper presents our design decisions in translating key data features identified through discussions with analysts into a concise visual encoding. Current visualization tools in this domain focus on local sequence errors making high-level inspection of the assembly difficult if not impossible. We present a novel interactive graph display, ABySS-Explorer, that emphasizes the global assembly structure while also integrating salient data features such as sequence length. Our tool replaces manual and in some cases pen-and-paper based analysis tasks, and we discuss how user feedback was incorporated into iterative design refinements. Finally, we touch on applications of this representation not initially considered in our design phase, suggesting the generality of this encoding for DNA sequence data.",
                "AuthorNamesDeduped": "Cydney B. Nielsen;Shaun D. Jackman;Inanç Birol;Steven J. M. Jones",
                "AuthorNames": "Cydney B. Nielsen;Shaun D. Jackman;Inanç Birol;Steven J.M. Jones",
                "AuthorAffiliation": "Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada;Genome Sciences Centre, BC Cancer Agency, Canada",
                "InternalReferences": "0.1109/tvcg.2006.147",
                "AuthorKeywords": "Bioinformatics visualization, design study, DNA sequence, genome assembly",
                "AminerCitationCount": 95,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 1775,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1831,
                "i": [
                    1831
                ]
            }
        },
        {
            "name": "Philipp Koytek",
            "value": 18,
            "numPapers": 24,
            "cluster": "4",
            "visible": 1,
            "index": 1352,
            "x": -320.0755702110572,
            "y": 181.11220100828828,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "MyBrush: Brushing and Linking with Personal Agency",
                "DOI": "10.1109/tvcg.2017.2743859",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743859",
                "FirstPage": 605,
                "LastPage": 615,
                "PaperType": "J",
                "Abstract": "We extend the popular brushing and linking technique by incorporating personal agency in the interaction. We map existing research related to brushing and linking into a design space that deconstructs the interaction technique into three components: source (what is being brushed), link (the expression of relationship between source and target), and target (what is revealed as related to the source). Using this design space, we created MyBrush, a unified interface that offers personal agency over brushing and linking by giving people the flexibility to configure the source, link, and target of multiple brushes. The results of three focus groups demonstrate that people with different backgrounds leveraged personal agency in different ways, including performing complex tasks and showing links explicitly. We reflect on these results, paving the way for future research on the role of personal agency in information visualization.",
                "AuthorNamesDeduped": "Philipp Koytek;Charles Perin;Jo Vermeulen;Elisabeth André;Sheelagh Carpendale",
                "AuthorNames": "Philipp Koytek;Charles Perin;Jo Vermeulen;Elisabeth André;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Calgary, Augsburg University;City, University of London, University of Calgary;University of Calgary;Augsburg University;University of Calgary",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/visual.1991.175794;10.1109/infvis.2003.1249024;10.1109/tvcg.2011.201;10.1109/tvcg.2007.70521;10.1109/vast.2009.5333443;10.1109/tvcg.2008.153;10.1109/infvis.2004.64;10.1109/infvis.1999.801858;10.1109/tvcg.2014.2346260;10.1109/visual.2000.885739;10.1109/infvis.2002.1173157;10.1109/vast.2007.4389011;10.1109/tvcg.2006.147;10.1109/tvcg.2008.116;10.1109/tvcg.2013.154;10.1109/tvcg.2010.138;10.1109/visual.1995.485139;10.1109/tvcg.2011.183;10.1109/tvcg.2009.162;10.1109/visual.1994.346302;10.1109/infvis.2004.12;10.1109/visual.1996.567800;10.1109/tvcg.2014.2346279;10.1109/infvis.1996.559216",
                "AuthorKeywords": "Brushing,linking,personal agency,coordinated multiple views,interaction,design space,information visualization",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 939,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 801,
                "i": [
                    801
                ]
            }
        },
        {
            "name": "Jo Vermeulen",
            "value": 39,
            "numPapers": 30,
            "cluster": "4",
            "visible": 1,
            "index": 1353,
            "x": 113.71624613341918,
            "y": -349.88371691938397,
            "vy": 0,
            "vx": 0,
            "r": 1.0449050086355787,
            "node": {
                "Conference": "Vis",
                "Year": 2021,
                "Title": "What's the Situation with Situated Visualization? A Survey and Perspectives on Situatedness",
                "DOI": "10.1109/tvcg.2021.3114835",
                "Link": "http://dx.doi.org/10.1109/TVCG.2021.3114835",
                "FirstPage": 107,
                "LastPage": 117,
                "PaperType": "J",
                "Abstract": "Situated visualization is an emerging concept within visualization, in which data is visualized in situ, where it is relevant to people. The concept has gained interest from multiple research communities, including visualization, human-computer interaction (HCI) and augmented reality. This has led to a range of explorations and applications of the concept, however, this early work has focused on the operational aspect of situatedness leading to inconsistent adoption of the concept and terminology. First, we contribute a literature survey in which we analyze 44 papers that explicitly use the term “situated visualization” to provide an overview of the research area, how it defines situated visualization, common application areas and technology used, as well as type of data and type of visualizations. Our survey shows that research on situated visualization has focused on technology-centric approaches that foreground a spatial understanding of situatedness. Secondly, we contribute five perspectives on situatedness (space, time, place, activity, and community) that together expand on the prevalent notion of situatedness in the corpus. We draw from six case studies and prior theoretical developments in HCI. Each perspective develops a generative way of looking at and working with situatedness in design and research. We outline future directions, including considering technology, material and aesthetics, leveraging the perspectives for design, and methods for stronger engagement with target audiences. We conclude with opportunities to consolidate situated visualization research.",
                "AuthorNamesDeduped": "Nathalie Bressa;Henrik Korsgaard;Aurélien Tabard;Steven Houben;Jo Vermeulen",
                "AuthorNames": "Nathalie Bressa;Henrik Korsgaard;Aurélien Tabard;Steven Houben;Jo Vermeulen",
                "AuthorAffiliation": "Aarhus University, Denmark;Aarhus University, Denmark;Université Claude Bernard Lyon 1, LIRIS, CNRS UMR5205, France;Eindhoven University of Technology, Netherlands;Autodesk Research, Canada and Aarhus University, Denmark",
                "InternalReferences": "0.1109/tvcg.2020.3030472;10.1109/tvcg.2007.70541;10.1109/tvcg.2011.196;10.1109/tvcg.2018.2865152;10.1109/tvcg.2020.3030400;10.1109/tvcg.2019.2934282;10.1109/tvcg.2016.2598608",
                "AuthorKeywords": "Situated visualization,literature survey,situatedness",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 48,
                "PubsCitedCrossRef": 108,
                "DownloadsXplore": 2054,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 253,
                "i": [
                    253
                ]
            }
        },
        {
            "name": "Elisabeth André",
            "value": 18,
            "numPapers": 24,
            "cluster": "4",
            "visible": 1,
            "index": 1354,
            "x": 152.54855622521336,
            "y": 334.931243680853,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "MyBrush: Brushing and Linking with Personal Agency",
                "DOI": "10.1109/tvcg.2017.2743859",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743859",
                "FirstPage": 605,
                "LastPage": 615,
                "PaperType": "J",
                "Abstract": "We extend the popular brushing and linking technique by incorporating personal agency in the interaction. We map existing research related to brushing and linking into a design space that deconstructs the interaction technique into three components: source (what is being brushed), link (the expression of relationship between source and target), and target (what is revealed as related to the source). Using this design space, we created MyBrush, a unified interface that offers personal agency over brushing and linking by giving people the flexibility to configure the source, link, and target of multiple brushes. The results of three focus groups demonstrate that people with different backgrounds leveraged personal agency in different ways, including performing complex tasks and showing links explicitly. We reflect on these results, paving the way for future research on the role of personal agency in information visualization.",
                "AuthorNamesDeduped": "Philipp Koytek;Charles Perin;Jo Vermeulen;Elisabeth André;Sheelagh Carpendale",
                "AuthorNames": "Philipp Koytek;Charles Perin;Jo Vermeulen;Elisabeth André;Sheelagh Carpendale",
                "AuthorAffiliation": "University of Calgary, Augsburg University;City, University of London, University of Calgary;University of Calgary;Augsburg University;University of Calgary",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/visual.1991.175794;10.1109/infvis.2003.1249024;10.1109/tvcg.2011.201;10.1109/tvcg.2007.70521;10.1109/vast.2009.5333443;10.1109/tvcg.2008.153;10.1109/infvis.2004.64;10.1109/infvis.1999.801858;10.1109/tvcg.2014.2346260;10.1109/visual.2000.885739;10.1109/infvis.2002.1173157;10.1109/vast.2007.4389011;10.1109/tvcg.2006.147;10.1109/tvcg.2008.116;10.1109/tvcg.2013.154;10.1109/tvcg.2010.138;10.1109/visual.1995.485139;10.1109/tvcg.2011.183;10.1109/tvcg.2009.162;10.1109/visual.1994.346302;10.1109/infvis.2004.12;10.1109/visual.1996.567800;10.1109/tvcg.2014.2346279;10.1109/infvis.1996.559216",
                "AuthorKeywords": "Brushing,linking,personal agency,coordinated multiple views,interaction,design space,information visualization",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 82,
                "DownloadsXplore": 939,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 801,
                "i": [
                    801
                ]
            }
        },
        {
            "name": "Tiffany Wun",
            "value": 22,
            "numPapers": 38,
            "cluster": "3",
            "visible": 1,
            "index": 1355,
            "x": -338.8523768238458,
            "y": -143.97592410132458,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Assessing the Graphical Perception of Time and Speed on 2D+Time Trajectories",
                "DOI": "10.1109/tvcg.2017.2743918",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743918",
                "FirstPage": 698,
                "LastPage": 708,
                "PaperType": "J",
                "Abstract": "We empirically evaluate the extent to which people perceive non-constant time and speed encoded on 2D paths. In our graphical perception study, we evaluate nine encodings from the literature for both straight and curved paths. Visualizing time and speed information is a challenge when the x and y axes already encode other data dimensions, for example when plotting a trip on a map. This is particularly true in disciplines such as time-geography and movement analytics that often require visualizing spatio-temporal trajectories. A common approach is to use 2D+time trajectories, which are 2D paths for which time is an additional dimension. However, there are currently no guidelines regarding how to represent time and speed on such paths. Our study results provide InfoVis designers with clear guidance regarding which encodings to use and which ones to avoid; in particular, we suggest using color value to encode speed and segment length to encode time whenever possible.",
                "AuthorNamesDeduped": "Charles Perin;Tiffany Wun;Richard Pusch;Sheelagh Carpendale",
                "AuthorNames": "Charles Perin;Tiffany Wun;Richard Pusch;Sheelagh Carpendale",
                "AuthorAffiliation": "City, University of London, University of Calgary;University of Calgary;University of Calgary;University of Calgary",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467851;10.1109/tvcg.2012.251;10.1109/tvcg.2012.220;10.1109/tvcg.2014.2346424;10.1109/tvcg.2014.2346298;10.1109/tvcg.2016.2598594;10.1109/tvcg.2015.2467752;10.1109/tvcg.2015.2467951;10.1109/tvcg.2015.2467951;10.1109/vast.2008.4677355;10.1109/tvcg.2014.2346250;10.1109/tvcg.2012.229;10.1109/tvcg.2009.126;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346279;10.1109/tvcg.2013.192;10.1109/tvcg.2009.114;10.1109/tvcg.2014.2346320;10.1109/tvcg.2012.265",
                "AuthorKeywords": "Trajectory visualization,visual encoding,movement data,graphical perception,quantitative evaluation",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 754,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 809,
                "i": [
                    809
                ]
            }
        },
        {
            "name": "Richard Pusch",
            "value": 22,
            "numPapers": 38,
            "cluster": "3",
            "visible": 1,
            "index": 1356,
            "x": 347.24155159346043,
            "y": -122.77338818720517,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Assessing the Graphical Perception of Time and Speed on 2D+Time Trajectories",
                "DOI": "10.1109/tvcg.2017.2743918",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2743918",
                "FirstPage": 698,
                "LastPage": 708,
                "PaperType": "J",
                "Abstract": "We empirically evaluate the extent to which people perceive non-constant time and speed encoded on 2D paths. In our graphical perception study, we evaluate nine encodings from the literature for both straight and curved paths. Visualizing time and speed information is a challenge when the x and y axes already encode other data dimensions, for example when plotting a trip on a map. This is particularly true in disciplines such as time-geography and movement analytics that often require visualizing spatio-temporal trajectories. A common approach is to use 2D+time trajectories, which are 2D paths for which time is an additional dimension. However, there are currently no guidelines regarding how to represent time and speed on such paths. Our study results provide InfoVis designers with clear guidance regarding which encodings to use and which ones to avoid; in particular, we suggest using color value to encode speed and segment length to encode time whenever possible.",
                "AuthorNamesDeduped": "Charles Perin;Tiffany Wun;Richard Pusch;Sheelagh Carpendale",
                "AuthorNames": "Charles Perin;Tiffany Wun;Richard Pusch;Sheelagh Carpendale",
                "AuthorAffiliation": "City, University of London, University of Calgary;University of Calgary;University of Calgary;University of Calgary",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2015.2467851;10.1109/tvcg.2012.251;10.1109/tvcg.2012.220;10.1109/tvcg.2014.2346424;10.1109/tvcg.2014.2346298;10.1109/tvcg.2016.2598594;10.1109/tvcg.2015.2467752;10.1109/tvcg.2015.2467951;10.1109/tvcg.2015.2467951;10.1109/vast.2008.4677355;10.1109/tvcg.2014.2346250;10.1109/tvcg.2012.229;10.1109/tvcg.2009.126;10.1109/tvcg.2007.70594;10.1109/tvcg.2014.2346279;10.1109/tvcg.2013.192;10.1109/tvcg.2009.114;10.1109/tvcg.2014.2346320;10.1109/tvcg.2012.265",
                "AuthorKeywords": "Trajectory visualization,visual encoding,movement data,graphical perception,quantitative evaluation",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 79,
                "DownloadsXplore": 754,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 809,
                "i": [
                    809
                ]
            }
        },
        {
            "name": "Doantam Phan",
            "value": 143,
            "numPapers": 1,
            "cluster": "3",
            "visible": 1,
            "index": 1357,
            "x": -173.17667810619298,
            "y": 325.207377161257,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Flow map layout",
                "DOI": "10.1109/infvis.2005.1532150",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532150",
                "FirstPage": 219,
                "LastPage": 224,
                "PaperType": "C",
                "Abstract": "Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data.",
                "AuthorNamesDeduped": "Doantam Phan;Ling Xiao 0005;Ron B. Yeh;Pat Hanrahan;Terry Winograd",
                "AuthorNames": "Doantam Phan;Ling Xiao;R. Yeh;P. Hanrahan;Terry Winograd",
                "AuthorAffiliation": "Stanford University;Stanford University;Stanford University;Stanford University;Stanford University",
                "InternalReferences": "0.1109/infvis.1995.528697;10.1109/infvis.1996.559226",
                "AuthorKeywords": "flow maps, GIS, hierarchical clustering",
                "AminerCitationCount": 423,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2337,
                "i": [
                    2337
                ]
            }
        },
        {
            "name": "Ling Xiao 0005",
            "value": 199,
            "numPapers": 2,
            "cluster": "3",
            "visible": 1,
            "index": 1358,
            "x": -92.01320602115082,
            "y": -356.90834946483,
            "vy": 0,
            "vx": 0,
            "r": 1.2291306850892343,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Flow map layout",
                "DOI": "10.1109/infvis.2005.1532150",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532150",
                "FirstPage": 219,
                "LastPage": 224,
                "PaperType": "C",
                "Abstract": "Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data.",
                "AuthorNamesDeduped": "Doantam Phan;Ling Xiao 0005;Ron B. Yeh;Pat Hanrahan;Terry Winograd",
                "AuthorNames": "Doantam Phan;Ling Xiao;R. Yeh;P. Hanrahan;Terry Winograd",
                "AuthorAffiliation": "Stanford University;Stanford University;Stanford University;Stanford University;Stanford University",
                "InternalReferences": "0.1109/infvis.1995.528697;10.1109/infvis.1996.559226",
                "AuthorKeywords": "flow maps, GIS, hierarchical clustering",
                "AminerCitationCount": 423,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2337,
                "i": [
                    2337
                ]
            }
        },
        {
            "name": "Ron B. Yeh",
            "value": 143,
            "numPapers": 1,
            "cluster": "3",
            "visible": 1,
            "index": 1359,
            "x": 309.04948432767526,
            "y": 201.0930536761478,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Flow map layout",
                "DOI": "10.1109/infvis.2005.1532150",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532150",
                "FirstPage": 219,
                "LastPage": 224,
                "PaperType": "C",
                "Abstract": "Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data.",
                "AuthorNamesDeduped": "Doantam Phan;Ling Xiao 0005;Ron B. Yeh;Pat Hanrahan;Terry Winograd",
                "AuthorNames": "Doantam Phan;Ling Xiao;R. Yeh;P. Hanrahan;Terry Winograd",
                "AuthorAffiliation": "Stanford University;Stanford University;Stanford University;Stanford University;Stanford University",
                "InternalReferences": "0.1109/infvis.1995.528697;10.1109/infvis.1996.559226",
                "AuthorKeywords": "flow maps, GIS, hierarchical clustering",
                "AminerCitationCount": 423,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2337,
                "i": [
                    2337
                ]
            }
        },
        {
            "name": "Terry Winograd",
            "value": 143,
            "numPapers": 1,
            "cluster": "3",
            "visible": 1,
            "index": 1360,
            "x": -363.8536226642625,
            "y": 60.5024071760166,
            "vy": 0,
            "vx": 0,
            "r": 1.164651698330455,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "Flow map layout",
                "DOI": "10.1109/infvis.2005.1532150",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532150",
                "FirstPage": 219,
                "LastPage": 224,
                "PaperType": "C",
                "Abstract": "Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data.",
                "AuthorNamesDeduped": "Doantam Phan;Ling Xiao 0005;Ron B. Yeh;Pat Hanrahan;Terry Winograd",
                "AuthorNames": "Doantam Phan;Ling Xiao;R. Yeh;P. Hanrahan;Terry Winograd",
                "AuthorAffiliation": "Stanford University;Stanford University;Stanford University;Stanford University;Stanford University",
                "InternalReferences": "0.1109/infvis.1995.528697;10.1109/infvis.1996.559226",
                "AuthorKeywords": "flow maps, GIS, hierarchical clustering",
                "AminerCitationCount": 423,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1668,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2337,
                "i": [
                    2337
                ]
            }
        },
        {
            "name": "Stefan Müller Arisona",
            "value": 41,
            "numPapers": 24,
            "cluster": "1",
            "visible": 1,
            "index": 1361,
            "x": 227.509114954395,
            "y": -290.4988857339523,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "StreetVizor: Visual Exploration of Human-Scale Urban Forms Based on Street Views",
                "DOI": "10.1109/tvcg.2017.2744159",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744159",
                "FirstPage": 1004,
                "LastPage": 1013,
                "PaperType": "J",
                "Abstract": "Urban forms at human-scale, i.e., urban environments that individuals can sense (e.g., sight, smell, and touch) in their daily lives, can provide unprecedented insights on a variety of applications, such as urban planning and environment auditing. The analysis of urban forms can help planners develop high-quality urban spaces through evidence-based design. However, such analysis is complex because of the involvement of spatial, multi-scale (i.e., city, region, and street), and multivariate (e.g., greenery and sky ratios) natures of urban forms. In addition, current methods either lack quantitative measurements or are limited to a small area. The primary contribution of this work is the design of StreetVizor, an interactive visual analytics system that helps planners leverage their domain knowledge in exploring human-scale urban forms based on street view images. Our system presents two-stage visual exploration: 1) an AOI Explorer for the visual comparison of spatial distributions and quantitative measurements in two areas-of-interest (AOIs) at city- and region-scales; 2) and a Street Explorer with a novel parallel coordinate plot for the exploration of the fine-grained details of the urban forms at the street-scale. We integrate visualization techniques with machine learning models to facilitate the detection of street view patterns. We illustrate the applicability of our approach with case studies on the real-world datasets of four cities, i.e., Hong Kong, Singapore, Greater London and New York City. Interviews with domain experts demonstrate the effectiveness of our system in facilitating various analytical tasks.",
                "AuthorNamesDeduped": "Qiaomu Shen;Wei Zeng 0004;Yu Ye 0002;Stefan Müller Arisona;Simon Schubiger;Remo Burkhard;Huamin Qu",
                "AuthorNames": "Qiaomu Shen;Wei Zeng;Yu Ye;Stefan Müller Arisona;Simon Schubiger;Remo Burkhard;Huamin Qu",
                "AuthorAffiliation": "Hong Kong University of Science and Technology;Future Cities Laboratory, ETH Zurich;Tongji University;University of Applied Sciences and Arts Northwestern Switzerland FHNW;University of Applied Sciences and Arts Northwestern Switzerland FHNW;Future Cities Laboratory, ETH Zurich;Hong Kong University of Science and Technology",
                "InternalReferences": "0.1109/tvcg.2014.2346446;10.1109/tvcg.2008.166;10.1109/tvcg.2014.2346594;10.1109/tvcg.2015.2467619;10.1109/tvcg.2011.176;10.1109/tvcg.2013.226;10.1109/visual.1999.809866;10.1109/tvcg.2015.2467199;10.1109/tvcg.2013.179;10.1109/tvcg.2016.2598432;10.1109/tvcg.2007.70523;10.1109/tvcg.2011.181;10.1109/tvcg.2014.2346265;10.1109/tvcg.2016.2598694;10.1109/tvcg.2013.228;10.1109/tvcg.2013.221;10.1109/tvcg.2016.2598472",
                "AuthorKeywords": "Urban forms,human scale,street view,visual analytics",
                "AminerCitationCount": 78,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 2590,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 814,
                "i": [
                    814
                ]
            }
        },
        {
            "name": "Lei Shi 0002",
            "value": 105,
            "numPapers": 18,
            "cluster": "1",
            "visible": 1,
            "index": 1362,
            "x": 28.4814908022761,
            "y": 368.02011450691094,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Blockwise Human Brain Network Visual Comparison Using NodeTrix Representation",
                "DOI": "10.1109/tvcg.2016.2598472",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598472",
                "FirstPage": 181,
                "LastPage": 190,
                "PaperType": "J",
                "Abstract": "Visually comparing human brain networks from multiple population groups serves as an important task in the field of brain connectomics. The commonly used brain network representation, consisting of nodes and edges, may not be able to reveal the most compelling network differences when the reconstructed networks are dense and homogeneous. In this paper, we leveraged the block information on the Region Of Interest (ROI) based brain networks and studied the problem of blockwise brain network visual comparison. An integrated visual analytics framework was proposed. In the first stage, a two-level ROI block hierarchy was detected by optimizing the anatomical structure and the predictive comparison performance simultaneously. In the second stage, the NodeTrix representation was adopted and customized to visualize the brain network with block information. We conducted controlled user experiments and case studies to evaluate our proposed solution. Results indicated that our visual analytics method outperformed the commonly used node-link graph and adjacency matrix design in the blockwise network comparison tasks. We have shown compelling findings from two real-world brain network data sets, which are consistent with the prior connectomics studies.",
                "AuthorNamesDeduped": "Xinsong Yang;Lei Shi 0002;Madelaine Daianu;Hanghang Tong;Qingsong Liu;Paul M. Thompson",
                "AuthorNames": "Xinsong Yang;Lei Shi;Madelaine Daianu;Hanghang Tong;Qingsong Liu;Paul Thompson",
                "AuthorAffiliation": "Imaging Genetics Center, University of Southern California and Chinese Academy of Sciences, Institute of Software;Chinese Academy of Sciences, Institute of Software;Imaging Genetics Center, University of Southern California;School of Computing, Informatics and Decision Systems Engineering, Arizona State University;Chinese Academy of Sciences, Institute of Software;Imaging Genetics Center, University of Southern California",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/visual.2005.1532773;10.1109/tvcg.2007.70582",
                "AuthorKeywords": "Brain Network;Visual Comparison;Hybrid Representation",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 1029,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 983,
                "i": [
                    983
                ]
            }
        },
        {
            "name": "Jiang Zhang 0002",
            "value": 12,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1363,
            "x": -269.6942962994566,
            "y": -252.22011526351525,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing",
                "DOI": "10.1109/tvcg.2017.2744059",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744059",
                "FirstPage": 954,
                "LastPage": 963,
                "PaperType": "J",
                "Abstract": "We propose a dynamically load-balanced algorithm for parallel particle tracing, which periodically attempts to evenly redistribute particles across processes based on k-d tree decomposition. Each process is assigned with (1) a statically partitioned, axis-aligned data block that partially overlaps with neighboring blocks in other processes and (2) a dynamically determined k-d tree leaf node that bounds the active particles for computation; the bounds of the k-d tree nodes are constrained by the geometries of data blocks. Given a certain degree of overlap between blocks, our method can balance the number of particles as much as possible. Compared with other load-balancing algorithms for parallel particle tracing, the proposed method does not require any preanalysis, does not use any heuristics based on flow features, does not make any assumptions about seed distribution, does not move any data blocks during the run, and does not need any master process for work redistribution. Based on a comprehensive performance study up to 8K processes on a Blue Gene/Q system, the proposed algorithm outperforms baseline approaches in both load balance and scalability on various flow visualization and analysis problems.",
                "AuthorNamesDeduped": "Jiang Zhang 0002;Hanqi Guo 0001;Fan Hong;Xiaoru Yuan;Tom Peterka",
                "AuthorNames": "Jiang Zhang;Hanqi Guo;Fan Hong;Xiaoru Yuan;Tom Peterka",
                "AuthorAffiliation": "Ministry of Education, Peking University;Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA;Ministry of Education, Peking University;Ministry of Education, Peking University;Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA",
                "InternalReferences": "0.1109/tvcg.2013.128;10.1109/tvcg.2007.70551;10.1109/tvcg.2013.144;10.1109/tvcg.2011.219;10.1109/visual.1997.663898;10.1109/tvcg.2017.2744059",
                "AuthorKeywords": "Parallel particle tracing,dynamic load balancing,k-d trees,performance analysis",
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 986,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 823,
                "i": [
                    823
                ]
            }
        },
        {
            "name": "Fan Hong",
            "value": 9,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1364,
            "x": 369.37179597528456,
            "y": 3.80477831059037,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "SciVis",
                "Year": 2017,
                "Title": "Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing",
                "DOI": "10.1109/tvcg.2017.2744059",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744059",
                "FirstPage": 954,
                "LastPage": 963,
                "PaperType": "J",
                "Abstract": "We propose a dynamically load-balanced algorithm for parallel particle tracing, which periodically attempts to evenly redistribute particles across processes based on k-d tree decomposition. Each process is assigned with (1) a statically partitioned, axis-aligned data block that partially overlaps with neighboring blocks in other processes and (2) a dynamically determined k-d tree leaf node that bounds the active particles for computation; the bounds of the k-d tree nodes are constrained by the geometries of data blocks. Given a certain degree of overlap between blocks, our method can balance the number of particles as much as possible. Compared with other load-balancing algorithms for parallel particle tracing, the proposed method does not require any preanalysis, does not use any heuristics based on flow features, does not make any assumptions about seed distribution, does not move any data blocks during the run, and does not need any master process for work redistribution. Based on a comprehensive performance study up to 8K processes on a Blue Gene/Q system, the proposed algorithm outperforms baseline approaches in both load balance and scalability on various flow visualization and analysis problems.",
                "AuthorNamesDeduped": "Jiang Zhang 0002;Hanqi Guo 0001;Fan Hong;Xiaoru Yuan;Tom Peterka",
                "AuthorNames": "Jiang Zhang;Hanqi Guo;Fan Hong;Xiaoru Yuan;Tom Peterka",
                "AuthorAffiliation": "Ministry of Education, Peking University;Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA;Ministry of Education, Peking University;Ministry of Education, Peking University;Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA",
                "InternalReferences": "0.1109/tvcg.2013.128;10.1109/tvcg.2007.70551;10.1109/tvcg.2013.144;10.1109/tvcg.2011.219;10.1109/visual.1997.663898;10.1109/tvcg.2017.2744059",
                "AuthorKeywords": "Parallel particle tracing,dynamic load balancing,k-d trees,performance analysis",
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 986,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 823,
                "i": [
                    823
                ]
            }
        },
        {
            "name": "Joe Bruce",
            "value": 69,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1365,
            "x": -275.0340842537739,
            "y": 246.7919214615583,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Mixed-initiative visual analytics using task-driven recommendations",
                "DOI": "10.1109/vast.2015.7347625",
                "Link": "http://dx.doi.org/10.1109/VAST.2015.7347625",
                "FirstPage": 9,
                "LastPage": 16,
                "PaperType": "C",
                "Abstract": "Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support discovery and sensemaking tasks, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Sensemaking researchers have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present candidate design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences about user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.",
                "AuthorNamesDeduped": "Kristin A. Cook;Nick Cramer;David J. Israel;Michael Wolverton;Joe Bruce;Russ Burtner;Alex Endert",
                "AuthorNames": "Kristin Cook;Nick Cramer;David Israel;Michael Wolverton;Joe Bruce;Russ Burtner;Alex Endert",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;SRI International;SRI International;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Georgia Institute of Technology",
                "InternalReferences": "0.1109/vast.2012.6400486;10.1109/vast.2011.6102438;10.1109/vast.2012.6400559;10.1109/tvcg.2014.2346573;10.1109/vast.2014.7042492;10.1109/tvcg.2008.174;10.1109/tvcg.2013.225",
                "AuthorKeywords": "mixed-initiative visual analytics, task modeling, recommender systems, sensemaking",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 769,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1128,
                "i": [
                    1128
                ]
            }
        },
        {
            "name": "Deokgun Park 0001",
            "value": 81,
            "numPapers": 16,
            "cluster": "1",
            "visible": 1,
            "index": 1366,
            "x": 36.109241217398335,
            "y": -367.8941732328792,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections",
                "DOI": "10.1109/tvcg.2016.2598445",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598445",
                "FirstPage": 151,
                "LastPage": 160,
                "PaperType": "J",
                "Abstract": "Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.",
                "AuthorNamesDeduped": "Minjeong Kim;Kyeongpil Kang;Deokgun Park 0001;Jaegul Choo;Niklas Elmqvist",
                "AuthorNames": "Minjeong Kim;Kyeongpil Kang;Deokgun Park;Jaegul Choo;Niklas Elmqvist",
                "AuthorAffiliation": "Korea University;Korea University;University of Maryland, College Park, MD, USA;Korea University;University of Maryland, College Park, MD, USA",
                "InternalReferences": "0.1109/infvis.2003.1249014;10.1109/tvcg.2013.212;10.1109/infvis.2003.1249008;10.1109/infvis.2004.43;10.1109/tvcg.2014.2346574;10.1109/tvcg.2011.239;10.1109/tvcg.2010.154;10.1109/vast.2014.7042494",
                "AuthorKeywords": "topic modeling;nonnegative matrix factorization;t-distributed stochastic neighbor embedding;magic lens;text analytics",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 2200,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 967,
                "i": [
                    967
                ]
            }
        },
        {
            "name": "Subhajit Das 0002",
            "value": 102,
            "numPapers": 16,
            "cluster": "5",
            "visible": 1,
            "index": 1367,
            "x": 221.96428445119022,
            "y": 295.77331933098884,
            "vy": 0,
            "vx": 0,
            "r": 1.1174438687392054,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Podium: Ranking Data Using Mixed-Initiative Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745078",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745078",
                "FirstPage": 288,
                "LastPage": 297,
                "PaperType": "J",
                "Abstract": "People often rank and order data points as a vital part of making decisions. Multi-attribute ranking systems are a common tool used to make these data-driven decisions. Such systems often take the form of a table-based visualization in which users assign weights to the attributes representing the quantifiable importance of each attribute to a decision, which the system then uses to compute a ranking of the data. However, these systems assume that users are able to quantify their conceptual understanding of how important particular attributes are to a decision. This is not always easy or even possible for users to do. Rather, people often have a more holistic understanding of the data. They form opinions that data point A is better than data point B but do not necessarily know which attributes are important. To address these challenges, we present a visual analytic application to help people rank multi-variate data points. We developed a prototype system, Podium, that allows users to drag rows in the table to rank order data points based on their perception of the relative value of the data. Podium then infers a weighting model using Ranking SVM that satisfies the user's data preferences as closely as possible. Whereas past systems help users understand the relationships between data points based on changes to attribute weights, our approach helps users to understand the attributes that might inform their understanding of the data. We present two usage scenarios to describe some of the potential uses of our proposed technique: (1) understanding which attributes contribute to a user's subjective preferences for data, and (2) deconstructing attributes of importance for existing rankings. Our proposed approach makes powerful machine learning techniques more usable to those who may not have expertise in these areas.",
                "AuthorNamesDeduped": "Emily Wall;Subhajit Das 0002;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorNames": "Emily Wall;Subhajit Das;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;DePaul University, Chicago, IL, USA;Georgia Institute of Technology, Atlanta, GA, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2015.7347625;10.1109/tvcg.2016.2598594;10.1109/vast.2011.6102449;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.253;10.1109/vast.2017.8585669",
                "AuthorKeywords": "Mixed-initiative visual analytics,multi-attribute ranking,user interaction",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1419,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 858,
                "i": [
                    858
                ]
            }
        },
        {
            "name": "Ravish Chawla",
            "value": 95,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1368,
            "x": -363.5944303329006,
            "y": -68.1842374078754,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Podium: Ranking Data Using Mixed-Initiative Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745078",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745078",
                "FirstPage": 288,
                "LastPage": 297,
                "PaperType": "J",
                "Abstract": "People often rank and order data points as a vital part of making decisions. Multi-attribute ranking systems are a common tool used to make these data-driven decisions. Such systems often take the form of a table-based visualization in which users assign weights to the attributes representing the quantifiable importance of each attribute to a decision, which the system then uses to compute a ranking of the data. However, these systems assume that users are able to quantify their conceptual understanding of how important particular attributes are to a decision. This is not always easy or even possible for users to do. Rather, people often have a more holistic understanding of the data. They form opinions that data point A is better than data point B but do not necessarily know which attributes are important. To address these challenges, we present a visual analytic application to help people rank multi-variate data points. We developed a prototype system, Podium, that allows users to drag rows in the table to rank order data points based on their perception of the relative value of the data. Podium then infers a weighting model using Ranking SVM that satisfies the user's data preferences as closely as possible. Whereas past systems help users understand the relationships between data points based on changes to attribute weights, our approach helps users to understand the attributes that might inform their understanding of the data. We present two usage scenarios to describe some of the potential uses of our proposed technique: (1) understanding which attributes contribute to a user's subjective preferences for data, and (2) deconstructing attributes of importance for existing rankings. Our proposed approach makes powerful machine learning techniques more usable to those who may not have expertise in these areas.",
                "AuthorNamesDeduped": "Emily Wall;Subhajit Das 0002;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorNames": "Emily Wall;Subhajit Das;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;DePaul University, Chicago, IL, USA;Georgia Institute of Technology, Atlanta, GA, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2015.7347625;10.1109/tvcg.2016.2598594;10.1109/vast.2011.6102449;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.253;10.1109/vast.2017.8585669",
                "AuthorKeywords": "Mixed-initiative visual analytics,multi-attribute ranking,user interaction",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1419,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 858,
                "i": [
                    858
                ]
            }
        },
        {
            "name": "Bharath Kalidindi",
            "value": 95,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1369,
            "x": 314.27576977653564,
            "y": -195.39892663821354,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "Podium: Ranking Data Using Mixed-Initiative Visual Analytics",
                "DOI": "10.1109/tvcg.2017.2745078",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745078",
                "FirstPage": 288,
                "LastPage": 297,
                "PaperType": "J",
                "Abstract": "People often rank and order data points as a vital part of making decisions. Multi-attribute ranking systems are a common tool used to make these data-driven decisions. Such systems often take the form of a table-based visualization in which users assign weights to the attributes representing the quantifiable importance of each attribute to a decision, which the system then uses to compute a ranking of the data. However, these systems assume that users are able to quantify their conceptual understanding of how important particular attributes are to a decision. This is not always easy or even possible for users to do. Rather, people often have a more holistic understanding of the data. They form opinions that data point A is better than data point B but do not necessarily know which attributes are important. To address these challenges, we present a visual analytic application to help people rank multi-variate data points. We developed a prototype system, Podium, that allows users to drag rows in the table to rank order data points based on their perception of the relative value of the data. Podium then infers a weighting model using Ranking SVM that satisfies the user's data preferences as closely as possible. Whereas past systems help users understand the relationships between data points based on changes to attribute weights, our approach helps users to understand the attributes that might inform their understanding of the data. We present two usage scenarios to describe some of the potential uses of our proposed technique: (1) understanding which attributes contribute to a user's subjective preferences for data, and (2) deconstructing attributes of importance for existing rankings. Our proposed approach makes powerful machine learning techniques more usable to those who may not have expertise in these areas.",
                "AuthorNamesDeduped": "Emily Wall;Subhajit Das 0002;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorNames": "Emily Wall;Subhajit Das;Ravish Chawla;Bharath Kalidindi;Eli T. Brown;Alex Endert",
                "AuthorAffiliation": "Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;Georgia Institute of Technology, Atlanta, GA, USA;DePaul University, Chicago, IL, USA;Georgia Institute of Technology, Atlanta, GA, USA",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast.2012.6400486;10.1109/tvcg.2014.2346575;10.1109/vast.2015.7347625;10.1109/tvcg.2016.2598594;10.1109/vast.2011.6102449;10.1109/tvcg.2013.173;10.1109/tvcg.2015.2467615;10.1109/tvcg.2016.2598446;10.1109/tvcg.2015.2467551;10.1109/tvcg.2016.2598839;10.1109/tvcg.2012.253;10.1109/vast.2017.8585669",
                "AuthorKeywords": "Mixed-initiative visual analytics,multi-attribute ranking,user interaction",
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1419,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 858,
                "i": [
                    858
                ]
            }
        },
        {
            "name": "Isaac Cho",
            "value": 67,
            "numPapers": 23,
            "cluster": "5",
            "visible": 1,
            "index": 1370,
            "x": -99.78350400824694,
            "y": 356.5014057866198,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "VAiRoma: A Visual Analytics System for Making Sense of Places, Times, and Events in Roman History",
                "DOI": "10.1109/tvcg.2015.2467971",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467971",
                "FirstPage": 210,
                "LastPage": 219,
                "PaperType": "J",
                "Abstract": "Learning and gaining knowledge of Roman history is an area of interest for students and citizens at large. This is an example of a subject with great sweep (with many interrelated sub-topics over, in this case, a 3,000 year history) that is hard to grasp by any individual and, in its full detail, is not available as a coherent story. In this paper, we propose a visual analytics approach to construct a data driven view of Roman history based on a large collection of Wikipedia articles. Extracting and enabling the discovery of useful knowledge on events, places, times, and their connections from large amounts of textual data has always been a challenging task. To this aim, we introduce VAiRoma, a visual analytics system that couples state-of-the-art text analysis methods with an intuitive visual interface to help users make sense of events, places, times, and more importantly, the relationships between them. VAiRoma goes beyond textual content exploration, as it permits users to compare, make connections, and externalize the findings all within the visual interface. As a result, VAiRoma allows users to learn and create new knowledge regarding Roman history in an informed way. We evaluated VAiRoma with 16 participants through a user study, with the task being to learn about roman piazzas through finding relevant articles and new relationships. Our study results showed that the VAiRoma system enables the participants to find more relevant articles and connections compared to Web searches and literature search conducted in a roman library. Subjective feedback on VAiRoma was also very positive. In addition, we ran two case studies that demonstrate how VAiRoma can be used for deeper analysis, permitting the rapid discovery and analysis of a small number of key documents even when the original collection contains hundreds of thousands of documents.",
                "AuthorNamesDeduped": "Isaac Cho;Wenwen Dou;Derek Xiaoyu Wang;Eric Sauda;William Ribarsky",
                "AuthorNames": "Isaac Cho;Wewnen Dou;Derek Xiaoyu Wang;Eric Sauda;William Ribarsky",
                "AuthorAffiliation": "UNC Charlotte;UNC Charlotte;;;",
                "InternalReferences": "0.1109/vast.2014.7042493;10.1109/vast.2007.4389012;10.1109/tvcg.2014.2346431;10.1109/tvcg.2007.70617;10.1109/tvcg.2008.178;10.1109/vast.2010.5652885;10.1109/tvcg.2011.239;10.1109/vast.2012.6400485;10.1109/tvcg.2013.162;10.1109/infvis.2000.885098;10.1109/tvcg.2011.179;10.1109/tvcg.2014.2346481;10.1109/infvis.2000.885091",
                "AuthorKeywords": "Visual Analytics, Text Analytics, Wikipedia",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 2132,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1118,
                "i": [
                    1118
                ]
            }
        },
        {
            "name": "Lorenz Linhardt",
            "value": 30,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1371,
            "x": -167.29699073765474,
            "y": -330.39630277913983,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees",
                "DOI": "10.1109/tvcg.2017.2745158",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2745158",
                "FirstPage": 174,
                "LastPage": 183,
                "PaperType": "J",
                "Abstract": "Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.",
                "AuthorNamesDeduped": "Thomas Mühlbacher;Lorenz Linhardt;Torsten Möller;Harald Piringer",
                "AuthorNames": "Thomas Mühlbacher;Lorenz Linhardt;Torsten Möller;Harald Piringer",
                "AuthorAffiliation": "VRVis Research Center;ETH Zurich;University of Vienna;VRVis Research Center",
                "InternalReferences": "0.1109/vast.2011.6102457;10.1109/tvcg.2010.190;10.1109/tvcg.2008.145;10.1109/tvcg.2014.2346578;10.1109/tvcg.2016.2598589;10.1109/tvcg.2009.110;10.1109/tvcg.2014.2346321;10.1109/tvcg.2010.130;10.1109/tvcg.2011.248;10.1109/vast.2011.6102453",
                "AuthorKeywords": "Model selection,classification trees,visual parameter search,sensitivity analysis,Pareto optimality",
                "AminerCitationCount": 50,
                "CitationCountCrossRef": 38,
                "PubsCitedCrossRef": 51,
                "DownloadsXplore": 1068,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 863,
                "i": [
                    863
                ]
            }
        },
        {
            "name": "Philipp Muigg",
            "value": 149,
            "numPapers": 39,
            "cluster": "6",
            "visible": 1,
            "index": 1372,
            "x": 346.6654029295611,
            "y": 130.6640670256558,
            "vy": 0,
            "vx": 0,
            "r": 1.1715601611974669,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "A Multi-Threading Architecture to Support Interactive Visual Exploration",
                "DOI": "10.1109/tvcg.2009.110",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.110",
                "FirstPage": 1113,
                "LastPage": 1120,
                "PaperType": "J",
                "Abstract": "During continuous user interaction, it is hard to provide rich visual feedback at interactive rates for datasets containing millions of entries. The contribution of this paper is a generic architecture that ensures responsiveness of the application even when dealing with large data and that is applicable to most types of information visualizations. Our architecture builds on the separation of the main application thread and the visualization thread, which can be cancelled early due to user interaction. In combination with a layer mechanism, our architecture facilitates generating previews incrementally to provide rich visual feedback quickly. To help avoiding common pitfalls of multi-threading, we discuss synchronization and communication in detail. We explicitly denote design choices to control trade-offs. A quantitative evaluation based on the system VI S P L ORE shows fast visual feedback during continuous interaction even for millions of entries. We describe instantiations of our architecture in additional tools.",
                "AuthorNamesDeduped": "Harald Piringer;Christian Tominski;Philipp Muigg;Wolfgang Berger",
                "AuthorNames": "Harald Piringer;Christian Tominski;Philipp Muigg;Wolfgang Berger",
                "AuthorAffiliation": "VRVis Research Center, Vienna, Austria;Institute for Computer Science, University of Rostock, Germany;University of Technology, Vienna, Vienna, Austria;VRVis Research Center, Vienna, Austria",
                "InternalReferences": "0.1109/visual.1999.809891;10.1109/tvcg.2006.138;10.1109/infvis.1997.636790;10.1109/tvcg.2006.171;10.1109/infvis.2004.12;10.1109/infvis.2002.1173156;10.1109/tvcg.2007.70540;10.1109/tvcg.2006.178;10.1109/tvcg.2006.170;10.1109/infvis.2004.64;10.1109/infvis.2000.885092;10.1109/vast.2008.4677357",
                "AuthorKeywords": "Information visualization architecture, continuous interaction, multi-threading, layer, preview",
                "AminerCitationCount": 81,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 648,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1836,
                "i": [
                    1836
                ]
            }
        },
        {
            "name": "Wolfgang Berger",
            "value": 97,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1373,
            "x": -344.0078414891307,
            "y": 137.871697581444,
            "vy": 0,
            "vx": 0,
            "r": 1.1116868163500289,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "A Multi-Threading Architecture to Support Interactive Visual Exploration",
                "DOI": "10.1109/tvcg.2009.110",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.110",
                "FirstPage": 1113,
                "LastPage": 1120,
                "PaperType": "J",
                "Abstract": "During continuous user interaction, it is hard to provide rich visual feedback at interactive rates for datasets containing millions of entries. The contribution of this paper is a generic architecture that ensures responsiveness of the application even when dealing with large data and that is applicable to most types of information visualizations. Our architecture builds on the separation of the main application thread and the visualization thread, which can be cancelled early due to user interaction. In combination with a layer mechanism, our architecture facilitates generating previews incrementally to provide rich visual feedback quickly. To help avoiding common pitfalls of multi-threading, we discuss synchronization and communication in detail. We explicitly denote design choices to control trade-offs. A quantitative evaluation based on the system VI S P L ORE shows fast visual feedback during continuous interaction even for millions of entries. We describe instantiations of our architecture in additional tools.",
                "AuthorNamesDeduped": "Harald Piringer;Christian Tominski;Philipp Muigg;Wolfgang Berger",
                "AuthorNames": "Harald Piringer;Christian Tominski;Philipp Muigg;Wolfgang Berger",
                "AuthorAffiliation": "VRVis Research Center, Vienna, Austria;Institute for Computer Science, University of Rostock, Germany;University of Technology, Vienna, Vienna, Austria;VRVis Research Center, Vienna, Austria",
                "InternalReferences": "0.1109/visual.1999.809891;10.1109/tvcg.2006.138;10.1109/infvis.1997.636790;10.1109/tvcg.2006.171;10.1109/infvis.2004.12;10.1109/infvis.2002.1173156;10.1109/tvcg.2007.70540;10.1109/tvcg.2006.178;10.1109/tvcg.2006.170;10.1109/infvis.2004.64;10.1109/infvis.2000.885092;10.1109/vast.2008.4677357",
                "AuthorKeywords": "Information visualization architecture, continuous interaction, multi-threading, layer, preview",
                "AminerCitationCount": 81,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 648,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1836,
                "i": [
                    1836
                ]
            }
        },
        {
            "name": "Ahmed Saad",
            "value": 96,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1374,
            "x": 160.5881101072672,
            "y": -334.1578352996923,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Tuner: Principled Parameter finding for Image Segmentation Algorithms Using Visual Response Surface Exploration",
                "DOI": "10.1109/tvcg.2011.248",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.248",
                "FirstPage": 1892,
                "LastPage": 1901,
                "PaperType": "J",
                "Abstract": "In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the \"goodness\" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.",
                "AuthorNamesDeduped": "Thomas Torsney-Weir;Ahmed Saad;Torsten Möller;Hans-Christian Hege;Britta Weber;Jean-Marc Verbavatz;Steven Bergner",
                "AuthorNames": "Thomas Torsney-Weir;Ahmed Saad;Torsten Moller;Hans-Christian Hege;Britta Weber;Jean-Marc Verbavatz;Steven Bergner",
                "AuthorAffiliation": "GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;Zuse Institute Berlin, Berlin, Germany;Zuse Institute Berlin, Berlin, Germany;Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, Germany;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70584;10.1109/tvcg.2008.119;10.1109/tvcg.2010.223;10.1109/tvcg.2010.190;10.1109/visual.1994.346302;10.1109/tvcg.2010.130;10.1109/visual.1993.398859;10.1109/visual.1999.809871;10.1109/tvcg.2011.253;10.1109/visual.2000.885678;10.1109/vast.2010.5651694",
                "AuthorKeywords": "Parameter exploration, Image segmentation, Gaussian Process Model",
                "AminerCitationCount": 146,
                "CitationCountCrossRef": 88,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1607,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1635,
                "i": [
                    1635
                ]
            }
        },
        {
            "name": "Britta Weber",
            "value": 96,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1375,
            "x": 107.3467279929603,
            "y": 355.00236617409377,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Tuner: Principled Parameter finding for Image Segmentation Algorithms Using Visual Response Surface Exploration",
                "DOI": "10.1109/tvcg.2011.248",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.248",
                "FirstPage": 1892,
                "LastPage": 1901,
                "PaperType": "J",
                "Abstract": "In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the \"goodness\" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.",
                "AuthorNamesDeduped": "Thomas Torsney-Weir;Ahmed Saad;Torsten Möller;Hans-Christian Hege;Britta Weber;Jean-Marc Verbavatz;Steven Bergner",
                "AuthorNames": "Thomas Torsney-Weir;Ahmed Saad;Torsten Moller;Hans-Christian Hege;Britta Weber;Jean-Marc Verbavatz;Steven Bergner",
                "AuthorAffiliation": "GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;Zuse Institute Berlin, Berlin, Germany;Zuse Institute Berlin, Berlin, Germany;Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, Germany;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70584;10.1109/tvcg.2008.119;10.1109/tvcg.2010.223;10.1109/tvcg.2010.190;10.1109/visual.1994.346302;10.1109/tvcg.2010.130;10.1109/visual.1993.398859;10.1109/visual.1999.809871;10.1109/tvcg.2011.253;10.1109/visual.2000.885678;10.1109/vast.2010.5651694",
                "AuthorKeywords": "Parameter exploration, Image segmentation, Gaussian Process Model",
                "AminerCitationCount": 146,
                "CitationCountCrossRef": 88,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1607,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1635,
                "i": [
                    1635
                ]
            }
        },
        {
            "name": "Jean-Marc Verbavatz",
            "value": 96,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1376,
            "x": -319.07070951027964,
            "y": -189.32480643752626,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Tuner: Principled Parameter finding for Image Segmentation Algorithms Using Visual Response Surface Exploration",
                "DOI": "10.1109/tvcg.2011.248",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.248",
                "FirstPage": 1892,
                "LastPage": 1901,
                "PaperType": "J",
                "Abstract": "In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the \"goodness\" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.",
                "AuthorNamesDeduped": "Thomas Torsney-Weir;Ahmed Saad;Torsten Möller;Hans-Christian Hege;Britta Weber;Jean-Marc Verbavatz;Steven Bergner",
                "AuthorNames": "Thomas Torsney-Weir;Ahmed Saad;Torsten Moller;Hans-Christian Hege;Britta Weber;Jean-Marc Verbavatz;Steven Bergner",
                "AuthorAffiliation": "GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada;Zuse Institute Berlin, Berlin, Germany;Zuse Institute Berlin, Berlin, Germany;Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, Germany;GrUVi (Graphics, Usability, and Visualization Laboratory), Simon Fraser University, Burnaby, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70584;10.1109/tvcg.2008.119;10.1109/tvcg.2010.223;10.1109/tvcg.2010.190;10.1109/visual.1994.346302;10.1109/tvcg.2010.130;10.1109/visual.1993.398859;10.1109/visual.1999.809871;10.1109/tvcg.2011.253;10.1109/visual.2000.885678;10.1109/vast.2010.5651694",
                "AuthorKeywords": "Parameter exploration, Image segmentation, Gaussian Process Model",
                "AminerCitationCount": 146,
                "CitationCountCrossRef": 88,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1607,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1635,
                "i": [
                    1635
                ]
            }
        },
        {
            "name": "Mathias Hummel",
            "value": 88,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1377,
            "x": 363.29177059302833,
            "y": -75.95452204696215,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Comparative Visual Analysis of Lagrangian Transport in CFD Ensembles",
                "DOI": "10.1109/tvcg.2013.141",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.141",
                "FirstPage": 2743,
                "LastPage": 2752,
                "PaperType": "J",
                "Abstract": "Sets of simulation runs based on parameter and model variation, so-called ensembles, are increasingly used to model physical behaviors whose parameter space is too large or complex to be explored automatically. Visualization plays a key role in conveying important properties in ensembles, such as the degree to which members of the ensemble agree or disagree in their behavior. For ensembles of time-varying vector fields, there are numerous challenges for providing an expressive comparative visualization, among which is the requirement to relate the effect of individual flow divergence to joint transport characteristics of the ensemble. Yet, techniques developed for scalar ensembles are of little use in this context, as the notion of transport induced by a vector field cannot be modeled using such tools. We develop a Lagrangian framework for the comparison of flow fields in an ensemble. Our techniques evaluate individual and joint transport variance and introduce a classification space that facilitates incorporation of these properties into a common ensemble visualization. Variances of Lagrangian neighborhoods are computed using pathline integration and Principal Components Analysis. This allows for an inclusion of uncertainty measurements into the visualization and analysis approach. Our results demonstrate the usefulness and expressiveness of the presented method on several practical examples.",
                "AuthorNamesDeduped": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorNames": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorAffiliation": "University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA;University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA",
                "InternalReferences": "0.1109/tvcg.2011.203;10.1109/visual.1996.568116;10.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2007.70551",
                "AuthorKeywords": "Ensemble, flow field, time-varying, comparison, visualization, Lagrangian, variance, principal components analysis",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 793,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1338,
                "i": [
                    1338
                ]
            }
        },
        {
            "name": "Harald Obermaier",
            "value": 76,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1378,
            "x": -216.65209965928108,
            "y": 301.5159493513153,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Comparative Visual Analysis of Lagrangian Transport in CFD Ensembles",
                "DOI": "10.1109/tvcg.2013.141",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.141",
                "FirstPage": 2743,
                "LastPage": 2752,
                "PaperType": "J",
                "Abstract": "Sets of simulation runs based on parameter and model variation, so-called ensembles, are increasingly used to model physical behaviors whose parameter space is too large or complex to be explored automatically. Visualization plays a key role in conveying important properties in ensembles, such as the degree to which members of the ensemble agree or disagree in their behavior. For ensembles of time-varying vector fields, there are numerous challenges for providing an expressive comparative visualization, among which is the requirement to relate the effect of individual flow divergence to joint transport characteristics of the ensemble. Yet, techniques developed for scalar ensembles are of little use in this context, as the notion of transport induced by a vector field cannot be modeled using such tools. We develop a Lagrangian framework for the comparison of flow fields in an ensemble. Our techniques evaluate individual and joint transport variance and introduce a classification space that facilitates incorporation of these properties into a common ensemble visualization. Variances of Lagrangian neighborhoods are computed using pathline integration and Principal Components Analysis. This allows for an inclusion of uncertainty measurements into the visualization and analysis approach. Our results demonstrate the usefulness and expressiveness of the presented method on several practical examples.",
                "AuthorNamesDeduped": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorNames": "Mathias Hummel;Harald Obermaier;Christoph Garth;Kenneth I. Joy",
                "AuthorAffiliation": "University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA;University of Kaiserslautern, Germany;Institute for Data Analysis and Visualization (IDAV), University of California, Davis, USA",
                "InternalReferences": "0.1109/tvcg.2011.203;10.1109/visual.1996.568116;10.1109/tvcg.2010.190;10.1109/tvcg.2010.181;10.1109/tvcg.2007.70551",
                "AuthorKeywords": "Ensemble, flow field, time-varying, comparison, visualization, Lagrangian, variance, principal components analysis",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 793,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1338,
                "i": [
                    1338
                ]
            }
        },
        {
            "name": "Lijing Lin",
            "value": 9,
            "numPapers": 26,
            "cluster": "1",
            "visible": 1,
            "index": 1379,
            "x": -43.9345086494986,
            "y": -368.80856680631365,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "E-Map: A Visual Analytics Approach for Exploring Significant Event Evolutions in Social Media",
                "DOI": "10.1109/vast.2017.8585638",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585638",
                "FirstPage": 36,
                "LastPage": 47,
                "PaperType": "C",
                "Abstract": "Significant events are often discussed and spread through social media, involving many people. Reposting activities and opinions expressed in social media offer good opportunities to understand the evolution of events. However, the dynamics of reposting activities and the diversity of user comments pose challenges to understand event-related social media data. We propose E-Map, a visual analytics approach that uses map-like visualization tools to help multi-faceted analysis of social media data on a significant event and in-depth understanding of the development of the event. E-Map transforms extracted keywords, messages, and reposting behaviors into map features such as cities, towns, and rivers to build a structured and semantic space for users to explore. It also visualizes complex posting and reposting behaviors as simple trajectories and connections that can be easily followed. By supporting multi-level spatial temporal exploration, E-Map helps to reveal the patterns of event development and key players in an event, disclosing the ways they shape and affect the development of the event. Two cases analysing real-world events confirm the capacities of E-Map in facilitating the analysis of event evolution with social media data.",
                "AuthorNamesDeduped": "Siming Chen 0001;Shuai Chen 0001;Lijing Lin;Xiaoru Yuan;Jie Liang 0004;Xiaolong Zhang 0001",
                "AuthorNames": "Siming Chen;Shuai Chen;Lijing Lin;Xiaoru Yuan;Jie Liang;Xiaolong Zhang",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education) and School of EECS, Peking University, China;Faculty of Engineer and Information Technology, The University of Technology, Sydney, Australia;College of Information Sciences and Technology, Pennsylvania State University, USA",
                "InternalReferences": "0.1109/vast.2008.4677356;10.1109/tvcg.2013.186;10.1109/tvcg.2011.185;10.1109/tvcg.2012.291;10.1109/vast.2012.6400557;10.1109/vast.2016.7883510;10.1109/tvcg.2015.2467619;10.1109/tvcg.2014.2346433;10.1109/tvcg.2010.129;10.1109/vast.2012.6400485;10.1109/tvcg.2013.162;10.1109/infvis.2005.1532126;10.1109/tvcg.2007.70582;10.1109/tvcg.2016.2598590;10.1109/tvcg.2015.2467554;10.1109/vast.2015.7347632;10.1109/tvcg.2013.196;10.1109/vast.2011.6102456;10.1109/tvcg.2016.2598919;10.1109/tvcg.2009.171;10.1109/vast.2016.7883511;10.1109/tvcg.2015.2467691;10.1109/tvcg.2014.2346920;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346922;10.1109/vast.2014.7042496;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Social Media,Event Analysis,Map-like Visual Metaphor,Spatial Temporal Visual Analytics",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 994,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 871,
                "i": [
                    871
                ]
            }
        },
        {
            "name": "Daniel Wigdor",
            "value": 47,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 1380,
            "x": 281.6245660778016,
            "y": 242.3584200754947,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "PhenoStacks: Cross-Sectional Cohort Phenotype Comparison Visualizations",
                "DOI": "10.1109/tvcg.2016.2598469",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598469",
                "FirstPage": 191,
                "LastPage": 200,
                "PaperType": "J",
                "Abstract": "Cross-sectional phenotype studies are used by genetics researchers to better understand how phenotypes vary across patients with genetic diseases, both within and between cohorts. Analyses within cohorts identify patterns between phenotypes and patients (e.g., co-occurrence) and isolate special cases (e.g., potential outliers). Comparing the variation of phenotypes between two cohorts can help distinguish how different factors affect disease manifestation (e.g., causal genes, age of onset, etc.). PhenoStacks is a novel visual analytics tool that supports the exploration of phenotype variation within and between cross-sectional patient cohorts. By leveraging the semantic hierarchy of the Human Phenotype Ontology, phenotypes are presented in context, can be grouped and clustered, and are summarized via overviews to support the exploration of phenotype distributions. The design of PhenoStacks was motivated by formative interviews with genetics researchers: we distil high-level tasks, present an algorithm for simplifying ontology topologies for visualization, and report the results of a deployment evaluation with four expert genetics researchers. The results suggest that PhenoStacks can help identify phenotype patterns, investigate data quality issues, and inform data collection design.",
                "AuthorNamesDeduped": "Michael Glueck;Alina Gvozdik;Fanny Chevalier;Azam Khan;Michael Brudno;Daniel Wigdor",
                "AuthorNames": "Michael Glueck;Alina Gvozdik;Fanny Chevalier;Azam Khan;Michael Brudno;Daniel Wigdor",
                "AuthorAffiliation": "Autodesk Research, University of Toronto;University of Toronto;Inria;Autodesk Research;Hospital for Sick Children, University of Toronto, Toronto;University of Toronto",
                "InternalReferences": "0.1109/tvcg.2014.2346248;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346279;10.1109/tvcg.2009.167;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467622;10.1109/tvcg.2015.2467733;10.1109/tvcg.2009.116",
                "AuthorKeywords": "Cross-sectional cohort analysis;Phenotypes;Human Phenotype Ontology (HPO)",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 19,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 872,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 991,
                "i": [
                    991
                ]
            }
        },
        {
            "name": "Michael Hund",
            "value": 19,
            "numPapers": 30,
            "cluster": "3",
            "visible": 1,
            "index": 1381,
            "x": -371.50643279266427,
            "y": 11.531278926018237,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration",
                "DOI": "10.1109/tvcg.2016.2598467",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598467",
                "FirstPage": 31,
                "LastPage": 40,
                "PaperType": "J",
                "Abstract": "In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analyzes exist, there is no evidence how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors-27 existing ones and three new descriptors that we designed specifically for MAGNOSTICS-with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios; exploring a large collection of matrices and analyzing temporal networks.",
                "AuthorNamesDeduped": "Michael Behrisch 0001;Benjamin Bach;Michael Hund;Michael Delz;Laura von Rüden;Jean-Daniel Fekete;Tobias Schreck",
                "AuthorNames": "Michael Behrisch;Benjamin Bach;Michael Hund;Michael Delz;Laura Von Rüden;Jean-Daniel Fekete;Tobias Schreck",
                "AuthorAffiliation": "University of Konstanz, Germany;Microsoft Research-Inria Joint Centre, Saclay, France;University of Konstanz, Germany;University of Konstanz, Germany;Capgemini, RWTH Aachen University;Inria, Saclay, France;Graz University of Technology, Austria",
                "InternalReferences": "0.1109/vast.2012.6400488;10.1109/infvis.2004.15;10.1109/vast.2014.7042480;10.1109/vast.2010.5652433;10.1109/tvcg.2010.184;10.1109/vast.2006.261423;10.1109/tvcg.2007.70582;10.1109/tvcg.2007.70535;10.1109/tvcg.2011.229;10.1109/vast.2010.5652392;10.1109/infvis.2005.1532142;10.1109/infvis.2004.3",
                "AuthorKeywords": "Matrix Visualization;Visual Quality Measures;Quality Metrics;Feature Detection/Selection;Relational Data",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 1031,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 982,
                "i": [
                    982
                ]
            }
        },
        {
            "name": "Hans-Peter Kriegel",
            "value": 61,
            "numPapers": 8,
            "cluster": "3",
            "visible": 1,
            "index": 1382,
            "x": 266.2443228653624,
            "y": -259.54568103122926,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "Recursive pattern: a technique for visualizing very large amounts of data",
                "DOI": "10.1109/visual.1995.485140",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.485140",
                "FirstPage": 279,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "An important goal of visualization technology is to support the exploration and analysis of very large amounts of data. In this paper, we propose a new visualization technique called a 'recursive pattern', which has been developed for visualizing large amounts of multidimensional data. The technique is based on a generic recursive scheme which generalizes a wide range of pixel-oriented arrangements for displaying large data sets. By instantiating the technique with adequate data- and application-dependent parameters, the user may greatly influence the structure of the resulting visualizations. Since the technique uses one pixel for presenting each data value, the amount of data which can be displayed is only limited by the resolution of current display technology and by the limitations of human perceptibility. Beside describing the basic idea of the 'recursive pattern' technique, we provide several examples of useful parameter settings for the various recursion levels. We further show that our 'recursive pattern' technique is particularly advantageous for the large class of data sets which have a natural order according to one dimension (e.g. time series data). We demonstrate the usefulness of our technique by using a stock market application.",
                "AuthorNamesDeduped": "Daniel A. Keim;Mihael Ankerst;Hans-Peter Kriegel",
                "AuthorNames": "D.A. Keim;H.-P. Kriegel;M. Ankerst",
                "AuthorAffiliation": "Institute for Computer Science, University of Munich (LMU), Munich, Germany;Institute for Computer Science, University of Munich (LMU), Munich, Germany;Institute for Computer Science, University of Munich (LMU), Munich, Germany",
                "InternalReferences": "0.1109/visual.1990.146402;10.1109/infvis.1995.528688;10.1109/visual.1991.175809;10.1109/visual.1990.146386;10.1109/visual.1990.146387;10.1109/visual.1990.146389",
                "AuthorKeywords": "Visualizing Large Data Sets, Visualizing Multidimensional and Multivariate Data, Visualizing Large Sequential Data Sets, Recursive Visualization Techniques, Interfaces to Databases",
                "AminerCitationCount": 367,
                "CitationCountCrossRef": 72,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 451,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3416,
                "i": [
                    3416
                ]
            }
        },
        {
            "name": "Georges G. Grinstein",
            "value": 177,
            "numPapers": 11,
            "cluster": "11",
            "visible": 1,
            "index": 1383,
            "x": -21.007282643273744,
            "y": 371.36059844300337,
            "vy": 0,
            "vx": 0,
            "r": 1.2037996545768566,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "DNA visual and analytic data mining",
                "DOI": "10.1109/visual.1997.663916",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663916",
                "FirstPage": 437,
                "LastPage": 441,
                "PaperType": "C",
                "Abstract": "Describes data exploration techniques designed to classify DNA sequences. Several visualization and data mining techniques were used to validate and attempt to discover new methods for distinguishing coding DNA sequences (exons) from non-coding DNA sequences (introns). The goal of the data mining was to see whether some other, possibly non-linear combination of the fundamental position-dependent DNA nucleotide frequency values could be a better predictor than the AMI (average mutual information). We tried many different classification techniques including rule-based classifiers and neural networks. We also used visualization of both the original data and the results of the data mining to help verify patterns and to understand the distinction between the different types of data and classifications. In particular, the visualization helped us develop refinements to neural network classifiers, which have accuracies as high as any known method. Finally, we discuss the interactions between visualization and data mining and suggest an integrated approach.",
                "AuthorNamesDeduped": "Patrick Hoffman;Georges G. Grinstein;Kenneth A. Marx;Ivo Grosse;Eugene Stanley",
                "AuthorNames": "P. Hoffman;G. Grinstein;K. Marx;I. Grosse;E. Stanley",
                "AuthorAffiliation": "Institute for Visualization and Perception ResearchDepartment of Computer Science, University of Massachusetts, Lowell, Lowell, MA, USA;Institute for Visualization and Perception ResearchDepartment of Computer Science, University of Massachusetts, Lowell, Lowell, MA, USA;Center for Intelligent BiomaterialsDepartment of Chemistry, University of Massachusetts, Lowell, Lowell, MA, USA;Center for Polymer Studies and Department of Physics, Boston University, Boston, MA, USA;Center for Polymer Studies and Department of Physics, Boston University, Boston, MA, USA",
                "InternalReferences": "0.1109/visual.1995.485139",
                "AuthorKeywords": null,
                "AminerCitationCount": 439,
                "CitationCountCrossRef": 128,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 1076,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3234,
                "i": [
                    3234
                ]
            }
        },
        {
            "name": "Kenneth A. Marx",
            "value": 101,
            "numPapers": 2,
            "cluster": "11",
            "visible": 1,
            "index": 1384,
            "x": -235.44540788314643,
            "y": -288.12403562830156,
            "vy": 0,
            "vx": 0,
            "r": 1.1162924582613702,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "DNA visual and analytic data mining",
                "DOI": "10.1109/visual.1997.663916",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663916",
                "FirstPage": 437,
                "LastPage": 441,
                "PaperType": "C",
                "Abstract": "Describes data exploration techniques designed to classify DNA sequences. Several visualization and data mining techniques were used to validate and attempt to discover new methods for distinguishing coding DNA sequences (exons) from non-coding DNA sequences (introns). The goal of the data mining was to see whether some other, possibly non-linear combination of the fundamental position-dependent DNA nucleotide frequency values could be a better predictor than the AMI (average mutual information). We tried many different classification techniques including rule-based classifiers and neural networks. We also used visualization of both the original data and the results of the data mining to help verify patterns and to understand the distinction between the different types of data and classifications. In particular, the visualization helped us develop refinements to neural network classifiers, which have accuracies as high as any known method. Finally, we discuss the interactions between visualization and data mining and suggest an integrated approach.",
                "AuthorNamesDeduped": "Patrick Hoffman;Georges G. Grinstein;Kenneth A. Marx;Ivo Grosse;Eugene Stanley",
                "AuthorNames": "P. Hoffman;G. Grinstein;K. Marx;I. Grosse;E. Stanley",
                "AuthorAffiliation": "Institute for Visualization and Perception ResearchDepartment of Computer Science, University of Massachusetts, Lowell, Lowell, MA, USA;Institute for Visualization and Perception ResearchDepartment of Computer Science, University of Massachusetts, Lowell, Lowell, MA, USA;Center for Intelligent BiomaterialsDepartment of Chemistry, University of Massachusetts, Lowell, Lowell, MA, USA;Center for Polymer Studies and Department of Physics, Boston University, Boston, MA, USA;Center for Polymer Studies and Department of Physics, Boston University, Boston, MA, USA",
                "InternalReferences": "0.1109/visual.1995.485139",
                "AuthorKeywords": null,
                "AminerCitationCount": 439,
                "CitationCountCrossRef": 128,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 1076,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3234,
                "i": [
                    3234
                ]
            }
        },
        {
            "name": "Dustin Arendt",
            "value": 6,
            "numPapers": 6,
            "cluster": "1",
            "visible": 1,
            "index": 1385,
            "x": 368.3680667356805,
            "y": 53.431895057327495,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "The \"y\" of it Matters, Even for Storyline Visualization",
                "DOI": "10.1109/vast.2017.8585487",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585487",
                "FirstPage": 81,
                "LastPage": 91,
                "PaperType": "C",
                "Abstract": "Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The original definition of a storyline visualization requires data defined in terms of explicit interaction groups. Relaxing this definition allows storyline visualization to be applied more generally, but this creates questions about how the y-coordinate should encode interactions when this is tied to a particular place or state. To answer this question, we conducted a design study where we considered two layout algorithm design alternatives within a geo-temporal analysis tool written to solve part of the VAST Challenge 2014. We measured the performance of users at overview and detail oriented tasks between two storyline layout algorithms. To the best of our knowledge, this paper is the first work to question the design principles for storyline visualization, and what we found surprised us. For overview tasks with the alternative layout, which has a consistent encoding for the y-coordinate, users performed moderately better (p &lt;; .05) than the storyline layout based on existing design constraints and aesthetic criteria. Our empirical findings were also supported by first-hand accounts taken from interviews with multiple expert analysts, who suggested that the inconsistent meaning of the y-axis was misleading. These findings led us to design a new storyline layout algorithm that is a “best of both” where the y-axis has a consistent meaning but aesthetic criteria (e.g., line crossings) are considered.",
                "AuthorNamesDeduped": "Dustin Arendt;Meg Pirrung",
                "AuthorNames": "Dustin Arendt;Meg Pirrung",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory",
                "InternalReferences": "0.1109/vast.2009.5332593;10.1109/tvcg.2014.2346433;10.1109/tvcg.2013.196;10.1109/tvcg.2013.221;10.1109/tvcg.2012.212;10.1109/tvcg.2015.2468151;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Storyline visualization,layout algorithms,interaction context,geospatial analysis,VAST Challenge",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 547,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 884,
                "i": [
                    884
                ]
            }
        },
        {
            "name": "Meg Pirrung",
            "value": 6,
            "numPapers": 6,
            "cluster": "1",
            "visible": 1,
            "index": 1386,
            "x": -307.8269032471897,
            "y": 209.505602877931,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "The \"y\" of it Matters, Even for Storyline Visualization",
                "DOI": "10.1109/vast.2017.8585487",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585487",
                "FirstPage": 81,
                "LastPage": 91,
                "PaperType": "C",
                "Abstract": "Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The original definition of a storyline visualization requires data defined in terms of explicit interaction groups. Relaxing this definition allows storyline visualization to be applied more generally, but this creates questions about how the y-coordinate should encode interactions when this is tied to a particular place or state. To answer this question, we conducted a design study where we considered two layout algorithm design alternatives within a geo-temporal analysis tool written to solve part of the VAST Challenge 2014. We measured the performance of users at overview and detail oriented tasks between two storyline layout algorithms. To the best of our knowledge, this paper is the first work to question the design principles for storyline visualization, and what we found surprised us. For overview tasks with the alternative layout, which has a consistent encoding for the y-coordinate, users performed moderately better (p &lt;; .05) than the storyline layout based on existing design constraints and aesthetic criteria. Our empirical findings were also supported by first-hand accounts taken from interviews with multiple expert analysts, who suggested that the inconsistent meaning of the y-axis was misleading. These findings led us to design a new storyline layout algorithm that is a “best of both” where the y-axis has a consistent meaning but aesthetic criteria (e.g., line crossings) are considered.",
                "AuthorNamesDeduped": "Dustin Arendt;Meg Pirrung",
                "AuthorNames": "Dustin Arendt;Meg Pirrung",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory",
                "InternalReferences": "0.1109/vast.2009.5332593;10.1109/tvcg.2014.2346433;10.1109/tvcg.2013.196;10.1109/tvcg.2013.221;10.1109/tvcg.2012.212;10.1109/tvcg.2015.2468151;10.1109/tvcg.2014.2346919",
                "AuthorKeywords": "Storyline visualization,layout algorithms,interaction context,geospatial analysis,VAST Challenge",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 547,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 884,
                "i": [
                    884
                ]
            }
        },
        {
            "name": "Fereshteh Amini",
            "value": 34,
            "numPapers": 11,
            "cluster": "5",
            "visible": 1,
            "index": 1387,
            "x": 85.49379107871026,
            "y": -362.5476681582573,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Authoring Data-Driven Videos with DataClips",
                "DOI": "10.1109/tvcg.2016.2598647",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598647",
                "FirstPage": 501,
                "LastPage": 510,
                "PaperType": "J",
                "Abstract": "Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven “clips” together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.",
                "AuthorNamesDeduped": "Fereshteh Amini;Nathalie Henry Riche;Bongshin Lee;Andrés Monroy-Hernández;Pourang Irani",
                "AuthorNames": "Fereshteh Amini;Nathalie Henry Riche;Bongshin Lee;Andres Monroy-Hernandez;Pourang Irani",
                "AuthorAffiliation": "University of Manitoba, Canada;Microsoft;Microsoft;Microsoft;University of Manitoba, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70539;10.1109/tvcg.2008.137;10.1109/vast.2007.4388992;10.1109/tvcg.2013.234;10.1109/tvcg.2013.119;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/vast.2012.6400487;10.1109/tvcg.2011.185",
                "AuthorKeywords": "data video;narrative visualization;data storytelling;authoring tools;visualization systems",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 77,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 2269,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 894,
                "i": [
                    894
                ]
            }
        },
        {
            "name": "Pourang Irani",
            "value": 93,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1388,
            "x": 181.92249247732528,
            "y": 325.1987188333274,
            "vy": 0,
            "vx": 0,
            "r": 1.1070811744386875,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Authoring Data-Driven Videos with DataClips",
                "DOI": "10.1109/tvcg.2016.2598647",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598647",
                "FirstPage": 501,
                "LastPage": 510,
                "PaperType": "J",
                "Abstract": "Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven “clips” together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.",
                "AuthorNamesDeduped": "Fereshteh Amini;Nathalie Henry Riche;Bongshin Lee;Andrés Monroy-Hernández;Pourang Irani",
                "AuthorNames": "Fereshteh Amini;Nathalie Henry Riche;Bongshin Lee;Andres Monroy-Hernandez;Pourang Irani",
                "AuthorAffiliation": "University of Manitoba, Canada;Microsoft;Microsoft;Microsoft;University of Manitoba, Canada",
                "InternalReferences": "0.1109/tvcg.2007.70539;10.1109/tvcg.2008.137;10.1109/vast.2007.4388992;10.1109/tvcg.2013.234;10.1109/tvcg.2013.119;10.1109/tvcg.2011.255;10.1109/tvcg.2010.179;10.1109/vast.2012.6400487;10.1109/tvcg.2011.185",
                "AuthorKeywords": "data video;narrative visualization;data storytelling;authoring tools;visualization systems",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 77,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 2269,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 894,
                "i": [
                    894
                ]
            }
        },
        {
            "name": "Mengdie Hu",
            "value": 18,
            "numPapers": 22,
            "cluster": "1",
            "visible": 1,
            "index": 1389,
            "x": -353.93994771854824,
            "y": -116.94662632582163,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Visualizing Social Media Content with SentenTree",
                "DOI": "10.1109/tvcg.2016.2598590",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598590",
                "FirstPage": 621,
                "LastPage": 630,
                "PaperType": "J",
                "Abstract": "We introduce SentenTree, a novel technique for visualizing the content of unstructured social media text. SentenTree displays frequent sentence patterns abstracted from a corpus of social media posts. The technique employs design ideas from word clouds and the Word Tree, but overcomes a number of limitations of both those visualizations. SentenTree displays a node-link diagram where nodes are words and links indicate word co-occurrence within the same sentence. The spatial arrangement of nodes gives cues to the syntactic ordering of words while the size of nodes gives cues to their frequency of occurrence. SentenTree can help people gain a rapid understanding of key concepts and opinions in a large social media text collection. It is implemented as a lightweight application that runs in the browser.",
                "AuthorNamesDeduped": "Mengdie Hu;Krist Wongsuphasawat;John T. Stasko",
                "AuthorNames": "Mengdie Hu;Krist Wongsuphasawat;John Stasko",
                "AuthorAffiliation": "Twitter Inc. and Georgia Institute of Technology;Twitter Inc.;Georgia Institute of Technology",
                "InternalReferences": "0.1109/tvcg.2009.171;10.1109/tvcg.2008.172;10.1109/vast.2009.5333443;10.1109/infvis.1995.528686;10.1109/tvcg.2010.154;10.1109/vast.2012.6400485;10.1109/tvcg.2011.179;10.1109/tvcg.2010.194;10.1109/tvcg.2013.221;10.1109/tvcg.2006.156;10.1109/tvcg.2009.165;10.1109/vast.2011.6102488;10.1109/tvcg.2014.2346920;10.1109/tvcg.2015.2467991;10.1109/tvcg.2011.239",
                "AuthorKeywords": "text visualization;social media;natural language processing;word cloud;Twitter",
                "AminerCitationCount": 58,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2798,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 905,
                "i": [
                    905
                ]
            }
        },
        {
            "name": "Caroline Ziemkiewicz",
            "value": 193,
            "numPapers": 37,
            "cluster": "5",
            "visible": 1,
            "index": 1390,
            "x": 340.1029303740974,
            "y": -152.9051887639915,
            "vy": 0,
            "vx": 0,
            "r": 1.2222222222222223,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Improving Bayesian Reasoning: The Effects of Phrasing, Visualization, and Spatial Ability",
                "DOI": "10.1109/tvcg.2015.2467758",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467758",
                "FirstPage": 529,
                "LastPage": 538,
                "PaperType": "J",
                "Abstract": "Decades of research have repeatedly shown that people perform poorly at estimating and understanding conditional probabilities that are inherent in Bayesian reasoning problems. Yet in the medical domain, both physicians and patients make daily, life-critical judgments based on conditional probability. Although there have been a number of attempts to develop more effective ways to facilitate Bayesian reasoning, reports of these findings tend to be inconsistent and sometimes even contradictory. For instance, the reported accuracies for individuals being able to correctly estimate conditional probability range from 6% to 62%. In this work, we show that problem representation can significantly affect accuracies. By controlling the amount of information presented to the user, we demonstrate how text and visualization designs can increase overall accuracies to as high as 77%. Additionally, we found that for users with high spatial ability, our designs can further improve their accuracies to as high as 100%. By and large, our findings provide explanations for the inconsistent reports on accuracy in Bayesian reasoning tasks and show a significant improvement over existing methods. We believe that these findings can have immediate impact on risk communication in health-related fields.",
                "AuthorNamesDeduped": "Alvitta Ottley;Evan M. Peck;Lane T. Harrison;Daniel Afergan;Caroline Ziemkiewicz;Holly A. Taylor;Paul K. J. Han;Remco Chang",
                "AuthorNames": "Alvitta Ottley;Evan M. Peck;Lane T. Harrison;Daniel Afergan;Caroline Ziemkiewicz;Holly A. Taylor;Paul K. J. Han;Remco Chang",
                "AuthorAffiliation": "Tufts University;Bucknell University;Tufts University;Tufts University;Tufts University and Aptima Inc.;Tufts University;Maine Medical Center and Tufts Medical School;Tufts University",
                "InternalReferences": "0.1109/tvcg.2014.2346575;10.1109/vast.2010.5653587;10.1109/tvcg.2011.255;10.1109/tvcg.2013.119;10.1109/tvcg.2012.199;10.1109/tvcg.2010.179;10.1109/visual.2005.1532836",
                "AuthorKeywords": "Bayesian Reasoning, Visualization, Spatial Ability, Individual Differences",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 65,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1326,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1011,
                "i": [
                    1011
                ]
            }
        },
        {
            "name": "Mona Hosseinkhani Loorak",
            "value": 54,
            "numPapers": 26,
            "cluster": "3",
            "visible": 1,
            "index": 1391,
            "x": -147.54837251825936,
            "y": 342.60688517193137,
            "vy": 0,
            "vx": 0,
            "r": 1.0621761658031088,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients",
                "DOI": "10.1109/tvcg.2015.2467325",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467325",
                "FirstPage": 409,
                "LastPage": 418,
                "PaperType": "J",
                "Abstract": "We present TimeSpan, an exploratory visualization tool designed to gain a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool to help improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients. Every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency of stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time-ultimately, to save lives and improve quality of life after stroke. First, we interviewed eight domain experts, and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks which a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan incorporates factors from stacked bar graphs, line charts, histograms, and a matrix visualization to create an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons we learned from abstracting the tasks and iteratively designing TimeSpan.",
                "AuthorNamesDeduped": "Mona Hosseinkhani Loorak;Charles Perin;Noreen Kamal;Michael D. Hill;Sheelagh Carpendale",
                "AuthorNames": "Mona Hosseinkhani Loorak;Charles Perin;Noreen Kamal;Michael Hill;Sheelagh Carpendale",
                "AuthorAffiliation": "Department of Computer Science, University of Calgary;Department of Computer Science, University of Calgary;Department of Clinical Neurosciences, University of Calgary;Department of Clinical Neurosciences, University of Calgary;Department of Computer Science, University of Calgary",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast.2006.261421;10.1109/tvcg.2014.2346682;10.1109/tvcg.2013.200;10.1109/tvcg.2014.2346279;10.1109/infvis.2005.1532152;10.1109/tvcg.2009.187;10.1109/tvcg.2012.225;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Multi-dimensional data, Temporal event sequences, Electronic health records",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 1764,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1016,
                "i": [
                    1016
                ]
            }
        },
        {
            "name": "Florian Ferstl",
            "value": 93,
            "numPapers": 33,
            "cluster": "11",
            "visible": 1,
            "index": 1392,
            "x": -122.67410383787345,
            "y": -352.42171364371785,
            "vy": 0,
            "vx": 0,
            "r": 1.1070811744386875,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Streamline Variability Plots for Characterizing the Uncertainty in Vector Field Ensembles",
                "DOI": "10.1109/tvcg.2015.2467204",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467204",
                "FirstPage": 767,
                "LastPage": 776,
                "PaperType": "J",
                "Abstract": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
                "AuthorNamesDeduped": "Florian Ferstl;Kai Bürger;Rüdiger Westermann",
                "AuthorNames": "Florian Ferstl;Kai Bürger;Rüdiger Westermann",
                "AuthorAffiliation": "Computer Graphics and Visualization Group, Technische Universität München;Computer Graphics and Visualization Group, Technische Universität München;Computer Graphics and Visualization Group, Technische Universität München",
                "InternalReferences": "0.1109/tvcg.2007.70595;10.1109/visual.2000.885715;10.1109/visual.1999.809863;10.1109/tvcg.2013.141;10.1109/tvcg.2007.70518;10.1109/tvcg.2014.2346455;10.1109/visual.2005.1532779;10.1109/tvcg.2010.181;10.1109/visual.1999.809865;10.1109/tvcg.2013.143",
                "AuthorKeywords": "Ensemble visualization, uncertainty visualization, flow visualization, streamlines, statistical modeling",
                "AminerCitationCount": 109,
                "CitationCountCrossRef": 79,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 1615,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1041,
                "i": [
                    1041
                ]
            }
        },
        {
            "name": "Lonni Besançon",
            "value": 18,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 1393,
            "x": 328.63145037329355,
            "y": 177.06318032144765,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "Vis",
                "Year": 2022,
                "Title": "RoboHapalytics: A Robot Assisted Haptic Controller for Immersive Analytics",
                "DOI": "10.1109/tvcg.2022.3209433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2022.3209433",
                "FirstPage": 451,
                "LastPage": 461,
                "PaperType": "J",
                "Abstract": "Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either: been limited in their range and resolution; were spatially fixed; or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user's fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider has similar accuracy to the physical slider but is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use-cases: selection in a time-series chart; interactive slicing of CT scans; and finally exploration of a scatter plot depicting time-varying socio-economic data.",
                "AuthorNamesDeduped": "Shaozhang Dai;Jim Smiley;Tim Dwyer;Barrett Ens;Lonni Besançon",
                "AuthorNames": "Shaozhang Dai;Jim Smiley;Tim Dwyer;Barrett Ens;Lonni Besancon",
                "AuthorAffiliation": "Monash University, Australia;Monash University, Australia;Monash University, Australia;Monash University, Australia;Linköping University, Sweden",
                "InternalReferences": "0.1109/tvcg.2016.2599217;10.1109/tvcg.2013.121;10.1109/tvcg.2014.2346250;10.1109/visual.2004.47",
                "AuthorKeywords": "Haptic Feedback,Human Centred Interaction,Robotic Arm",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 4,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 506,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 191,
                "i": [
                    191
                ]
            }
        },
        {
            "name": "Paul Issartel",
            "value": 0,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1394,
            "x": -362.0569029507415,
            "y": 91.4592752306585,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Hybrid Tactile/Tangible Interaction for 3D Data Exploration",
                "DOI": "10.1109/tvcg.2016.2599217",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599217",
                "FirstPage": 881,
                "LastPage": 890,
                "PaperType": "J",
                "Abstract": "We present the design and evaluation of an interface that combines tactile and tangible paradigms for 3D visualization. While studies have demonstrated that both tactile and tangible input can be efficient for a subset of 3D manipulation tasks, we reflect here on the possibility to combine the two complementary input types. Based on a field study and follow-up interviews, we present a conceptual framework of the use of these different interaction modalities for visualization both separately and combined-focusing on free exploration as well as precise control. We present a prototypical application of a subset of these combined mappings for fluid dynamics data visualization using a portable, position-aware device which offers both tactile input and tangible sensing. We evaluate our approach with domain experts and report on their qualitative feedback.",
                "AuthorNamesDeduped": "Lonni Besançon;Paul Issartel;Mehdi Ammi;Tobias Isenberg 0001",
                "AuthorNames": "Lonni Besançon;Paul Issartel;Mehdi Ammi;Tobias Isenberg",
                "AuthorAffiliation": "Inria Saclay, Univ. Paris Saclay, France;Univ. Paris Saclay, France;Limsi/CNRS, France;Inria, France",
                "InternalReferences": "0.1109/tvcg.2013.121;10.1109/tvcg.2010.164;10.1109/visual.2004.47;10.1109/tvcg.2007.70515;10.1109/tvcg.2010.157;10.1109/visual.2005.1532846;10.1109/tvcg.2011.224;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467202;10.1109/tvcg.2012.292;10.1109/tvcg.2013.126;10.1109/tvcg.2012.217",
                "AuthorKeywords": "3D data visualization;Interaction;tactile input;tangible input",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1663,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 929,
                "i": [
                    929
                ]
            }
        },
        {
            "name": "Mehdi Ammi",
            "value": 0,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1395,
            "x": 205.26319744925965,
            "y": -312.116996930488,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Hybrid Tactile/Tangible Interaction for 3D Data Exploration",
                "DOI": "10.1109/tvcg.2016.2599217",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599217",
                "FirstPage": 881,
                "LastPage": 890,
                "PaperType": "J",
                "Abstract": "We present the design and evaluation of an interface that combines tactile and tangible paradigms for 3D visualization. While studies have demonstrated that both tactile and tangible input can be efficient for a subset of 3D manipulation tasks, we reflect here on the possibility to combine the two complementary input types. Based on a field study and follow-up interviews, we present a conceptual framework of the use of these different interaction modalities for visualization both separately and combined-focusing on free exploration as well as precise control. We present a prototypical application of a subset of these combined mappings for fluid dynamics data visualization using a portable, position-aware device which offers both tactile input and tangible sensing. We evaluate our approach with domain experts and report on their qualitative feedback.",
                "AuthorNamesDeduped": "Lonni Besançon;Paul Issartel;Mehdi Ammi;Tobias Isenberg 0001",
                "AuthorNames": "Lonni Besançon;Paul Issartel;Mehdi Ammi;Tobias Isenberg",
                "AuthorAffiliation": "Inria Saclay, Univ. Paris Saclay, France;Univ. Paris Saclay, France;Limsi/CNRS, France;Inria, France",
                "InternalReferences": "0.1109/tvcg.2013.121;10.1109/tvcg.2010.164;10.1109/visual.2004.47;10.1109/tvcg.2007.70515;10.1109/tvcg.2010.157;10.1109/visual.2005.1532846;10.1109/tvcg.2011.224;10.1109/tvcg.2013.124;10.1109/tvcg.2015.2467202;10.1109/tvcg.2012.292;10.1109/tvcg.2013.126;10.1109/tvcg.2012.217",
                "AuthorKeywords": "3D data visualization;Interaction;tactile input;tangible input",
                "AminerCitationCount": 64,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 89,
                "DownloadsXplore": 1663,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 929,
                "i": [
                    929
                ]
            }
        },
        {
            "name": "Rocco Gasteiger",
            "value": 76,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1396,
            "x": 59.49861506391885,
            "y": 368.9307723753545,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Semi-Automatic Vortex Extraction in 4D PC-MRI Cardiac Blood Flow Data using Line Predicates",
                "DOI": "10.1109/tvcg.2013.189",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.189",
                "FirstPage": 2773,
                "LastPage": 2782,
                "PaperType": "J",
                "Abstract": "Cardiovascular diseases (CVD) are the leading cause of death worldwide. Their initiation and evolution depends strongly on the blood flow characteristics. In recent years, advances in 4D PC-MRI acquisition enable reliable and time-resolved 3D flow measuring, which allows a qualitative and quantitative analysis of the patient-specific hemodynamics. Currently, medical researchers investigate the relation between characteristic flow patterns like vortices and different pathologies. The manual extraction and evaluation is tedious and requires expert knowledge. Standardized, (semi-)automatic and reliable techniques are necessary to make the analysis of 4D PC-MRI applicable for the clinical routine. In this work, we present an approach for the extraction of vortex flow in the aorta and pulmonary artery incorporating line predicates. We provide an extensive comparison of existent vortex extraction methods to determine the most suitable vortex criterion for cardiac blood flow and apply our approach to ten datasets with different pathologies like coarctations, Tetralogy of Fallot and aneurysms. For two cases we provide a detailed discussion how our results are capable to complement existent diagnosis information. To ensure real-time feedback for the domain experts we implement our method completely on the GPU.",
                "AuthorNamesDeduped": "Benjamin Köhler 0001;Rocco Gasteiger;Uta Preim;Holger Theisel;Matthias Gutberlet;Bernhard Preim",
                "AuthorNames": "Benjamin Köhler;Rocco Gasteiger;Uta Preim;Holger Theisel;Matthias Gutberlet;Bernhard Preim",
                "AuthorAffiliation": "Bernhard Preims Visualization group, Germany;Bernhard Preims Visualization group, Germany;Matthias Gutberlets group, USA;Visual Computing group, department of Simulation and Graphics, University of Magdeburg, Germany;Department of diagnostic and interventional radiology, Herzzentrum, Leipzig, Germany;Visual Computing group, department of Simulation and Graphics, University of Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2011.260;10.1109/visual.1999.809869;10.1109/tvcg.2010.153;10.1109/visual.1999.809896;10.1109/tvcg.2011.243;10.1109/tvcg.2007.70545;10.1109/visual.2004.99;10.1109/tvcg.2010.173",
                "AuthorKeywords": "4D pc-mri, cardiac blood flow, hemodynamics, line predicates, vortex extraction",
                "AminerCitationCount": 94,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 687,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1339,
                "i": [
                    1339
                ]
            }
        },
        {
            "name": "Mathias Neugebauer",
            "value": 41,
            "numPapers": 5,
            "cluster": "6",
            "visible": 1,
            "index": 1397,
            "x": -293.1864985821051,
            "y": -231.9303280064193,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "The FLOWLENS: A Focus-and-Context Visualization Approach for Exploration of Blood Flow in Cerebral Aneurysms",
                "DOI": "10.1109/tvcg.2011.243",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.243",
                "FirstPage": 2183,
                "LastPage": 2192,
                "PaperType": "J",
                "Abstract": "Blood flow and derived data are essential to investigate the initiation and progression of cerebral aneurysms as well as their risk of rupture. An effective visual exploration of several hemodynamic attributes like the wall shear stress (WSS) and the inflow jet is necessary to understand the hemodynamics. Moreover, the correlation between focus-and-context attributes is of particular interest. An expressive visualization of these attributes and anatomic information requires appropriate visualization techniques to minimize visual clutter and occlusions. We present the FLOWLENS as a focus-and-context approach that addresses these requirements. We group relevant hemodynamic attributes to pairs of focus-and-context attributes and assign them to different anatomic scopes. For each scope, we propose several FLOWLENS visualization templates to provide a flexible visual filtering of the involved hemodynamic pairs. A template consists of the visualization of the focus attribute and the additional depiction of the context attribute inside the lens. Furthermore, the FLOWLENS supports local probing and the exploration of attribute changes over time. The FLOWLENS minimizes visual cluttering, occlusions, and provides a flexible exploration of a region of interest. We have applied our approach to seven representative datasets, including steady and unsteady flow data from CFD simulations and 4D PC-MRI measurements. Informal user interviews with three domain experts confirm the usefulness of our approach.",
                "AuthorNamesDeduped": "Rocco Gasteiger;Mathias Neugebauer;Oliver Beuing;Bernhard Preim",
                "AuthorNames": "Rocco Gasteiger;Mathias Neugebauer;Oliver Beuing;Bernhard Preim",
                "AuthorAffiliation": "Department of Simulation and Graphics, University of Magdeburg, Germany;Department of Simulation and Graphics, University of Magdeburg, Germany;Department of Neuroradiology, University Hospital Magdeburg, Germany;Department of Simulation and Graphics, University of Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2010.166;10.1109/tvcg.2009.138;10.1109/tvcg.2010.153;10.1109/tvcg.2006.124;10.1109/tvcg.2009.126;10.1109/visual.2005.1532818",
                "AuthorKeywords": "Flow Visualization, Focus-and-Context, Illustrative Rendering, Aneurysm",
                "AminerCitationCount": 102,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 908,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1642,
                "i": [
                    1642
                ]
            }
        },
        {
            "name": "Shigeo Takahashi",
            "value": 95,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1398,
            "x": 372.9866612868811,
            "y": -27.036096280073508,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "A feature-driven approach to locating optimal viewpoints for volume visualization",
                "DOI": "10.1109/visual.2005.1532834",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532834",
                "FirstPage": 495,
                "LastPage": 502,
                "PaperType": "C",
                "Abstract": "Optimal viewpoint selection is an important task because it considerably influences the amount of information contained in the 2D projected images of 3D objects, and thus dominates their first impressions from a psychological point of view. Although several methods have been proposed that calculate the optimal positions of viewpoints especially for 3D surface meshes, none has been done for solid objects such as volumes. This paper presents a new method of locating such optimal viewpoints when visualizing volumes using direct volume rendering. The major idea behind our method is to decompose an entire volume into a set of feature components, and then find a globally optimal viewpoint by finding a compromise between locally optimal viewpoints for the components. As the feature components, the method employs interval volumes and their combinations that characterize the topological transitions of isosurfaces according to the scalar field. Furthermore, opacity transfer functions are also utilized to assign different weights to the decomposed components so that users can emphasize features of specific interest in the volumes. Several examples of volume datasets together with their optimal positions of viewpoints are exhibited in order to demonstrate that the method can effectively guide naive users to find optimal projections of volumes.",
                "AuthorNamesDeduped": "Shigeo Takahashi;Issei Fujishiro;Yuriko Takeshima;Tomoyuki Nishita",
                "AuthorNames": "S. Takahashi;I. Fujishiro;Y. Takeshima;T. Nishita",
                "AuthorAffiliation": "University of Tokyo, Japan;University of Tohoku, Japan;Tohoku University;University of Tokyo, Japan",
                "InternalReferences": "0.1109/visual.1995.480789;10.1109/visual.2004.96;10.1109/visual.2002.1183774;10.1109/visual.2005.1532833;10.1109/visual.1997.663875;10.1109/visual.2002.1183785",
                "AuthorKeywords": "viewpoint selection, viewpoint entropy, direct volume rendering, interval volumes, level-set graphs",
                "AminerCitationCount": 234,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2372,
                "i": [
                    2372
                ]
            }
        },
        {
            "name": "Tomoyuki Nishita",
            "value": 67,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 1399,
            "x": -256.85791948362703,
            "y": 271.9816339360852,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "A feature-driven approach to locating optimal viewpoints for volume visualization",
                "DOI": "10.1109/visual.2005.1532834",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532834",
                "FirstPage": 495,
                "LastPage": 502,
                "PaperType": "C",
                "Abstract": "Optimal viewpoint selection is an important task because it considerably influences the amount of information contained in the 2D projected images of 3D objects, and thus dominates their first impressions from a psychological point of view. Although several methods have been proposed that calculate the optimal positions of viewpoints especially for 3D surface meshes, none has been done for solid objects such as volumes. This paper presents a new method of locating such optimal viewpoints when visualizing volumes using direct volume rendering. The major idea behind our method is to decompose an entire volume into a set of feature components, and then find a globally optimal viewpoint by finding a compromise between locally optimal viewpoints for the components. As the feature components, the method employs interval volumes and their combinations that characterize the topological transitions of isosurfaces according to the scalar field. Furthermore, opacity transfer functions are also utilized to assign different weights to the decomposed components so that users can emphasize features of specific interest in the volumes. Several examples of volume datasets together with their optimal positions of viewpoints are exhibited in order to demonstrate that the method can effectively guide naive users to find optimal projections of volumes.",
                "AuthorNamesDeduped": "Shigeo Takahashi;Issei Fujishiro;Yuriko Takeshima;Tomoyuki Nishita",
                "AuthorNames": "S. Takahashi;I. Fujishiro;Y. Takeshima;T. Nishita",
                "AuthorAffiliation": "University of Tokyo, Japan;University of Tohoku, Japan;Tohoku University;University of Tokyo, Japan",
                "InternalReferences": "0.1109/visual.1995.480789;10.1109/visual.2004.96;10.1109/visual.2002.1183774;10.1109/visual.2005.1532833;10.1109/visual.1997.663875;10.1109/visual.2002.1183785",
                "AuthorKeywords": "viewpoint selection, viewpoint entropy, direct volume rendering, interval volumes, level-set graphs",
                "AminerCitationCount": 234,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2372,
                "i": [
                    2372
                ]
            }
        },
        {
            "name": "Daniel Jönsson",
            "value": 50,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1400,
            "x": 5.6801102226292395,
            "y": -374.189439118555,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Intuitive Exploration of Volumetric Data Using Dynamic Galleries",
                "DOI": "10.1109/tvcg.2015.2467294",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467294",
                "FirstPage": 896,
                "LastPage": 905,
                "PaperType": "J",
                "Abstract": "In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.",
                "AuthorNamesDeduped": "Daniel Jönsson;Martin Falk;Anders Ynnerman",
                "AuthorNames": "Daniel Jönsson;Martin Falk;Anders Ynnerman",
                "AuthorAffiliation": "Linköping University, Sweden;Linköping University, Sweden;Linköping University, Sweden",
                "InternalReferences": "0.1109/tvcg.2008.162;10.1109/tvcg.2011.261;10.1109/visual.1996.568113;10.1109/tvcg.2012.231;10.1109/tvcg.2010.195;10.1109/tvcg.2011.224;10.1109/tvcg.2006.148;10.1109/tvcg.2011.218",
                "AuthorKeywords": "Transfer function, scalar fields, volume rendering, touch interaction, visualization, user interfaces",
                "AminerCitationCount": 25,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 665,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1062,
                "i": [
                    1062
                ]
            }
        },
        {
            "name": "Victor Guallar",
            "value": 12,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1401,
            "x": 248.66172637786232,
            "y": 279.8523643544595,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Physics-Based Visual Characterization of Molecular Interaction Forces",
                "DOI": "10.1109/tvcg.2016.2598825",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598825",
                "FirstPage": 731,
                "LastPage": 740,
                "PaperType": "J",
                "Abstract": "Molecular simulations are used in many areas of biotechnology, such as drug design and enzyme engineering. Despite the development of automatic computational protocols, analysis of molecular interactions is still a major aspect where human comprehension and intuition are key to accelerate, analyze, and propose modifications to the molecule of interest. Most visualization algorithms help the users by providing an accurate depiction of the spatial arrangement: the atoms involved in inter-molecular contacts. There are few tools that provide visual information on the forces governing molecular docking. However, these tools, commonly restricted to close interaction between atoms, do not consider whole simulation paths, long-range distances and, importantly, do not provide visual cues for a quick and intuitive comprehension of the energy functions (modeling intermolecular interactions) involved. In this paper, we propose visualizations designed to enable the characterization of interaction forces by taking into account several relevant variables such as molecule-ligand distance and the energy function, which is essential to understand binding affinities. We put emphasis on mapping molecular docking paths obtained from Molecular Dynamics or Monte Carlo simulations, and provide time-dependent visualizations for different energy components and particle resolutions: atoms, groups or residues. The presented visualizations have the potential to support domain experts in a more efficient drug or enzyme design process.",
                "AuthorNamesDeduped": "Pedro Hermosilla;Jorge Estrada;Victor Guallar;Timo Ropinski;Àlvar Vinacua;Pere-Pau Vázquez",
                "AuthorNames": "Pedro Hermosilla;Jorge Estrada;Victor Guallar;Timo Ropinski;Àlvar Vinacua;Pere-Pau Vázquez",
                "AuthorAffiliation": "ViRVIG Group, Barcelona Supercomputing Center;Barcelona Supercomputing Center;Barcelona Supercomputing Center;Visual Computing Group, Ulm University;ViRVIG Group, UPC, Barcelona;ViRVIG Group, UPC, Barcelona",
                "InternalReferences": "0.1109/tvcg.2009.168;10.1109/tvcg.2012.282;10.1109/tvcg.2015.2467293;10.1109/tvcg.2007.70578;10.1109/tvcg.2006.115;10.1109/tvcg.2007.70517;10.1109/tvcg.2014.2346403;10.1109/tvcg.2009.157",
                "AuthorKeywords": "Molecular visualization;binding analysis",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 681,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 940,
                "i": [
                    940
                ]
            }
        },
        {
            "name": "Pere-Pau Vázquez",
            "value": 21,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1402,
            "x": -372.5258056035165,
            "y": -38.39953332334935,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Visualization of Large Molecular Trajectories",
                "DOI": "10.1109/tvcg.2018.2864851",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864851",
                "FirstPage": 987,
                "LastPage": 996,
                "PaperType": "J",
                "Abstract": "The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user the information about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is suitable also for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.",
                "AuthorNamesDeduped": "David Duran;Pedro Hermosilla;Timo Ropinski;Barbora Kozlíková;Àlvar Vinacua;Pere-Pau Vázquez",
                "AuthorNames": "David Duran;Pedro Hermosilla;Timo Ropinski;Barbora Kozlíková;Álvar Vinacua;Pere-Pau Vázquez",
                "AuthorAffiliation": "Institut de Robotica i Informatica Industrial, Barcelona, Catalunya, ES;Visual Computing Group, U. Ulm.;Visual Computing Group, U. Ulm.;Masarykova univerzita, Brno, Jihomoravský, CZ;Institut de Robotica i Informatica Industrial, Barcelona, Catalunya, ES;Institut de Robotica i Informatica Industrial, Barcelona, Catalunya, ES",
                "InternalReferences": "0.1109/tvcg.2015.2467434;10.1109/visual.2005.1532792;10.1109/tvcg.2016.2598825;10.1109/tvcg.2016.2598797;10.1109/tvcg.2014.2346574;10.1109/tvcg.2012.225",
                "AuthorKeywords": "Molecular visualization,simulation inspection,long trajectories",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 712,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 706,
                "i": [
                    706
                ]
            }
        },
        {
            "name": "Al Globus",
            "value": 131,
            "numPapers": 2,
            "cluster": "11",
            "visible": 1,
            "index": 1403,
            "x": 300.7346038601814,
            "y": -223.4025470782726,
            "vy": 0,
            "vx": 0,
            "r": 1.1508347725964307,
            "node": {
                "Conference": "Vis",
                "Year": 1991,
                "Title": "A tool for visualizing the topology of three-dimensional vector fields",
                "DOI": "10.1109/visual.1991.175773",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1991.175773",
                "FirstPage": 33,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "A description is given of a software system, TOPO, that numerically analyzes and graphically displays topological aspects of a three-dimensional vector field, v, to produce a single, relatively simple picture that characterizes v. The topology of v considered consists of its critical points (where v=0), their invariant manifolds, and the integral curves connecting these invariant manifolds. The field in the neighborhood of each critical point is approximated by the Taylor expansion. The coefficients of the first nonzero term of the Taylor expansion around a critical point are the 3*3 matrix Delta v. Critical points are classified by examining Delta v's eigenvalues. The eigenvectors of Delta v span the invariant manifolds of the linearized field around a critical point. Curves integrated from initial points on the eigenvectors a small distance from a critical point connect with other critical points (or the boundary) to complete the topology. One class of critical surfaces that is important in computational fluid dynamics is analyzed.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Al Globus;Creon Levit;T. Lasinski",
                "AuthorNames": "A. Globus;C. Levit;T. Lasinski",
                "AuthorAffiliation": "Computer Science Corporation;NASA Ames Research Center;NASA Ames Research Center",
                "InternalReferences": "0.1109/visual.1990.146360;10.1109/visual.1990.146359;10.1109/visual.1991.175773",
                "AuthorKeywords": null,
                "AminerCitationCount": 374,
                "CitationCountCrossRef": 124,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 246,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3645,
                "i": [
                    3645
                ]
            }
        },
        {
            "name": "Creon Levit",
            "value": 129,
            "numPapers": 3,
            "cluster": "11",
            "visible": 1,
            "index": 1404,
            "x": -70.87131976448752,
            "y": 368.00442393378876,
            "vy": 0,
            "vx": 0,
            "r": 1.14853195164076,
            "node": {
                "Conference": "Vis",
                "Year": 1991,
                "Title": "A tool for visualizing the topology of three-dimensional vector fields",
                "DOI": "10.1109/visual.1991.175773",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1991.175773",
                "FirstPage": 33,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "A description is given of a software system, TOPO, that numerically analyzes and graphically displays topological aspects of a three-dimensional vector field, v, to produce a single, relatively simple picture that characterizes v. The topology of v considered consists of its critical points (where v=0), their invariant manifolds, and the integral curves connecting these invariant manifolds. The field in the neighborhood of each critical point is approximated by the Taylor expansion. The coefficients of the first nonzero term of the Taylor expansion around a critical point are the 3*3 matrix Delta v. Critical points are classified by examining Delta v's eigenvalues. The eigenvectors of Delta v span the invariant manifolds of the linearized field around a critical point. Curves integrated from initial points on the eigenvectors a small distance from a critical point connect with other critical points (or the boundary) to complete the topology. One class of critical surfaces that is important in computational fluid dynamics is analyzed.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Al Globus;Creon Levit;T. Lasinski",
                "AuthorNames": "A. Globus;C. Levit;T. Lasinski",
                "AuthorAffiliation": "Computer Science Corporation;NASA Ames Research Center;NASA Ames Research Center",
                "InternalReferences": "0.1109/visual.1990.146360;10.1109/visual.1990.146359;10.1109/visual.1991.175773",
                "AuthorKeywords": null,
                "AminerCitationCount": 374,
                "CitationCountCrossRef": 124,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 246,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3645,
                "i": [
                    3645
                ]
            }
        },
        {
            "name": "T. Lasinski",
            "value": 82,
            "numPapers": 2,
            "cluster": "11",
            "visible": 1,
            "index": 1405,
            "x": -196.39499009367867,
            "y": -319.3415223019141,
            "vy": 0,
            "vx": 0,
            "r": 1.0944156591824985,
            "node": {
                "Conference": "Vis",
                "Year": 1991,
                "Title": "A tool for visualizing the topology of three-dimensional vector fields",
                "DOI": "10.1109/visual.1991.175773",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1991.175773",
                "FirstPage": 33,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "A description is given of a software system, TOPO, that numerically analyzes and graphically displays topological aspects of a three-dimensional vector field, v, to produce a single, relatively simple picture that characterizes v. The topology of v considered consists of its critical points (where v=0), their invariant manifolds, and the integral curves connecting these invariant manifolds. The field in the neighborhood of each critical point is approximated by the Taylor expansion. The coefficients of the first nonzero term of the Taylor expansion around a critical point are the 3*3 matrix Delta v. Critical points are classified by examining Delta v's eigenvalues. The eigenvectors of Delta v span the invariant manifolds of the linearized field around a critical point. Curves integrated from initial points on the eigenvectors a small distance from a critical point connect with other critical points (or the boundary) to complete the topology. One class of critical surfaces that is important in computational fluid dynamics is analyzed.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Al Globus;Creon Levit;T. Lasinski",
                "AuthorNames": "A. Globus;C. Levit;T. Lasinski",
                "AuthorAffiliation": "Computer Science Corporation;NASA Ames Research Center;NASA Ames Research Center",
                "InternalReferences": "0.1109/visual.1990.146360;10.1109/visual.1990.146359;10.1109/visual.1991.175773",
                "AuthorKeywords": null,
                "AminerCitationCount": 374,
                "CitationCountCrossRef": 124,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 246,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3645,
                "i": [
                    3645
                ]
            }
        },
        {
            "name": "Saad Nadeem",
            "value": 5,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1406,
            "x": 360.6558855830376,
            "y": 102.84615789768179,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "C2A: Crowd consensus analytics for virtual colonoscopy",
                "DOI": "10.1109/vast.2016.7883508",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883508",
                "FirstPage": 21,
                "LastPage": 30,
                "PaperType": "C",
                "Abstract": "We present a medical crowdsourcing visual analytics platform called C<sup>2</sup>A to visualize, classify and filter crowdsourced clinical data. More specifically, C<sup>2</sup>A is used to build consensus on a clinical diagnosis by visualizing crowd responses and filtering out anomalous activity. Crowdsourcing medical applications have recently shown promise where the non-expert users (the crowd) were able to achieve accuracy similar to the medical experts. This has the potential to reduce interpretation/reading time and possibly improve accuracy by building a consensus on the findings beforehand and letting the medical experts make the final diagnosis. In this paper, we focus on a virtual colonoscopy (VC) application with the clinical technicians as our target users, and the radiologists acting as consultants and classifying segments as benign or malignant. In particular, C<sup>2</sup>A is used to analyze and explore crowd responses on video segments, created from fly-throughs in the virtual colon. C<sup>2</sup>A provides several interactive visualization components to build crowd consensus on video segments, to detect anomalies in the crowd data and in the VC video segments, and finally, to improve the non-expert user's work quality and performance by A/B testing for the optimal crowdsourcing platform and application-specific parameters. Case studies and domain experts feedback demonstrate the effectiveness of our framework in improving workers' output quality, the potential to reduce the radiologists' interpretation time, and hence, the potential to improve the traditional clinical workflow by marking the majority of the video segments as benign based on the crowd consensus.",
                "AuthorNamesDeduped": "Ji Hwan Park;Saad Nadeem;Seyedkoosha Mirhosseini;Arie E. Kaufman",
                "AuthorNames": "Ji Hwan Park;Saad Nadeem;Seyedkoosha Mirhosseini;Arie Kaufman",
                "AuthorAffiliation": "Stony Brook University;Stony Brook University;Stony Brook University;Stony Brook University",
                "InternalReferences": "0.1109/tvcg.2015.2467196;10.1109/tvcg.2006.112;10.1109/tvcg.2009.171;10.1109/tvcg.2006.158;10.1109/vast.2015.7347631;10.1109/tvcg.2013.164;10.1109/tvcg.2015.2467555",
                "AuthorKeywords": null,
                "AminerCitationCount": 0,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 295,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 998,
                "i": [
                    998
                ]
            }
        },
        {
            "name": "Wei Zeng 0002",
            "value": 46,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1407,
            "x": -335.52722098365086,
            "y": 167.8436295454443,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Colon Flattening Using Heat Diffusion Riemannian Metric",
                "DOI": "10.1109/tvcg.2013.139",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.139",
                "FirstPage": 2848,
                "LastPage": 2857,
                "PaperType": "J",
                "Abstract": "We propose a new colon flattening algorithm that is efficient, shape-preserving, and robust to topological noise. Unlike previous approaches, which require a mandatory topological denoising to remove fake handles, our algorithm directly flattens the colon surface without any denoising. In our method, we replace the original Euclidean metric of the colon surface with a heat diffusion metric that is insensitive to topological noise. Using this heat diffusion metric, we then solve a Laplacian equation followed by an integration step to compute the final flattening. We demonstrate that our method is shape-preserving and the shape of the polyps are well preserved. The flattened colon also provides an efficient way to enhance the navigation and inspection in virtual colonoscopy. We further show how the existing colon registration pipeline is made more robust by using our colon flattening. We have tested our method on several colon wall surfaces and the experimental results demonstrate the robustness and the efficiency of our method.",
                "AuthorNamesDeduped": "Krishna Chaitanya Gurijala;Rui Shi;Wei Zeng 0002;Xianfeng Gu;Arie E. Kaufman",
                "AuthorNames": "Krishna Chaitanya Gurijala;Rui Shi;Wei Zeng;Xianfeng Gu;Arie Kaufman",
                "AuthorAffiliation": "Stony Brook University, USA;Stony Brook University, USA;Florida International University, USA;Stony Brook University, USA;Stony Brook University, USA",
                "InternalReferences": "0.1109/visual.2001.964540;10.1109/tvcg.2006.112;10.1109/visual.2001.964540;10.1109/tvcg.2010.200",
                "AuthorKeywords": "Colon flattening, heat diffusion, virtual colonoscopy, volume rendering, topological noise, shape-preserving mapping",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 397,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1357,
                "i": [
                    1357
                ]
            }
        },
        {
            "name": "Xianfeng Gu",
            "value": 73,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 1408,
            "x": 134.07819233330355,
            "y": -350.53250682445076,
            "vy": 0,
            "vx": 0,
            "r": 1.0840529648819806,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Colon Flattening Using Heat Diffusion Riemannian Metric",
                "DOI": "10.1109/tvcg.2013.139",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.139",
                "FirstPage": 2848,
                "LastPage": 2857,
                "PaperType": "J",
                "Abstract": "We propose a new colon flattening algorithm that is efficient, shape-preserving, and robust to topological noise. Unlike previous approaches, which require a mandatory topological denoising to remove fake handles, our algorithm directly flattens the colon surface without any denoising. In our method, we replace the original Euclidean metric of the colon surface with a heat diffusion metric that is insensitive to topological noise. Using this heat diffusion metric, we then solve a Laplacian equation followed by an integration step to compute the final flattening. We demonstrate that our method is shape-preserving and the shape of the polyps are well preserved. The flattened colon also provides an efficient way to enhance the navigation and inspection in virtual colonoscopy. We further show how the existing colon registration pipeline is made more robust by using our colon flattening. We have tested our method on several colon wall surfaces and the experimental results demonstrate the robustness and the efficiency of our method.",
                "AuthorNamesDeduped": "Krishna Chaitanya Gurijala;Rui Shi;Wei Zeng 0002;Xianfeng Gu;Arie E. Kaufman",
                "AuthorNames": "Krishna Chaitanya Gurijala;Rui Shi;Wei Zeng;Xianfeng Gu;Arie Kaufman",
                "AuthorAffiliation": "Stony Brook University, USA;Stony Brook University, USA;Florida International University, USA;Stony Brook University, USA;Stony Brook University, USA",
                "InternalReferences": "0.1109/visual.2001.964540;10.1109/tvcg.2006.112;10.1109/visual.2001.964540;10.1109/tvcg.2010.200",
                "AuthorKeywords": "Colon flattening, heat diffusion, virtual colonoscopy, volume rendering, topological noise, shape-preserving mapping",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 397,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1357,
                "i": [
                    1357
                ]
            }
        },
        {
            "name": "Krishna Chaitanya Gurijala",
            "value": 40,
            "numPapers": 7,
            "cluster": "6",
            "visible": 1,
            "index": 1409,
            "x": 137.965169785271,
            "y": 349.16416185817434,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Colon Flattening Using Heat Diffusion Riemannian Metric",
                "DOI": "10.1109/tvcg.2013.139",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.139",
                "FirstPage": 2848,
                "LastPage": 2857,
                "PaperType": "J",
                "Abstract": "We propose a new colon flattening algorithm that is efficient, shape-preserving, and robust to topological noise. Unlike previous approaches, which require a mandatory topological denoising to remove fake handles, our algorithm directly flattens the colon surface without any denoising. In our method, we replace the original Euclidean metric of the colon surface with a heat diffusion metric that is insensitive to topological noise. Using this heat diffusion metric, we then solve a Laplacian equation followed by an integration step to compute the final flattening. We demonstrate that our method is shape-preserving and the shape of the polyps are well preserved. The flattened colon also provides an efficient way to enhance the navigation and inspection in virtual colonoscopy. We further show how the existing colon registration pipeline is made more robust by using our colon flattening. We have tested our method on several colon wall surfaces and the experimental results demonstrate the robustness and the efficiency of our method.",
                "AuthorNamesDeduped": "Krishna Chaitanya Gurijala;Rui Shi;Wei Zeng 0002;Xianfeng Gu;Arie E. Kaufman",
                "AuthorNames": "Krishna Chaitanya Gurijala;Rui Shi;Wei Zeng;Xianfeng Gu;Arie Kaufman",
                "AuthorAffiliation": "Stony Brook University, USA;Stony Brook University, USA;Florida International University, USA;Stony Brook University, USA;Stony Brook University, USA",
                "InternalReferences": "0.1109/visual.2001.964540;10.1109/tvcg.2006.112;10.1109/visual.2001.964540;10.1109/tvcg.2010.200",
                "AuthorKeywords": "Colon flattening, heat diffusion, virtual colonoscopy, volume rendering, topological noise, shape-preserving mapping",
                "AminerCitationCount": 18,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 397,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1357,
                "i": [
                    1357
                ]
            }
        },
        {
            "name": "Anna Vilanova Bartrolí",
            "value": 53,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 1410,
            "x": -337.7079582671797,
            "y": -164.32691478577937,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "Nonlinear virtual colon unfolding",
                "DOI": "10.1109/visual.2001.964540",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964540",
                "FirstPage": 411,
                "LastPage": 418,
                "PaperType": "C",
                "Abstract": "The majority of virtual endoscopy techniques tries to simulate a real endoscopy. A real endoscopy does not always give the optimal information due to the physical limitations it is subject to. In this paper, we deal with the unfolding of the surface of the colon as a possible visualization technique for diagnosis and polyp detection. A new two-step technique is presented which deals with the problems of double appearance of polyps and nonuniform sampling that other colon unfolding techniques suffer from. In the first step, a distance map from a central path induces nonlinear rays for unambiguous parameterization of the surface. The second step compensates for locally varying distortions of the unfolded surface. A technique similar to magnification fields in information visualization is hereby applied. The technique produces a single view of a complete, virtually dissected colon.",
                "AuthorNamesDeduped": "Anna Vilanova Bartrolí;Rainer Wegenkittl;Andreas König 0002;M. Eduard Gröller",
                "AuthorNames": "A.V. Vilanova Bartroli;R. Wegenkittl;A. Konig;E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Tiani Medgraph, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/infvis.1997.636786;10.1109/visual.1999.809914",
                "AuthorKeywords": "Volume Rendering, Virtual Endoscopy",
                "AminerCitationCount": 125,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 220,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2895,
                "i": [
                    2895
                ]
            }
        },
        {
            "name": "Andreas König 0002",
            "value": 51,
            "numPapers": 1,
            "cluster": "6",
            "visible": 1,
            "index": 1411,
            "x": 360.14417191416754,
            "y": -106.986800289842,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "Nonlinear virtual colon unfolding",
                "DOI": "10.1109/visual.2001.964540",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964540",
                "FirstPage": 411,
                "LastPage": 418,
                "PaperType": "C",
                "Abstract": "The majority of virtual endoscopy techniques tries to simulate a real endoscopy. A real endoscopy does not always give the optimal information due to the physical limitations it is subject to. In this paper, we deal with the unfolding of the surface of the colon as a possible visualization technique for diagnosis and polyp detection. A new two-step technique is presented which deals with the problems of double appearance of polyps and nonuniform sampling that other colon unfolding techniques suffer from. In the first step, a distance map from a central path induces nonlinear rays for unambiguous parameterization of the surface. The second step compensates for locally varying distortions of the unfolded surface. A technique similar to magnification fields in information visualization is hereby applied. The technique produces a single view of a complete, virtually dissected colon.",
                "AuthorNamesDeduped": "Anna Vilanova Bartrolí;Rainer Wegenkittl;Andreas König 0002;M. Eduard Gröller",
                "AuthorNames": "A.V. Vilanova Bartroli;R. Wegenkittl;A. Konig;E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Tiani Medgraph, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/infvis.1997.636786;10.1109/visual.1999.809914",
                "AuthorKeywords": "Volume Rendering, Virtual Endoscopy",
                "AminerCitationCount": 125,
                "CitationCountCrossRef": 13,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 220,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2895,
                "i": [
                    2895
                ]
            }
        },
        {
            "name": "Peter Kohlmann",
            "value": 100,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1412,
            "x": -193.35901656906157,
            "y": 322.2767300185438,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "High-Level User Interfaces for Transfer Function Design with Semantics",
                "DOI": "10.1109/tvcg.2006.148",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.148",
                "FirstPage": 1021,
                "LastPage": 1028,
                "PaperType": "J",
                "Abstract": "Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for nonexpert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation",
                "AuthorNamesDeduped": "Christof Rezk-Salama;Maik Keller;Peter Kohlmann",
                "AuthorNames": "Christof Rezk Salama;Maik Keller;Peter Kohlmann",
                "AuthorAffiliation": "Computer Graphics and Multimedia Systems Group, University of Siegen, Germany;Computer Graphics and Multimedia Systems Group, University of Siegen, Germany;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2003.1250384;10.1109/visual.2003.1250413;10.1109/visual.2002.1183764;10.1109/visual.1998.745319;10.1109/visual.2001.964519;10.1109/visual.2003.1250412;10.1109/visual.1996.568113;10.1109/visual.1997.663875",
                "AuthorKeywords": "Volume rendering, transfer function design, semantic models",
                "AminerCitationCount": 163,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 691,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2278,
                "i": [
                    2278
                ]
            }
        },
        {
            "name": "Kui Wu 0003",
            "value": 4,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1413,
            "x": -75.14446777614667,
            "y": -368.3793003992483,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Direct Multifield Volume Ray Casting of Fiber Surfaces",
                "DOI": "10.1109/tvcg.2016.2599040",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599040",
                "FirstPage": 941,
                "LastPage": 949,
                "PaperType": "J",
                "Abstract": "Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, an analogy of isosurfaces to bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space, and perform geometric tests in range space where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocess, and enables real-time exploration of data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including analysis of multi-field combustion data.",
                "AuthorNamesDeduped": "Kui Wu 0003;Aaron Knoll;Benjamin J. Isaac;Hamish A. Carr;Valerio Pascucci",
                "AuthorNames": "Kui Wu;Aaron Knoll;Benjamin J Isaac;Hamish Carr;Valerio Pascucci",
                "AuthorAffiliation": "University of Utah;SCI Institute, Argonne National Laboratory;ICSE, University of Utah;School of Computing, University of Leeds;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/visual.2003.1250414;10.1109/visual.2004.89;10.1109/tvcg.2009.185;10.1109/tvcg.2009.204;10.1109/visual.2003.1250384;10.1109/visual.1998.745713;10.1109/tvcg.2006.157;10.1109/tvcg.2010.145;10.1109/tvcg.2015.2467433;10.1109/visual.2003.1250412;10.1109/visual.1998.745300;10.1109/tvcg.2008.119;10.1109/visual.2004.52",
                "AuthorKeywords": "Volume Rendering;Isosurface;Multidimensional Data",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 660,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 945,
                "i": [
                    945
                ]
            }
        },
        {
            "name": "Benjamin J. Isaac",
            "value": 4,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1414,
            "x": 304.3534363447453,
            "y": 220.9501884704899,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Direct Multifield Volume Ray Casting of Fiber Surfaces",
                "DOI": "10.1109/tvcg.2016.2599040",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2599040",
                "FirstPage": 941,
                "LastPage": 949,
                "PaperType": "J",
                "Abstract": "Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, an analogy of isosurfaces to bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space, and perform geometric tests in range space where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocess, and enables real-time exploration of data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including analysis of multi-field combustion data.",
                "AuthorNamesDeduped": "Kui Wu 0003;Aaron Knoll;Benjamin J. Isaac;Hamish A. Carr;Valerio Pascucci",
                "AuthorNames": "Kui Wu;Aaron Knoll;Benjamin J Isaac;Hamish Carr;Valerio Pascucci",
                "AuthorAffiliation": "University of Utah;SCI Institute, Argonne National Laboratory;ICSE, University of Utah;School of Computing, University of Leeds;SCI Institute, University of Utah",
                "InternalReferences": "0.1109/visual.2003.1250414;10.1109/visual.2004.89;10.1109/tvcg.2009.185;10.1109/tvcg.2009.204;10.1109/visual.2003.1250384;10.1109/visual.1998.745713;10.1109/tvcg.2006.157;10.1109/tvcg.2010.145;10.1109/tvcg.2015.2467433;10.1109/visual.2003.1250412;10.1109/visual.1998.745300;10.1109/tvcg.2008.119;10.1109/visual.2004.52",
                "AuthorKeywords": "Volume Rendering;Isosurface;Multidimensional Data",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 660,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 945,
                "i": [
                    945
                ]
            }
        },
        {
            "name": "Jiaxi Hu",
            "value": 7,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1415,
            "x": -373.8025220922775,
            "y": 42.68107868192148,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Visualizing Shape Deformations with Variation of Geometric Spectrum",
                "DOI": "10.1109/tvcg.2016.2598790",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598790",
                "FirstPage": 721,
                "LastPage": 730,
                "PaperType": "J",
                "Abstract": "This paper presents a novel approach based on spectral geometry to quantify and visualize non-isometric deformations of 3D surfaces by mapping two manifolds. The proposed method can determine multi-scale, non-isometric deformations through the variation of Laplace-Beltrami spectrum of two shapes. Given two triangle meshes, the spectra can be varied from one to another with a scale function defined on each vertex. The variation is expressed as a linear interpolation of eigenvalues of the two shapes. In each iteration step, a quadratic programming problem is constructed, based on our derived spectrum variation theorem and smoothness energy constraint, to compute the spectrum variation. The derivation of the scale function is the solution of such a problem. Therefore, the final scale function can be solved by integral of the derivation from each step, which, in turn, quantitatively describes non-isometric deformations between two shapes. To evaluate the method, we conduct extensive experiments on synthetic and real data. We employ real epilepsy patient imaging data to quantify the shape variation between the left and right hippocampi in epileptic brains. In addition, we use longitudinal Alzheimer data to compare the shape deformation of diseased and healthy hippocampus. In order to show the accuracy and effectiveness of the proposed method, we also compare it with spatial registration-based methods, e.g., non-rigid Iterative Closest Point (ICP) and voxel-based method. These experiments demonstrate the advantages of our method.",
                "AuthorNamesDeduped": "Jiaxi Hu;Hajar Hamidian;Zichun Zhong;Jing Hua 0001",
                "AuthorNames": "Jiaxi Hu;Hajar Hamidian;Zichun Zhong;Jing Hua",
                "AuthorAffiliation": "Wayne State University;Wayne State University;Wayne State University;Wayne State University",
                "InternalReferences": "0.1109/tvcg.2009.159;10.1109/tvcg.2015.2467198;10.1109/tvcg.2011.171",
                "AuthorKeywords": "Geometry-based Technique;Spectral Analysis;Biomedical Visualization",
                "AminerCitationCount": 6,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 663,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 952,
                "i": [
                    952
                ]
            }
        },
        {
            "name": "Jing Hua 0001",
            "value": 55,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1416,
            "x": 246.88685411517537,
            "y": -284.0719649404216,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "SciVis",
                "Year": 2019,
                "Title": "DeepOrganNet: On-the-Fly Reconstruction and Visualization of 3D / 4D Lung Models from Single-View Projections by Deep Deformation Network",
                "DOI": "10.1109/tvcg.2019.2934369",
                "Link": "http://dx.doi.org/10.1109/TVCG.2019.2934369",
                "FirstPage": 960,
                "LastPage": 970,
                "PaperType": "J",
                "Abstract": "This paper introduces a deep neural network based method, i.e., DeepOrganNet, to generate and visualize fully high-fidelity 3D / 4D organ geometric models from single-view medical images with complicated background in real time. Traditional 3D / 4D medical image reconstruction requires near hundreds of projections, which cost insufferable computational time and deliver undesirable high imaging / radiation dose to human subjects. Moreover, it always needs further notorious processes to segment or extract the accurate 3D organ models subsequently. The computational time and imaging dose can be reduced by decreasing the number of projections, but the reconstructed image quality is degraded accordingly. To our knowledge, there is no method directly and explicitly reconstructing multiple 3D organ meshes from a single 2D medical grayscale image on the fly. Given single-view 2D medical images, e.g., 3D / 4D-CT projections or X-ray images, our end-to-end DeepOrganNet framework can efficiently and effectively reconstruct 3D / 4D lung models with a variety of geometric shapes by learning the smooth deformation fields from multiple templates based on a trivariate tensor-product deformation technique, leveraging an informative latent descriptor extracted from input 2D images. The proposed method can guarantee to generate high-quality and high-fidelity manifold meshes for 3D / 4D lung models; while, all current deep learning based approaches on the shape reconstruction from a single image cannot. The major contributions of this work are to accurately reconstruct the 3D organ shapes from 2D single-view projection, significantly improve the procedure time to allow on-the-fly visualization, and dramatically reduce the imaging dose for human subjects. Experimental results are evaluated and compared with the traditional reconstruction method and the state-of-the-art in deep learning, by using extensive 3D and 4D examples, including both synthetic phantom and real patient datasets. The efficiency of the proposed method shows that it only needs several milliseconds to generate organ meshes with 10K vertices, which has great potential to be used in real-time image guided radiation therapy (IGRT).",
                "AuthorNamesDeduped": "Yifan Wang;Zichun Zhong;Jing Hua 0001",
                "AuthorNames": "Yifan Wang;Zichun Zhong;Jing Hua",
                "AuthorAffiliation": "Department of Computer Science, Wayne State University, Detroit;Department of Computer Science, Wayne State University, Detroit;Department of Computer Science, Wayne State University, Detroit",
                "InternalReferences": "0.1109/tvcg.2013.159",
                "AuthorKeywords": "Deep deformation network,organ meshes,3D / 4D shapes,2D projections,single-view",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1487,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 572,
                "i": [
                    572
                ]
            }
        },
        {
            "name": "Guangyu Zou",
            "value": 13,
            "numPapers": 5,
            "cluster": "6",
            "visible": 1,
            "index": 1417,
            "x": 9.844645708686723,
            "y": 376.3682810105953,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Authalic Parameterization of General Surfaces Using Lie Advection",
                "DOI": "10.1109/tvcg.2011.171",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.171",
                "FirstPage": 2005,
                "LastPage": 2014,
                "PaperType": "J",
                "Abstract": "Parameterization of complex surfaces constitutes a major means of visualizing highly convoluted geometric structures as well as other properties associated with the surface. It also enables users with the ability to navigate, orient, and focus on regions of interest within a global view and overcome the occlusions to inner concavities. In this paper, we propose a novel area-preserving surface parameterization method which is rigorous in theory, moderate in computation, yet easily extendable to surfaces of non-disc and closed-boundary topologies. Starting from the distortion induced by an initial parameterization, an area restoring diffeomorphic flow is constructed as a Lie advection of differential 2-forms along the manifold, which yields equality of the area elements between the domain and the original surface at its final state. Existence and uniqueness of result are assured through an analytical derivation. Based upon a triangulated surface representation, we also present an efficient algorithm in line with discrete differential modeling. As an exemplar application, the utilization of this method for the effective visualization of brain cortical imaging modalities is presented. Compared with conformal methods, our method can reveal more subtle surface patterns in a quantitative manner. It, therefore, provides a competitive alternative to the existing parameterization techniques for better surface-based analysis in various scenarios.",
                "AuthorNamesDeduped": "Guangyu Zou;Jiaxi Hu;Xianfeng Gu;Jing Hua 0001",
                "AuthorNames": "Guangyu Zou;Jiaxi Hu;Xianfeng Gu;Jing Hua",
                "AuthorAffiliation": "Wayne State University, USA;Wayne State University, USA;State University of New York, Stony Brook, USA;Wayne State University, USA",
                "InternalReferences": "0.1109/tvcg.2008.134;10.1109/tvcg.2009.159;10.1109/tvcg.2006.134;10.1109/visual.2004.75;10.1109/visual.2002.1183795;10.1109/visual.2001.964553",
                "AuthorKeywords": "Area-preserving surface parameterization, differential forms, Lie advection, surface visualization",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 532,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1647,
                "i": [
                    1647
                ]
            }
        },
        {
            "name": "Xinyue Ye",
            "value": 55,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 1418,
            "x": -261.5844771174335,
            "y": -270.9678234241086,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "TrajGraph: A Graph-Based Visual Analytics Approach to Studying Urban Network Centralities Using Taxi Trajectory Data",
                "DOI": "10.1109/tvcg.2015.2467771",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467771",
                "FirstPage": 160,
                "LastPage": 169,
                "PaperType": "J",
                "Abstract": "We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which have smaller size than the original street-level graph. Graph centralities, including Pagerank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.",
                "AuthorNamesDeduped": "Xiaoke Huang;Ye Zhao 0003;Chao Ma 0023;Jing Yang 0001;Xinyue Ye;Chong Zhang",
                "AuthorNames": "Xiaoke Huang;Ye Zhao;Chao Ma;Jing Yang;Xinyue Ye;Chong Zhang",
                "AuthorAffiliation": "Department of Computer Science, Kent State University;Department of Computer Science, Kent State University;Department of Computer Science, Kent State University;Department of Computer Science, University of North Carolina at Charlotte;Department of Geography, Kent State University;Department of Computer Science, University of North Carolina at Charlotte",
                "InternalReferences": "0.1109/vast.2009.5332593;10.1109/tvcg.2013.226;10.1109/tvcg.2009.145;10.1109/vast.2011.6102455;10.1109/tvcg.2006.122;10.1109/tvcg.2012.265;10.1109/tvcg.2013.228;10.1109/tvcg.2014.2346746",
                "AuthorKeywords": "Graph based visual analytics, Centrality, Taxi trajectories, Urban network, Transportation assessment",
                "AminerCitationCount": 165,
                "CitationCountCrossRef": 126,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 3570,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1100,
                "i": [
                    1100
                ]
            }
        },
        {
            "name": "Chao Ma 0023",
            "value": 55,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 1419,
            "x": 376.05287015489415,
            "y": 23.113607426499833,
            "vy": 0,
            "vx": 0,
            "r": 1.0633275762809442,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "TrajGraph: A Graph-Based Visual Analytics Approach to Studying Urban Network Centralities Using Taxi Trajectory Data",
                "DOI": "10.1109/tvcg.2015.2467771",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467771",
                "FirstPage": 160,
                "LastPage": 169,
                "PaperType": "J",
                "Abstract": "We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which have smaller size than the original street-level graph. Graph centralities, including Pagerank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.",
                "AuthorNamesDeduped": "Xiaoke Huang;Ye Zhao 0003;Chao Ma 0023;Jing Yang 0001;Xinyue Ye;Chong Zhang",
                "AuthorNames": "Xiaoke Huang;Ye Zhao;Chao Ma;Jing Yang;Xinyue Ye;Chong Zhang",
                "AuthorAffiliation": "Department of Computer Science, Kent State University;Department of Computer Science, Kent State University;Department of Computer Science, Kent State University;Department of Computer Science, University of North Carolina at Charlotte;Department of Geography, Kent State University;Department of Computer Science, University of North Carolina at Charlotte",
                "InternalReferences": "0.1109/vast.2009.5332593;10.1109/tvcg.2013.226;10.1109/tvcg.2009.145;10.1109/vast.2011.6102455;10.1109/tvcg.2006.122;10.1109/tvcg.2012.265;10.1109/tvcg.2013.228;10.1109/tvcg.2014.2346746",
                "AuthorKeywords": "Graph based visual analytics, Centrality, Taxi trajectories, Urban network, Transportation assessment",
                "AminerCitationCount": 165,
                "CitationCountCrossRef": 126,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 3570,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1100,
                "i": [
                    1100
                ]
            }
        },
        {
            "name": "Susan Havre",
            "value": 162,
            "numPapers": 5,
            "cluster": "1",
            "visible": 1,
            "index": 1420,
            "x": -293.0058533638191,
            "y": 237.0602663765908,
            "vy": 0,
            "vx": 0,
            "r": 1.1865284974093264,
            "node": {
                "Conference": "InfoVis",
                "Year": 2000,
                "Title": "ThemeRiver: visualizing theme changes over time",
                "DOI": "10.1109/infvis.2000.885098",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2000.885098",
                "FirstPage": 115,
                "LastPage": 123,
                "PaperType": "C",
                "Abstract": "ThemeRiver/sup TM/ is a prototype system that visualizes thematic variations over time within a large collection of documents. The \"river\" flows from left to right through time, changing width to depict changes in thematic strength of temporally associated documents. Colored \"currents\" flowing within the river narrow or widen to indicate decreases or increases in the strength of an individual topic or a group of topics in the associated documents. The river is shown within the context of a timeline and a corresponding textual presentation of external events.",
                "AuthorNamesDeduped": "Susan Havre;Elizabeth G. Hetzler;Lucy T. Nowell",
                "AuthorNames": "S. Havre;B. Hetzler;L. Nowell",
                "AuthorAffiliation": "Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/infvis.1997.636789;10.1109/infvis.1998.729570",
                "AuthorKeywords": null,
                "AminerCitationCount": 660,
                "CitationCountCrossRef": 194,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 1785,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2938,
                "i": [
                    2938
                ]
            }
        },
        {
            "name": "Elizabeth G. Hetzler",
            "value": 211,
            "numPapers": 17,
            "cluster": "4",
            "visible": 1,
            "index": 1421,
            "x": 55.941168543807265,
            "y": -372.8546441469563,
            "vy": 0,
            "vx": 0,
            "r": 1.2429476108232584,
            "node": {
                "Conference": "InfoVis",
                "Year": 2000,
                "Title": "ThemeRiver: visualizing theme changes over time",
                "DOI": "10.1109/infvis.2000.885098",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2000.885098",
                "FirstPage": 115,
                "LastPage": 123,
                "PaperType": "C",
                "Abstract": "ThemeRiver/sup TM/ is a prototype system that visualizes thematic variations over time within a large collection of documents. The \"river\" flows from left to right through time, changing width to depict changes in thematic strength of temporally associated documents. Colored \"currents\" flowing within the river narrow or widen to indicate decreases or increases in the strength of an individual topic or a group of topics in the associated documents. The river is shown within the context of a timeline and a corresponding textual presentation of external events.",
                "AuthorNamesDeduped": "Susan Havre;Elizabeth G. Hetzler;Lucy T. Nowell",
                "AuthorNames": "S. Havre;B. Hetzler;L. Nowell",
                "AuthorAffiliation": "Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/infvis.1997.636789;10.1109/infvis.1998.729570",
                "AuthorKeywords": null,
                "AminerCitationCount": 660,
                "CitationCountCrossRef": 194,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 1785,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2938,
                "i": [
                    2938
                ]
            }
        },
        {
            "name": "Lucy T. Nowell",
            "value": 128,
            "numPapers": 6,
            "cluster": "1",
            "visible": 1,
            "index": 1422,
            "x": 210.68448392360617,
            "y": 312.82910387916866,
            "vy": 0,
            "vx": 0,
            "r": 1.1473805411629245,
            "node": {
                "Conference": "InfoVis",
                "Year": 2000,
                "Title": "ThemeRiver: visualizing theme changes over time",
                "DOI": "10.1109/infvis.2000.885098",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2000.885098",
                "FirstPage": 115,
                "LastPage": 123,
                "PaperType": "C",
                "Abstract": "ThemeRiver/sup TM/ is a prototype system that visualizes thematic variations over time within a large collection of documents. The \"river\" flows from left to right through time, changing width to depict changes in thematic strength of temporally associated documents. Colored \"currents\" flowing within the river narrow or widen to indicate decreases or increases in the strength of an individual topic or a group of topics in the associated documents. The river is shown within the context of a timeline and a corresponding textual presentation of external events.",
                "AuthorNamesDeduped": "Susan Havre;Elizabeth G. Hetzler;Lucy T. Nowell",
                "AuthorNames": "S. Havre;B. Hetzler;L. Nowell",
                "AuthorAffiliation": "Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA;Northwest Division, Battlle-Pacific Northwest, Richland, WA, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/infvis.1997.636789;10.1109/infvis.1998.729570",
                "AuthorKeywords": null,
                "AminerCitationCount": 660,
                "CitationCountCrossRef": 194,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 1785,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2938,
                "i": [
                    2938
                ]
            }
        },
        {
            "name": "Kristin A. Cook",
            "value": 87,
            "numPapers": 23,
            "cluster": "5",
            "visible": 1,
            "index": 1423,
            "x": -366.7940628939006,
            "y": -88.38617214126474,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods",
                "DOI": "10.1109/tvcg.2016.2598544",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598544",
                "FirstPage": 271,
                "LastPage": 280,
                "PaperType": "J",
                "Abstract": "Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is that, when domain experts analyze their data, can they completely trust the outputs and operations on the machine-side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on analyzing the relationships between familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system, that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists where we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with the same in conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on trustworthiness of visual analytic systems.",
                "AuthorNamesDeduped": "Aritra Dasgupta;Joon-Yong Lee;Ryan Wilson;Robert A. Lafrance;Nick Cramer;Kristin A. Cook;Samuel H. Payne",
                "AuthorNames": "Aritra Dasgupta;Joon-Yong Lee;Ryan Wilson;Robert A. Lafrance;Nick Cramer;Kristin Cook;Samuel Payne",
                "AuthorAffiliation": "Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory;Pacific Northwest National Laboratory",
                "InternalReferences": "0.1109/tvcg.2015.2467591;10.1109/vast.2015.7347625;10.1109/tvcg.2012.224;10.1109/infvis.2005.1532136;10.1109/vast.2006.261416;10.1109/tvcg.2013.124;10.1109/tvcg.2013.120",
                "AuthorKeywords": "trust;transparency;familiarity;uncertainty;biological data analysis",
                "AminerCitationCount": 41,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1667,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 975,
                "i": [
                    975
                ]
            }
        },
        {
            "name": "Yadong Wu",
            "value": 14,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 1424,
            "x": 330.282477688566,
            "y": -182.6567407239653,
            "vy": 0,
            "vx": 0,
            "r": 1.016119746689695,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "D-Map: Visual Analysis of Ego-centric Information Diffusion Patterns in Social Media",
                "DOI": "10.1109/vast.2016.7883510",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883510",
                "FirstPage": 41,
                "LastPage": 50,
                "PaperType": "C",
                "Abstract": "Popular social media platforms could rapidly propagate vital information over social networks among a significant number of people. In this work we present D-Map (Diffusion Map), a novel visualization method to support exploration and analysis of social behaviors during such information diffusion and propagation on typical social media through a map metaphor. In D-Map, users who participated in reposting (i.e., resending a message initially posted by others) one central user's posts (i.e., a series of original tweets) are collected and mapped to a hexagonal grid based on their behavior similarities and in chronological order of the repostings. With additional interaction and linking, D-Map is capable of providing visual portraits of the influential users and describing their social behaviors. A comprehensive visual analysis system is developed to support interactive exploration with D-Map. We evaluate our work with real world social media data and find interesting patterns among users. Key players, important information diffusion paths, and interactions among social communities can be identified.",
                "AuthorNamesDeduped": "Siming Chen 0001;Shuai Chen 0001;Zhenhuang Wang;Jie Liang 0004;Xiaoru Yuan;Nan Cao;Yadong Wu",
                "AuthorNames": "Siming Chen;Shuai Chen;Zhenhuang Wang;Jie Liang;Xiaoru Yuan;Nan Cao;Yadong Wu",
                "AuthorAffiliation": "Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University, China;Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University, China;Faculty of Engineer and Information Technology, The University of Technology, Sydney, Australia;Key Laboratory of Machine Perception (Ministry of Education), and School of EECS, Peking University, China;New York University, Shanghai, China;Southwest University of Science and Technology, China",
                "InternalReferences": "0.1109/tvcg.2015.2467196;10.1109/tvcg.2014.2346922;10.1109/tvcg.2012.291;10.1109/tvcg.2010.154;10.1109/tvcg.2007.70582;10.1109/tvcg.2014.2346433;10.1109/infvis.2005.1532126;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346919;10.1109/tvcg.2014.2346920;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346277",
                "AuthorKeywords": null,
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1339,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 976,
                "i": [
                    976
                ]
            }
        },
        {
            "name": "Markus John",
            "value": 65,
            "numPapers": 35,
            "cluster": "1",
            "visible": 1,
            "index": 1425,
            "x": -120.19933228941314,
            "y": 357.91356570711207,
            "vy": 0,
            "vx": 0,
            "r": 1.0748416810592976,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "VarifocalReader -- In-Depth Visual Analysis of Large Text Documents",
                "DOI": "10.1109/tvcg.2014.2346677",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346677",
                "FirstPage": 1723,
                "LastPage": 1732,
                "PaperType": "J",
                "Abstract": "Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.",
                "AuthorNamesDeduped": "Steffen Koch 0001;Markus John;Michael Wörner 0001;Andreas Müller 0012;Thomas Ertl",
                "AuthorNames": "Steffen Koch;Markus John;Michael Wörner;Andreas Müller;Thomas Ertl",
                "AuthorAffiliation": "Institute of Visualization and Interactive Systems (VIS), University of Stuttgart;Institute of Visualization and Interactive Systems (VIS), University of Stuttgart;Institute of Visualization and Interactive Systems (VIS), University of Stuttgart;Institute for Natural Language Processing (IMS), University of Stuttgart;Institute of Visualization and Interactive Systems (VIS), University of Stuttgart",
                "InternalReferences": "0.1109/vast.2010.5652926;10.1109/tvcg.2008.172;10.1109/vast.2012.6400485;10.1109/tvcg.2013.188;10.1109/tvcg.2007.70577;10.1109/vast.2012.6400486;10.1109/tvcg.2012.277;10.1109/tvcg.2009.165;10.1109/tvcg.2013.162;10.1109/infvis.1995.528686;10.1109/vast.2009.5333248;10.1109/tvcg.2012.260;10.1109/vast.2007.4389006;10.1109/vast.2009.5333919;10.1109/vast.2007.4389004;10.1109/infvis.1997.636787",
                "AuthorKeywords": "visual analytics, document analysis, literary analysis, natural language processing, text mining, machine learning, distant reading",
                "AminerCitationCount": 100,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 1835,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1264,
                "i": [
                    1264
                ]
            }
        },
        {
            "name": "Qi Han 0006",
            "value": 45,
            "numPapers": 29,
            "cluster": "1",
            "visible": 1,
            "index": 1426,
            "x": -153.1895966525529,
            "y": -345.22883349660725,
            "vy": 0,
            "vx": 0,
            "r": 1.0518134715025906,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "CiteRivers: Visual Analytics of Citation Patterns",
                "DOI": "10.1109/tvcg.2015.2467621",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467621",
                "FirstPage": 190,
                "LastPage": 199,
                "PaperType": "J",
                "Abstract": "The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback.",
                "AuthorNamesDeduped": "Florian Heimerl;Qi Han 0006;Steffen Koch 0001;Thomas Ertl",
                "AuthorNames": "Florian Heimerl;Qi Han;Steffen Koch;Thomas Ertl",
                "AuthorAffiliation": "Institute for Visualization and Interactive Systems (VIS), University of Stuttgart;Institute for Visualization and Interactive Systems (VIS), University of Stuttgart;Institute for Visualization and Interactive Systems (VIS), University of Stuttgart;Institute for Visualization and Interactive Systems (VIS), University of Stuttgart",
                "InternalReferences": "0.1109/infvis.2004.77;10.1109/tvcg.2015.2467757;10.1109/tvcg.2008.166;10.1109/tvcg.2013.212;10.1109/vast.2009.5333443;10.1109/tvcg.2011.239;10.1109/tvcg.2012.252;10.1109/tvcg.2013.162;10.1109/tvcg.2012.277;10.1109/infvis.2004.45;10.1109/infvis.2005.1532150;10.1109/tvcg.2009.162;10.1109/tvcg.2009.171;10.1109/infvis.2005.1532122;10.1109/infvis.1995.528686;10.1109/tvcg.2014.2346920;10.1109/tvcg.2009.202",
                "AuthorKeywords": "scientific literature, visual document analysis, visual citation analysis, streamgraph, clustering",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 71,
                "PubsCitedCrossRef": 53,
                "DownloadsXplore": 2581,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1105,
                "i": [
                    1105
                ]
            }
        },
        {
            "name": "Kanupriya Singhal",
            "value": 285,
            "numPapers": 4,
            "cluster": "5",
            "visible": 1,
            "index": 1427,
            "x": 346.2772766047466,
            "y": 151.13585844266024,
            "vy": 0,
            "vx": 0,
            "r": 1.3281519861830744,
            "node": {
                "Conference": "VAST",
                "Year": 2007,
                "Title": "Jigsaw: Supporting Investigative Analysis through Interactive Visualization",
                "DOI": "10.1109/vast.2007.4389006",
                "Link": "http://dx.doi.org/10.1109/VAST.2007.4389006",
                "FirstPage": 131,
                "LastPage": 138,
                "PaperType": "C",
                "Abstract": "Investigative analysts who work with collections of text documents connect embedded threads of evidence in order to formulate hypotheses about plans and activities of potential interest. As the number of documents and the corresponding number of concepts and entities within the documents grow larger, sense-making processes become more and more difficult for the analysts. We have developed a visual analytic system called Jigsaw that represents documents and their entities visually in order to help analysts examine reports more efficiently and develop theories about potential actions more quickly. Jigsaw provides multiple coordinated views of document entities with a special emphasis on visually illustrating connections between entities across the different documents.",
                "AuthorNamesDeduped": "John T. Stasko;Carsten Görg;Zhicheng Liu 0001;Kanupriya Singhal",
                "AuthorNames": "John Stasko;Carsten Gorg;Zhicheng Liu;Kanupriya Singhal",
                "AuthorAffiliation": "School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA;School of Interactive Computing & GVU Center, Georgia Institute of Technology, USA",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/infvis.2004.27;10.1109/vast.2006.261432",
                "AuthorKeywords": "Visual analytics, investigative analysis, intelligence analysis, information visualization, multiple views",
                "AminerCitationCount": 746,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 1130,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2112,
                "i": [
                    2112
                ]
            }
        },
        {
            "name": "Xiaoxiao Lian",
            "value": 44,
            "numPapers": 10,
            "cluster": "1",
            "visible": 1,
            "index": 1428,
            "x": -357.5500631362965,
            "y": 122.50694817531935,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "Understanding text corpora with multiple facets",
                "DOI": "10.1109/vast.2010.5652931",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5652931",
                "FirstPage": 99,
                "LastPage": 106,
                "PaperType": "C",
                "Abstract": "Text visualization becomes an increasingly more important research topic as the need to understand massive-scale textual information is proven to be imperative for many people and businesses. However, it is still very challenging to design effective visual metaphors to represent large corpora of text due to the unstructured and high-dimensional nature of text. In this paper, we propose a data model that can be used to represent most of the text corpora. Such a data model contains four basic types of facets: time, category, content (unstructured), and structured facet. To understand the corpus with such a data model, we develop a hybrid visualization by combining the trend graph with tag-clouds. We encode the four types of data facets with four separate visual dimensions. To help people discover evolutionary and correlation patterns, we also develop several visual interaction methods that allow people to interactively analyze text by one or more facets. Finally, we present two case studies to demonstrate the effectiveness of our solution in support of multi-faceted visual analysis of text corpora.",
                "AuthorNamesDeduped": "Lei Shi 0002;Furu Wei;Shixia Liu;Li Tan;Xiaoxiao Lian;Michelle X. Zhou",
                "AuthorNames": "Lei Shi;Furu Wei;Shixia Liu;Li Tan;Xiaoxiao Lian;Michelle X. Zhou",
                "AuthorAffiliation": "IBM Research China, Beijing, China;IBM Research China, Beijing, China;IBM Research China, Beijing, China;IBM Research China, Beijing, China;IBM Research China, Beijing, China;IBM Research Almaden, San Jose, CA, USA",
                "InternalReferences": "0.1109/vast.2009.5333443;10.1109/vast.2007.4389005;10.1109/tvcg.2008.172;10.1109/tvcg.2009.171;10.1109/tvcg.2008.166;10.1109/tvcg.2009.165;10.1109/infvis.2002.1173155;10.1109/infvis.1999.801866;10.1109/vast.2007.4389006;10.1109/infvis.2005.1532122;10.1109/infvis.2000.885097",
                "AuthorKeywords": "text visualization, multi-facet data visualization",
                "AminerCitationCount": 86,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 738,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1728,
                "i": [
                    1728
                ]
            }
        },
        {
            "name": "Jing Su",
            "value": 21,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 1429,
            "x": 180.95733944048325,
            "y": -331.9705428236393,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "How ideas flow across multiple social groups",
                "DOI": "10.1109/vast.2016.7883511",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883511",
                "FirstPage": 51,
                "LastPage": 60,
                "PaperType": "C",
                "Abstract": "Tracking how correlated ideas flow within and across multiple social groups facilitates the understanding of the transfer of information, opinions, and thoughts on social media. In this paper, we present IdeaFlow, a visual analytics system for analyzing the lead-lag changes within and across pre-defined social groups regarding a specific set of correlated ideas, each of which is described by a set of words. To model idea flows accurately, we develop a random-walk-based correlation model and integrate it with Bayesian conditional cointegration and a tensor-based technique. To convey complex lead-lag relationships over time, IdeaFlow combines the strengths of a bubble tree, a flow map, and a timeline. In particular, we develop a Voronoi-treemap-based bubble tree to help users get an overview of a set of ideas quickly. A correlated-clustering-based layout algorithm is used to simultaneously generate multiple flow maps with less ambiguity. We also introduce a focus+context timeline to explore huge amounts of temporal data at different levels of time granularity. Quantitative evaluation and case studies demonstrate the accuracy and effectiveness of IdeaFlow.",
                "AuthorNamesDeduped": "Xiting Wang;Shixia Liu;Yang Chen;Tai-Quan Peng;Jing Su;Jing Yang;Baining Guo",
                "AuthorNames": "Xiting Wang;Shixia Liu;Yang Chen;Tai-Quan Peng;Jing Su;Jing Yang;Baining Guo",
                "AuthorAffiliation": "School of Software, Tsinghua University;School of Software, Tsinghua University;School of Software, Tsinghua University;Michigan State University;Tsinghua University;UNCC;Microsoft Research",
                "InternalReferences": "0.1109/vast.2011.6102461;10.1109/vast.2010.5652931;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346433;10.1109/tvcg.2015.2467992;10.1109/tvcg.2015.2467691;10.1109/tvcg.2011.202;10.1109/vast.2012.6400485;10.1109/tvcg.2013.196;10.1109/tvcg.2015.2467757;10.1109/tvcg.2012.212;10.1109/tvcg.2010.129;10.1109/tvcg.2013.162;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346919;10.1109/infvis.2005.1532152;10.1109/tvcg.2014.2346920;10.1109/tvcg.2015.2467991;10.1109/tvcg.2011.239;10.1109/infvis.2005.1532150;10.1109/tvcg.2009.111;10.1109/infvis.2005.1532128",
                "AuthorKeywords": null,
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 1040,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 985,
                "i": [
                    985
                ]
            }
        },
        {
            "name": "Jing Yang",
            "value": 21,
            "numPapers": 21,
            "cluster": "1",
            "visible": 1,
            "index": 1430,
            "x": 90.84232679511732,
            "y": 367.1480241843188,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "How ideas flow across multiple social groups",
                "DOI": "10.1109/vast.2016.7883511",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883511",
                "FirstPage": 51,
                "LastPage": 60,
                "PaperType": "C",
                "Abstract": "Tracking how correlated ideas flow within and across multiple social groups facilitates the understanding of the transfer of information, opinions, and thoughts on social media. In this paper, we present IdeaFlow, a visual analytics system for analyzing the lead-lag changes within and across pre-defined social groups regarding a specific set of correlated ideas, each of which is described by a set of words. To model idea flows accurately, we develop a random-walk-based correlation model and integrate it with Bayesian conditional cointegration and a tensor-based technique. To convey complex lead-lag relationships over time, IdeaFlow combines the strengths of a bubble tree, a flow map, and a timeline. In particular, we develop a Voronoi-treemap-based bubble tree to help users get an overview of a set of ideas quickly. A correlated-clustering-based layout algorithm is used to simultaneously generate multiple flow maps with less ambiguity. We also introduce a focus+context timeline to explore huge amounts of temporal data at different levels of time granularity. Quantitative evaluation and case studies demonstrate the accuracy and effectiveness of IdeaFlow.",
                "AuthorNamesDeduped": "Xiting Wang;Shixia Liu;Yang Chen;Tai-Quan Peng;Jing Su;Jing Yang;Baining Guo",
                "AuthorNames": "Xiting Wang;Shixia Liu;Yang Chen;Tai-Quan Peng;Jing Su;Jing Yang;Baining Guo",
                "AuthorAffiliation": "School of Software, Tsinghua University;School of Software, Tsinghua University;School of Software, Tsinghua University;Michigan State University;Tsinghua University;UNCC;Microsoft Research",
                "InternalReferences": "0.1109/vast.2011.6102461;10.1109/vast.2010.5652931;10.1109/tvcg.2015.2467554;10.1109/tvcg.2014.2346433;10.1109/tvcg.2015.2467992;10.1109/tvcg.2015.2467691;10.1109/tvcg.2011.202;10.1109/vast.2012.6400485;10.1109/tvcg.2013.196;10.1109/tvcg.2015.2467757;10.1109/tvcg.2012.212;10.1109/tvcg.2010.129;10.1109/tvcg.2013.162;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346919;10.1109/infvis.2005.1532152;10.1109/tvcg.2014.2346920;10.1109/tvcg.2015.2467991;10.1109/tvcg.2011.239;10.1109/infvis.2005.1532150;10.1109/tvcg.2009.111;10.1109/infvis.2005.1532128",
                "AuthorKeywords": null,
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 56,
                "DownloadsXplore": 1040,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 985,
                "i": [
                    985
                ]
            }
        },
        {
            "name": "Kai Xu 0003",
            "value": 107,
            "numPapers": 39,
            "cluster": "5",
            "visible": 1,
            "index": 1431,
            "x": -315.09930984962045,
            "y": -209.43358119531086,
            "vy": 0,
            "vx": 0,
            "r": 1.1232009211283822,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "A User Study on Curved Edges in Graph Visualization",
                "DOI": "10.1109/tvcg.2012.189",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.189",
                "FirstPage": 2449,
                "LastPage": 2456,
                "PaperType": "J",
                "Abstract": "Recently there has been increasing research interest in displaying graphs with curved edges to produce more readable visualizations. While there are several automatic techniques, little has been done to evaluate their effectiveness empirically. In this paper we present two experiments studying the impact of edge curvature on graph readability. The goal is to understand the advantages and disadvantages of using curved edges for common graph tasks compared to straight line segments, which are the conventional choice for showing edges in node-link diagrams. We included several edge variations: straight edges, edges with different curvature levels, and mixed straight and curved edges. During the experiments, participants were asked to complete network tasks including determination of connectivity, shortest path, node degree, and common neighbors. We also asked the participants to provide subjective ratings of the aesthetics of different edge types. The results show significant performance differences between the straight and curved edges and clear distinctions between variations of curved edges.",
                "AuthorNamesDeduped": "Kai Xu 0003;Chris Rooney;Peter J. Passmore;Dong-Han Ham;Phong Hai Nguyen",
                "AuthorNames": "Kai Xu;Chris Rooney;Peter Passmore;Dong-Han Ham;Phong H. Nguyen",
                "AuthorAffiliation": "Middlesex University, UK;Middlesex University, UK;Middlesex University, UK;Chonnam National University, South Korea;Middlesex University, UK",
                "InternalReferences": "0.1109/tvcg.2011.233;10.1109/infvis.2002.1173155;10.1109/tvcg.2006.147;10.1109/infvis.2005.1532136;10.1109/infvis.2005.1532131;10.1109/infvis.2003.1249008;10.1109/tvcg.2006.166",
                "AuthorKeywords": "Graph, visualization, curved edges, evaluation",
                "AminerCitationCount": 91,
                "CitationCountCrossRef": 50,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 1240,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1417,
                "i": [
                    1417
                ]
            }
        },
        {
            "name": "B. L. William Wong",
            "value": 70,
            "numPapers": 33,
            "cluster": "5",
            "visible": 1,
            "index": 1432,
            "x": 373.9453206905254,
            "y": -58.43712119586387,
            "vy": 0,
            "vx": 0,
            "r": 1.0805987334484743,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "SenseMap: Supporting browser-based online sensemaking through analytic provenance",
                "DOI": "10.1109/vast.2016.7883515",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883515",
                "FirstPage": 91,
                "LastPage": 100,
                "PaperType": "C",
                "Abstract": "Sensemaking is described as the process in which people collect, organize and create representations of information, all centered around some problem they need to understand. People often get lost when solving complicated tasks using big datasets over long periods of exploration and analysis. They may forget what they have done, are unaware of where they are in the context of the overall task, and are unsure where to continue. In this paper, we introduce a tool, SenseMap, to address these issues in the context of browser-based online sensemaking. We conducted a semi-structured interview with nine participants to explore their behaviors in online sensemaking with existing browser functionality. A simplified sensemaking model based on Pirolli and Card's model is derived to better represent the behaviors we found: users iteratively collect information sources relevant to the task, curate them in a way that makes sense, and finally communicate their findings to others. SenseMap automatically captures provenance of user sensemaking actions and provides multi-linked views to visualize the collected information and enable users to curate and communicate their findings. To explore how SenseMap is used, we conducted a user study in a naturalistic work setting with five participants completing the same sensemaking task related to their daily work activities. All participants found the visual representation and interaction of the tool intuitive to use. Three of them engaged with the tool and produced successful outcomes. It helped them to organize information sources, to quickly find and navigate to the sources they wanted, and to effectively communicate their findings.",
                "AuthorNamesDeduped": "Phong Hai Nguyen;Kai Xu 0003;Andy Bardill;Betul Salman;Kate Herd;B. L. William Wong",
                "AuthorNames": "Phong H. Nguyen;Kai Xu;Andy Bardill;Betul Salman;Kate Herd;B.L. William Wong",
                "AuthorAffiliation": "Middlesex University, London, UK;Middlesex University, London, UK;Middlesex University, London, UK;Middlesex University, London, UK;Middlesex University, London, UK;Middlesex University, London, UK",
                "InternalReferences": "0.1109/tvcg.2008.137;10.1109/tvcg.2015.2467611;10.1109/vast.2008.4677365;10.1109/tvcg.2013.132;10.1109/visual.2005.1532788;10.1109/tvcg.2013.124;10.1109/tvcg.2011.185",
                "AuthorKeywords": null,
                "AminerCitationCount": 37,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 662,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 988,
                "i": [
                    988
                ]
            }
        },
        {
            "name": "Wei Hong 0006",
            "value": 22,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1433,
            "x": -236.3443838992774,
            "y": 295.78933753479174,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "A Pipeline for Computer Aided Polyp Detection",
                "DOI": "10.1109/tvcg.2006.112",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.112",
                "FirstPage": 861,
                "LastPage": 868,
                "PaperType": "J",
                "Abstract": "We present a novel pipeline for computer-aided detection (CAD) of colonic polyps by integrating texture and shape analysis with volume rendering and conformal colon flattening. Using our automatic method, the 3D polyp detection problem is converted into a 2D pattern recognition problem. The colon surface is first segmented and extracted from the CT data set of the patient's abdomen, which is then mapped to a 2D rectangle using conformal mapping. This flattened image is rendered using a direct volume rendering technique with a translucent electronic biopsy transfer function. The polyps are detected by a 2D clustering method on the flattened image. The false positives are further reduced by analyzing the volumetric shape and texture features. Compared with shape based methods, our method is much more efficient without the need of computing curvature and other shape parameters for the whole colon surface. The final detection results are stored in the 2D image, which can be easily incorporated into a virtual colonoscopy (VC) system to highlight the polyp locations. The extracted colon surface mesh can be used to accelerate the volumetric ray casting algorithm used to generate the VC endoscopic view. The proposed automatic CAD pipeline is incorporated into an interactive VC system, with a goal of helping radiologists detect polyps faster and with higher accuracy",
                "AuthorNamesDeduped": "Wei Hong 0006;Feng Qiu;Arie E. Kaufman",
                "AuthorNames": "Wei Hong;Feng Qiu;Arie kaufman",
                "AuthorAffiliation": "Department of Computer Science, Stony Brook University, Stony Brook, NY, USA;Department of Computer Science, Stony Brook University, Stony Brook, NY, USA;Department of Computer Science, Stony Brook University, Stony Brook, NY, USA",
                "InternalReferences": "0.1109/visual.2001.964540;10.1109/visual.2004.27;10.1109/visual.1992.235231;10.1109/visual.2003.1250384",
                "AuthorKeywords": "Computer Aided Detection, Virtual Colonoscopy, Texture Analysis, Volume Rendering",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 577,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2286,
                "i": [
                    2286
                ]
            }
        },
        {
            "name": "Jing Xia",
            "value": 24,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 1434,
            "x": -25.538736622871795,
            "y": -377.88592581850355,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure",
                "DOI": "10.1109/tvcg.2014.2346911",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346911",
                "FirstPage": 1853,
                "LastPage": 1862,
                "PaperType": "J",
                "Abstract": "We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.",
                "AuthorNamesDeduped": "Sungahn Ko;Jieqiong Zhao;Jing Xia;Shehzad Afzal;Xiaoyu Wang 0001;Greg Abram;Niklas Elmqvist;Len Kne;David Van Riper;Kelly P. Gaither;Shaun Kennedy;William J. Tolone;William Ribarsky;David S. Ebert",
                "AuthorNames": "Sungahn Ko;Shaun Kennedy;William Tolone;William Ribarsky;David S. Ebert;Jieqiong Zhao;Jing Xia;Shehzad Afzal;Xiaoyu Wang;Greg Abram;Niklas Elmqvist;Len Kne;David Van Riper;Kelly Gaither",
                "AuthorAffiliation": "Purdue University in West Lafayette, IN, USA;Purdue University in West Lafayette, IN, USA;State Key Lab of CAD&CG, Zhejiang University in Hangzhou, China;Purdue University in West Lafayette, IN, USA;University of North Carolina at Charlotte in Charlotte, NC, USA;University of Texas at Austin in Austin, TX, USA;Purdue University in West Lafayette, IN, USA;University of Minnesota in Minneapolis, MN, USA;University of Minnesota in Minneapolis, MN, USA;University of Texas at Austin in Austin, TX, USA;University of Minnesota in Minneapolis, MN, USA;University of North Carolina at Charlotte in Charlotte, NC, USA;University of North Carolina at Charlotte in Charlotte, NC, USA;Purdue University in West Lafayette, IN, USA",
                "InternalReferences": "0.1109/infvis.2000.885098;10.1109/tvcg.2011.225;10.1109/tvcg.2012.260;10.1109/tvcg.2007.70541;10.1109/tvcg.2010.223;10.1109/tvcg.2013.146;10.1109/tvcg.2010.171;10.1109/vast.2011.6102460;10.1109/vast.2011.6102457",
                "AuthorKeywords": "Computational steering, visual analytics, critical infrastructure, homeland security",
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 954,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1291,
                "i": [
                    1291
                ]
            }
        },
        {
            "name": "Yumeng Hou",
            "value": 15,
            "numPapers": 11,
            "cluster": "3",
            "visible": 1,
            "index": 1435,
            "x": 274.18526309653106,
            "y": 261.4812450266482,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "DimScanner: A Relation-based Visual Exploration Approach Towards Data Dimension Inspection",
                "DOI": "10.1109/vast.2016.7883514",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883514",
                "FirstPage": 81,
                "LastPage": 90,
                "PaperType": "C",
                "Abstract": "Exploring multi-dimensional datasets can be cumbersome if data analysts have little knowledge about the data. Various dimension relation inspection tools and dimension exploration tools have been proposed for efficient data examining and understanding. However, the needed workload varies largely with respect to data complexity and user expertise, which can only be reduced with rich background knowledge over the data. In this paper we address the workload challenge with a data structuring and exploration scheme that affords dimension relation detection and that serves as the background knowledge for further investigation. We contribute a novel data structuring scheme that leverages an information-theoretic view structuring algorithm to uncover information-aware relations among different data views, and thereby discloses redundancy and other relation patterns among dimensions. The integrated system, DimScanner, empowers analysts with rich user controls and assistance widgets to interactively detect the relations of multi-dimensional data.",
                "AuthorNamesDeduped": "Jing Xia;Wei Chen 0001;Yumeng Hou;Wanqi Hu;Xinxin Huang;David S. Ebert",
                "AuthorNames": "Jing Xia;Wei Chen;Yumeng Hou;Wanqi Hu;Xinxin Huang;David S. Ebertk",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Purdue University System, West Lafayette, IN, US",
                "InternalReferences": "0.1109/tvcg.2015.2467191;10.1109/tvcg.2009.153;10.1109/infvis.1998.729559;10.1109/vast.2009.5332628;10.1109/tvcg.2010.184;10.1109/vast.2010.5652450;10.1109/vast.2006.261423;10.1109/tvcg.2013.160;10.1109/tvcg.2013.150;10.1109/tvcg.2011.229;10.1109/infvis.2005.1532142;10.1109/infvis.2004.3",
                "AuthorKeywords": null,
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 528,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 999,
                "i": [
                    999
                ]
            }
        },
        {
            "name": "Wanqi Hu",
            "value": 15,
            "numPapers": 11,
            "cluster": "3",
            "visible": 1,
            "index": 1436,
            "x": -378.9356413990355,
            "y": -7.601294462231503,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "VAST",
                "Year": 2016,
                "Title": "DimScanner: A Relation-based Visual Exploration Approach Towards Data Dimension Inspection",
                "DOI": "10.1109/vast.2016.7883514",
                "Link": "http://dx.doi.org/10.1109/VAST.2016.7883514",
                "FirstPage": 81,
                "LastPage": 90,
                "PaperType": "C",
                "Abstract": "Exploring multi-dimensional datasets can be cumbersome if data analysts have little knowledge about the data. Various dimension relation inspection tools and dimension exploration tools have been proposed for efficient data examining and understanding. However, the needed workload varies largely with respect to data complexity and user expertise, which can only be reduced with rich background knowledge over the data. In this paper we address the workload challenge with a data structuring and exploration scheme that affords dimension relation detection and that serves as the background knowledge for further investigation. We contribute a novel data structuring scheme that leverages an information-theoretic view structuring algorithm to uncover information-aware relations among different data views, and thereby discloses redundancy and other relation patterns among dimensions. The integrated system, DimScanner, empowers analysts with rich user controls and assistance widgets to interactively detect the relations of multi-dimensional data.",
                "AuthorNamesDeduped": "Jing Xia;Wei Chen 0001;Yumeng Hou;Wanqi Hu;Xinxin Huang;David S. Ebert",
                "AuthorNames": "Jing Xia;Wei Chen;Yumeng Hou;Wanqi Hu;Xinxin Huang;David S. Ebertk",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Purdue University System, West Lafayette, IN, US",
                "InternalReferences": "0.1109/tvcg.2015.2467191;10.1109/tvcg.2009.153;10.1109/infvis.1998.729559;10.1109/vast.2009.5332628;10.1109/tvcg.2010.184;10.1109/vast.2010.5652450;10.1109/vast.2006.261423;10.1109/tvcg.2013.160;10.1109/tvcg.2013.150;10.1109/tvcg.2011.229;10.1109/infvis.2005.1532142;10.1109/infvis.2004.3",
                "AuthorKeywords": null,
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 528,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 999,
                "i": [
                    999
                ]
            }
        },
        {
            "name": "Xinxin Huang",
            "value": 81,
            "numPapers": 17,
            "cluster": "3",
            "visible": 1,
            "index": 1437,
            "x": 284.64897495930677,
            "y": -250.4495179764097,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "VAET: A Visual Analytics Approach for E-Transactions Time-Series",
                "DOI": "10.1109/tvcg.2014.2346913",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346913",
                "FirstPage": 1743,
                "LastPage": 1752,
                "PaperType": "J",
                "Abstract": "Previous studies on E-transaction time-series have mainly focused on finding temporal trends of transaction behavior. Interesting transactions that are time-stamped and situation-relevant may easily be obscured in a large amount of information. This paper proposes a visual analytics system, Visual Analysis of E-transaction Time-Series (VAET), that allows the analysts to interactively explore large transaction datasets for insights about time-varying transactions. With a set of analyst-determined training samples, VAET automatically estimates the saliency of each transaction in a large time-series using a probabilistic decision tree learner. It provides an effective time-of-saliency (TOS) map where the analysts can explore a large number of transactions at different time granularities. Interesting transactions are further encoded with KnotLines, a compact visual representation that captures both the temporal variations and the contextual connection of transactions. The analysts can thus explore, select, and investigate knotlines of interest. A case study and user study with a real E-transactions dataset (26 million records) demonstrate the effectiveness of VAET.",
                "AuthorNamesDeduped": "Cong Xie;Wei Chen 0001;Xinxin Huang;Yueqi Hu;Scott Barlowe;Jing Yang 0001",
                "AuthorNames": "Cong Xie;Wei Chen;Xinxin Huang;Yueqi Hu;Scott Barlowe;Jing Yang",
                "AuthorAffiliation": "State Key Lab of CAD&CG, Zhejiang University;Cyber Innovation Joint Research Center, Zhejiang University;State Key Lab of CAD&CG, Zhejiang University;Dept. of Computer Science, University of North Carolina, Charlotte;Western Carolina University;Dept. of Computer Science, University of North Carolina, Charlotte",
                "InternalReferences": "0.1109/tvcg.2009.123;10.1109/vast.2007.4389009;10.1109/tvcg.2012.212;10.1109/infvis.1995.528685;10.1109/vast.2012.6400494;10.1109/tvcg.2010.162;10.1109/tvcg.2009.180",
                "AuthorKeywords": "Time-Series, Visual Analytics, E-transaction",
                "AminerCitationCount": 62,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1325,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1263,
                "i": [
                    1263
                ]
            }
        },
        {
            "name": "Tera Marie Green",
            "value": 77,
            "numPapers": 7,
            "cluster": "5",
            "visible": 1,
            "index": 1438,
            "x": -40.72923602010169,
            "y": 377.08239064323715,
            "vy": 0,
            "vx": 0,
            "r": 1.0886586067933217,
            "node": {
                "Conference": "VAST",
                "Year": 2008,
                "Title": "Visual analytics for complex concepts using a human cognition model",
                "DOI": "10.1109/vast.2008.4677361",
                "Link": "http://dx.doi.org/10.1109/VAST.2008.4677361",
                "FirstPage": 91,
                "LastPage": 98,
                "PaperType": "C",
                "Abstract": "As the information being visualized and the process of understanding that information both become increasingly complex, it is necessary to develop new visualization approaches that facilitate the flow of human reasoning. In this paper, we endeavor to push visualization design a step beyond current user models by discussing a modeling framework of human ldquohigher cognition.rdquo Based on this cognition model, we present design guidelines for the development of visual interfaces designed to maximize the complementary cognitive strengths of both human and computer. Some of these principles are already being reflected in the better visual analytics designs, while others have not yet been applied or fully applied. But none of the guidelines have explained the deeper rationale that the model provides. Lastly, we discuss and assess these visual analytics guidelines through the evaluation of several visualization examples.",
                "AuthorNamesDeduped": "Tera Marie Green;William Ribarsky;Brian D. Fisher",
                "AuthorNames": "Tera Marie Green;William Ribarsky;Brian Fisher",
                "AuthorAffiliation": "Cahrlotten Visualization Center, North Carolina State University, Charlotte, USA;Cahrlotten Visualization Center, North Carolina State University, Charlotte, USA;School of Interactive Arts and Technology, Simon Fraser University, Canada",
                "InternalReferences": "0.1109/visual.2005.1532781;10.1109/vast.2006.261425;10.1109/tvcg.2007.70574;10.1109/vast.2007.4389006;10.1109/vast.2007.4389005;10.1109/vast.2007.4389009;10.1109/infvis.1995.528686",
                "AuthorKeywords": "visual analytics, cognition and perception theory, embodied cognition, visualization taxonomies and models",
                "AminerCitationCount": 113,
                "CitationCountCrossRef": 59,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 1302,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1991,
                "i": [
                    1991
                ]
            }
        },
        {
            "name": "Steve Kieffer",
            "value": 76,
            "numPapers": 14,
            "cluster": "2",
            "visible": 1,
            "index": 1439,
            "x": -224.76110666867476,
            "y": -305.667212715189,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "High-Quality Ultra-Compact Grid Layout of Grouped Networks",
                "DOI": "10.1109/tvcg.2015.2467251",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467251",
                "FirstPage": 339,
                "LastPage": 348,
                "PaperType": "J",
                "Abstract": "Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes we found these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks.",
                "AuthorNamesDeduped": "Vahan Yoghourdjian;Tim Dwyer;Graeme Gange;Steve Kieffer;Karsten Klein 0001;Kim Marriott",
                "AuthorNames": "Vahan Yoghourdjian;Tim Dwyer;Graeme Gange;Steve Kieffer;Karsten Klein;Kim Marriott",
                "AuthorAffiliation": "Monash University;Monash University;The University of Melbourne;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2008.117;10.1109/tvcg.2013.151;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/infvis.2003.1249009;10.1109/tvcg.2015.2467451;10.1109/tvcg.2012.245",
                "AuthorKeywords": "Network visualization, graph drawing, power graph, optimization, large-neighborhood search",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 995,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1032,
                "i": [
                    1032
                ]
            }
        },
        {
            "name": "Michael Wybrow",
            "value": 130,
            "numPapers": 17,
            "cluster": "2",
            "visible": 1,
            "index": 1440,
            "x": 372.33634153767537,
            "y": 73.59109165068529,
            "vy": 0,
            "vx": 0,
            "r": 1.1496833621185953,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "HOLA: Human-like Orthogonal Network Layout",
                "DOI": "10.1109/tvcg.2015.2467451",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467451",
                "FirstPage": 349,
                "LastPage": 358,
                "PaperType": "J",
                "Abstract": "Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new “human-centred” methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.",
                "AuthorNamesDeduped": "Steve Kieffer;Tim Dwyer;Kim Marriott;Michael Wybrow",
                "AuthorNames": "Steve Kieffer;Tim Dwyer;Kim Marriott;Michael Wybrow",
                "AuthorAffiliation": "Monash University and NICTA Victoria;Monash University;Monash University and NICTA Victoria;Monash University",
                "InternalReferences": "0.1109/tvcg.2006.120;10.1109/tvcg.2012.208;10.1109/tvcg.2013.151;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/tvcg.2008.141;10.1109/tvcg.2006.147;10.1109/tvcg.2012.245;10.1109/tvcg.2008.155",
                "AuthorKeywords": "Graph layout, orthogonal layout, automatic layout algorithms, user-generated layout, graph-drawing aesthetics",
                "AminerCitationCount": 80,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 1837,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1015,
                "i": [
                    1015
                ]
            }
        },
        {
            "name": "Mark W. Jones",
            "value": 123,
            "numPapers": 30,
            "cluster": "6",
            "visible": 1,
            "index": 1441,
            "x": -324.371830009071,
            "y": 197.31425669871481,
            "vy": 0,
            "vx": 0,
            "r": 1.1416234887737478,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "TimeNotes: A Study on Effective Chart Visualization and Interaction Techniques for Time-Series Data",
                "DOI": "10.1109/tvcg.2015.2467751",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467751",
                "FirstPage": 549,
                "LastPage": 558,
                "PaperType": "J",
                "Abstract": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.",
                "AuthorNamesDeduped": "James S. Walker;Rita Borgo;Mark W. Jones",
                "AuthorNames": "James Walker;Rita Borgo;Mark W. Jones",
                "AuthorAffiliation": "Swansea University;Swansea University;Swansea University",
                "InternalReferences": "0.1109/tvcg.2009.181;10.1109/tvcg.2014.2346428;10.1109/infvis.2005.1532148;10.1109/tvcg.2011.160;10.1109/tvcg.2010.162;10.1109/tvcg.2010.193;10.1109/infvis.1999.801860;10.1109/tvcg.2011.195",
                "AuthorKeywords": "Time-series Exploration, Focus+Context, Lens, Interaction Techniques",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 3246,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1019,
                "i": [
                    1019
                ]
            }
        },
        {
            "name": "Julie Lein",
            "value": 27,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 1442,
            "x": 105.9345525029148,
            "y": -364.72985974006457,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Poemage: Visualizing the Sonic Topology of a Poem",
                "DOI": "10.1109/tvcg.2015.2467811",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467811",
                "FirstPage": 439,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "The digital humanities have experienced tremendous growth within the last decade, mostly in the context of developing computational tools that support what is called distant reading - collecting and analyzing huge amounts of textual data for synoptic evaluation. On the other end of the spectrum is a practice at the heart of the traditional humanities, close reading - the careful, in-depth analysis of a single text in order to extract, engage, and even generate as much productive meaning as possible. The true value of computation to close reading is still very much an open question. During a two-year design study, we explored this question with several poetry scholars, focusing on an investigation of sound and linguistic devices in poetry. The contributions of our design study include a problem characterization and data abstraction of the use of sound in poetry as well as Poemage, a visualization tool for interactively exploring the sonic topology of a poem. The design of Poemage is grounded in the evaluation of a series of technology probes we deployed to our poetry collaborators, and we validate the final design with several case studies that illustrate the disruptive impact technology can have on poetry scholarship. Finally, we also contribute a reflection on the challenges we faced conducting visualization research in literary studies.",
                "AuthorNamesDeduped": "Nina McCurdy;Julie Lein;Katherine Coles;Miriah D. Meyer",
                "AuthorNames": "Nina McCurdy;Julie Lein;Katharine Coles;Miriah Meyer",
                "AuthorAffiliation": "University of Utah School of Computing;University of Utah Department of English;University of Utah Department of English;University of Utah School of Computing",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2009.122;10.1109/vast.2009.5333443;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/infvis.2005.1532126;10.1109/tvcg.2012.213;10.1109/vast.2007.4389006;10.1109/tvcg.2009.165;10.1109/tvcg.2009.171;10.1109/infvis.2002.1173155;10.1109/tvcg.2008.172;10.1109/infvis.1995.528686",
                "AuthorKeywords": "Visualization in the humanities, design studies, text and document data, graph/network data",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1378,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1023,
                "i": [
                    1023
                ]
            }
        },
        {
            "name": "Katherine Coles",
            "value": 27,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 1443,
            "x": 168.31694993367472,
            "y": 340.6162127160492,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Poemage: Visualizing the Sonic Topology of a Poem",
                "DOI": "10.1109/tvcg.2015.2467811",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467811",
                "FirstPage": 439,
                "LastPage": 448,
                "PaperType": "J",
                "Abstract": "The digital humanities have experienced tremendous growth within the last decade, mostly in the context of developing computational tools that support what is called distant reading - collecting and analyzing huge amounts of textual data for synoptic evaluation. On the other end of the spectrum is a practice at the heart of the traditional humanities, close reading - the careful, in-depth analysis of a single text in order to extract, engage, and even generate as much productive meaning as possible. The true value of computation to close reading is still very much an open question. During a two-year design study, we explored this question with several poetry scholars, focusing on an investigation of sound and linguistic devices in poetry. The contributions of our design study include a problem characterization and data abstraction of the use of sound in poetry as well as Poemage, a visualization tool for interactively exploring the sonic topology of a poem. The design of Poemage is grounded in the evaluation of a series of technology probes we deployed to our poetry collaborators, and we validate the final design with several case studies that illustrate the disruptive impact technology can have on poetry scholarship. Finally, we also contribute a reflection on the challenges we faced conducting visualization research in literary studies.",
                "AuthorNamesDeduped": "Nina McCurdy;Julie Lein;Katherine Coles;Miriah D. Meyer",
                "AuthorNames": "Nina McCurdy;Julie Lein;Katharine Coles;Miriah Meyer",
                "AuthorAffiliation": "University of Utah School of Computing;University of Utah Department of English;University of Utah Department of English;University of Utah School of Computing",
                "InternalReferences": "0.1109/tvcg.2011.186;10.1109/tvcg.2009.122;10.1109/vast.2009.5333443;10.1109/tvcg.2008.135;10.1109/tvcg.2011.233;10.1109/infvis.2005.1532126;10.1109/tvcg.2012.213;10.1109/vast.2007.4389006;10.1109/tvcg.2009.165;10.1109/tvcg.2009.171;10.1109/infvis.2002.1173155;10.1109/tvcg.2008.172;10.1109/infvis.1995.528686",
                "AuthorKeywords": "Visualization in the humanities, design studies, text and document data, graph/network data",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1378,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1023,
                "i": [
                    1023
                ]
            }
        },
        {
            "name": "Stefania Forlini",
            "value": 33,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 1444,
            "x": -354.3172911012876,
            "y": -137.51093493480954,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Speculative Practices: Utilizing InfoVis to Explore Untapped Literary Collections",
                "DOI": "10.1109/tvcg.2015.2467452",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467452",
                "FirstPage": 429,
                "LastPage": 438,
                "PaperType": "J",
                "Abstract": "In this paper we exemplify how information visualization supports speculative thinking, hypotheses testing, and preliminary interpretation processes as part of literary research. While InfoVis has become a buzz topic in the digital humanities, skepticism remains about how effectively it integrates into and expands on traditional humanities research approaches. From an InfoVis perspective, we lack case studies that show the specific design challenges that make literary studies and humanities research at large a unique application area for information visualization. We examine these questions through our case study of the Speculative W@nderverse, a visualization tool that was designed to enable the analysis and exploration of an untapped literary collection consisting of thousands of science fiction short stories. We present the results of two empirical studies that involved general-interest readers and literary scholars who used the evolving visualization prototype as part of their research for over a year. Our findings suggest a design space for visualizing literary collections that is defined by (1) their academic and public relevance, (2) the tension between qualitative vs. quantitative methods of interpretation, (3) result-vs. process-driven approaches to InfoVis, and (4) the unique material and visual qualities of cultural collections. Through the Speculative W@nderverse we demonstrate how visualization can bridge these sometimes contradictory perspectives by cultivating curiosity and providing entry points into literary collections while, at the same time, supporting multiple aspects of humanities research processes.",
                "AuthorNamesDeduped": "Uta Hinrichs;Stefania Forlini;Bridget Moynihan",
                "AuthorNames": "Uta Hinrichs;Stefania Forlini;Bridget Moynihan",
                "AuthorAffiliation": "SACHI Group, University of St Andrews, UK;Department of English, University of Calgary;Department of English, University of Calgary",
                "InternalReferences": "0.1109/tvcg.2012.272;10.1109/tvcg.2014.2346431;10.1109/tvcg.2008.175;10.1109/tvcg.2008.127;10.1109/tvcg.2007.70541;10.1109/tvcg.2012.213;10.1109/vast.2007.4389006;10.1109/tvcg.2009.165;10.1109/tvcg.2007.70577;10.1109/tvcg.2009.171;10.1109/tvcg.2008.172;10.1109/vast.2008.4677370;10.1109/vast.2009.5333443",
                "AuthorKeywords": "Digital Humanities, Interlinked Visualization, Literary Studies, Cultural Collections, Science Fiction",
                "AminerCitationCount": 54,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1029,
                "i": [
                    1029
                ]
            }
        },
        {
            "name": "Bridget Moynihan",
            "value": 33,
            "numPapers": 12,
            "cluster": "1",
            "visible": 1,
            "index": 1445,
            "x": 354.2724097340563,
            "y": -137.98934633233446,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Speculative Practices: Utilizing InfoVis to Explore Untapped Literary Collections",
                "DOI": "10.1109/tvcg.2015.2467452",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467452",
                "FirstPage": 429,
                "LastPage": 438,
                "PaperType": "J",
                "Abstract": "In this paper we exemplify how information visualization supports speculative thinking, hypotheses testing, and preliminary interpretation processes as part of literary research. While InfoVis has become a buzz topic in the digital humanities, skepticism remains about how effectively it integrates into and expands on traditional humanities research approaches. From an InfoVis perspective, we lack case studies that show the specific design challenges that make literary studies and humanities research at large a unique application area for information visualization. We examine these questions through our case study of the Speculative W@nderverse, a visualization tool that was designed to enable the analysis and exploration of an untapped literary collection consisting of thousands of science fiction short stories. We present the results of two empirical studies that involved general-interest readers and literary scholars who used the evolving visualization prototype as part of their research for over a year. Our findings suggest a design space for visualizing literary collections that is defined by (1) their academic and public relevance, (2) the tension between qualitative vs. quantitative methods of interpretation, (3) result-vs. process-driven approaches to InfoVis, and (4) the unique material and visual qualities of cultural collections. Through the Speculative W@nderverse we demonstrate how visualization can bridge these sometimes contradictory perspectives by cultivating curiosity and providing entry points into literary collections while, at the same time, supporting multiple aspects of humanities research processes.",
                "AuthorNamesDeduped": "Uta Hinrichs;Stefania Forlini;Bridget Moynihan",
                "AuthorNames": "Uta Hinrichs;Stefania Forlini;Bridget Moynihan",
                "AuthorAffiliation": "SACHI Group, University of St Andrews, UK;Department of English, University of Calgary;Department of English, University of Calgary",
                "InternalReferences": "0.1109/tvcg.2012.272;10.1109/tvcg.2014.2346431;10.1109/tvcg.2008.175;10.1109/tvcg.2008.127;10.1109/tvcg.2007.70541;10.1109/tvcg.2012.213;10.1109/vast.2007.4389006;10.1109/tvcg.2009.165;10.1109/tvcg.2007.70577;10.1109/tvcg.2009.171;10.1109/tvcg.2008.172;10.1109/vast.2008.4677370;10.1109/vast.2009.5333443",
                "AuthorKeywords": "Digital Humanities, Interlinked Visualization, Literary Studies, Cultural Collections, Science Fiction",
                "AminerCitationCount": 54,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1029,
                "i": [
                    1029
                ]
            }
        },
        {
            "name": "Jocelyn Ng",
            "value": 23,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1446,
            "x": -168.07709303979175,
            "y": 341.17457524747243,
            "vy": 0,
            "vx": 0,
            "r": 1.0264824409902131,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Matches, Mismatches, and Methods: Multiple-View Workflows for Energy Portfolio Analysis",
                "DOI": "10.1109/tvcg.2015.2466971",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2466971",
                "FirstPage": 449,
                "LastPage": 458,
                "PaperType": "J",
                "Abstract": "The energy performance of large building portfolios is challenging to analyze and monitor, as current analysis tools are not scalable or they present derived and aggregated data at too coarse of a level. We conducted a visualization design study, beginning with a thorough work domain analysis and a characterization of data and task abstractions. We describe generalizable visual encoding design choices for time-oriented data framed in terms of matches and mismatches, as well as considerations for workflow design. Our designs address several research questions pertaining to scalability, view coordination, and the inappropriateness of line charts for derived and aggregated data due to a combination of data semantics and domain convention. We also present guidelines relating to familiarity and trust, as well as methodological considerations for visualization design studies. Our designs were adopted by our collaborators and incorporated into the design of an energy analysis software application that will be deployed to tens of thousands of energy workers in their client base.",
                "AuthorNamesDeduped": "Matthew Brehmer;Jocelyn Ng;Kevin Tate;Tamara Munzner",
                "AuthorNames": "Matthew Brehmer;Jocelyn Ng;Kevin Tate;Tamara Munzner",
                "AuthorAffiliation": "University of British Columbia;EnerNOC, Inc.;EnerNOC, Inc.;University of British Columbia",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2008.166;10.1109/tvcg.2013.145;10.1109/tvcg.2013.173;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2011.209;10.1109/tvcg.2014.2346331;10.1109/tvcg.2014.2346578;10.1109/tvcg.2009.111;10.1109/tvcg.2011.196;10.1109/tvcg.2012.213;10.1109/infvis.1999.801851;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Design study, design methodologies, time series data, task and requirements analysis, coordinated and multiple views",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 918,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1031,
                "i": [
                    1031
                ]
            }
        },
        {
            "name": "Kevin Tate",
            "value": 23,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1447,
            "x": -106.56211208230624,
            "y": -365.2321402458415,
            "vy": 0,
            "vx": 0,
            "r": 1.0264824409902131,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Matches, Mismatches, and Methods: Multiple-View Workflows for Energy Portfolio Analysis",
                "DOI": "10.1109/tvcg.2015.2466971",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2466971",
                "FirstPage": 449,
                "LastPage": 458,
                "PaperType": "J",
                "Abstract": "The energy performance of large building portfolios is challenging to analyze and monitor, as current analysis tools are not scalable or they present derived and aggregated data at too coarse of a level. We conducted a visualization design study, beginning with a thorough work domain analysis and a characterization of data and task abstractions. We describe generalizable visual encoding design choices for time-oriented data framed in terms of matches and mismatches, as well as considerations for workflow design. Our designs address several research questions pertaining to scalability, view coordination, and the inappropriateness of line charts for derived and aggregated data due to a combination of data semantics and domain convention. We also present guidelines relating to familiarity and trust, as well as methodological considerations for visualization design studies. Our designs were adopted by our collaborators and incorporated into the design of an energy analysis software application that will be deployed to tens of thousands of energy workers in their client base.",
                "AuthorNamesDeduped": "Matthew Brehmer;Jocelyn Ng;Kevin Tate;Tamara Munzner",
                "AuthorNames": "Matthew Brehmer;Jocelyn Ng;Kevin Tate;Tamara Munzner",
                "AuthorAffiliation": "University of British Columbia;EnerNOC, Inc.;EnerNOC, Inc.;University of British Columbia",
                "InternalReferences": "0.1109/tvcg.2011.185;10.1109/tvcg.2013.124;10.1109/tvcg.2008.166;10.1109/tvcg.2013.145;10.1109/tvcg.2013.173;10.1109/tvcg.2010.162;10.1109/tvcg.2007.70583;10.1109/tvcg.2011.209;10.1109/tvcg.2014.2346331;10.1109/tvcg.2014.2346578;10.1109/tvcg.2009.111;10.1109/tvcg.2011.196;10.1109/tvcg.2012.213;10.1109/infvis.1999.801851;10.1109/infvis.2005.1532122",
                "AuthorKeywords": "Design study, design methodologies, time series data, task and requirements analysis, coordinated and multiple views",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 918,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1031,
                "i": [
                    1031
                ]
            }
        },
        {
            "name": "Vahan Yoghourdjian",
            "value": 27,
            "numPapers": 6,
            "cluster": "2",
            "visible": 1,
            "index": 1448,
            "x": 325.39869295461995,
            "y": 197.39729132747735,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "High-Quality Ultra-Compact Grid Layout of Grouped Networks",
                "DOI": "10.1109/tvcg.2015.2467251",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467251",
                "FirstPage": 339,
                "LastPage": 348,
                "PaperType": "J",
                "Abstract": "Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes we found these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks.",
                "AuthorNamesDeduped": "Vahan Yoghourdjian;Tim Dwyer;Graeme Gange;Steve Kieffer;Karsten Klein 0001;Kim Marriott",
                "AuthorNames": "Vahan Yoghourdjian;Tim Dwyer;Graeme Gange;Steve Kieffer;Karsten Klein;Kim Marriott",
                "AuthorAffiliation": "Monash University;Monash University;The University of Melbourne;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2008.117;10.1109/tvcg.2013.151;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/infvis.2003.1249009;10.1109/tvcg.2015.2467451;10.1109/tvcg.2012.245",
                "AuthorKeywords": "Network visualization, graph drawing, power graph, optimization, large-neighborhood search",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 995,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1032,
                "i": [
                    1032
                ]
            }
        },
        {
            "name": "Graeme Gange",
            "value": 27,
            "numPapers": 6,
            "cluster": "2",
            "visible": 1,
            "index": 1449,
            "x": -373.4076514553998,
            "y": 74.27466482295718,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "High-Quality Ultra-Compact Grid Layout of Grouped Networks",
                "DOI": "10.1109/tvcg.2015.2467251",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467251",
                "FirstPage": 339,
                "LastPage": 348,
                "PaperType": "J",
                "Abstract": "Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes we found these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks.",
                "AuthorNamesDeduped": "Vahan Yoghourdjian;Tim Dwyer;Graeme Gange;Steve Kieffer;Karsten Klein 0001;Kim Marriott",
                "AuthorNames": "Vahan Yoghourdjian;Tim Dwyer;Graeme Gange;Steve Kieffer;Karsten Klein;Kim Marriott",
                "AuthorAffiliation": "Monash University;Monash University;The University of Melbourne;Monash University;Monash University;Monash University",
                "InternalReferences": "0.1109/tvcg.2008.117;10.1109/tvcg.2013.151;10.1109/tvcg.2006.156;10.1109/tvcg.2009.109;10.1109/infvis.2003.1249009;10.1109/tvcg.2015.2467451;10.1109/tvcg.2012.245",
                "AuthorKeywords": "Network visualization, graph drawing, power graph, optimization, large-neighborhood search",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 995,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1032,
                "i": [
                    1032
                ]
            }
        },
        {
            "name": "Renata G. Raidou",
            "value": 0,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1450,
            "x": 225.24502313284273,
            "y": -307.106951327848,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "InfoVis",
                "Year": 2015,
                "Title": "Orientation-Enhanced Parallel Coordinate Plots",
                "DOI": "10.1109/tvcg.2015.2467872",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467872",
                "FirstPage": 589,
                "LastPage": 598,
                "PaperType": "J",
                "Abstract": "Parallel Coordinate Plots (PCPs) is one of the most powerful techniques for the visualization of multivariate data. However, for large datasets, the representation suffers from clutter due to overplotting. In this case, discerning the underlying data information and selecting specific interesting patterns can become difficult. We propose a new and simple technique to improve the display of PCPs by emphasizing the underlying data structure. Our Orientation-enhanced Parallel Coordinate Plots (OPCPs) improve pattern and outlier discernibility by visually enhancing parts of each PCP polyline with respect to its slope. This enhancement also allows us to introduce a novel and efficient selection method, the Orientation-enhanced Brushing (O-Brushing). Our solution is particularly useful when multiple patterns are present or when the view on certain patterns is obstructed by noise. We present the results of our approach with several synthetic and real-world datasets. Finally, we conducted a user evaluation, which verifies the advantages of the OPCPs in terms of discernibility of information in complex data. It also confirms that O-Brushing eases the selection of data patterns in PCPs and reduces the amount of necessary user interactions compared to state-of-the-art brushing techniques.",
                "AuthorNamesDeduped": "Renata G. Raidou;Martin Eisemann;Marcel Breeuwer;Elmar Eisemann;Anna Vilanova",
                "AuthorNames": "Renata Georgia Raidou;Martin Eisemann;Marcel Breeuwer;Elmar Eisemann;Anna Vilanova",
                "AuthorAffiliation": "Eindhoven University of Technology and Delft University of Technology;TH Köln and Delft University of Technology;Eindhoven University of Technology and Philips Healthcare;Delft University of Technology;Delft University of Technology",
                "InternalReferences": "0.1109/infvis.1998.729559;10.1109/infvis.2004.68;10.1109/tvcg.2006.138;10.1109/tvcg.2007.70535;10.1109/infvis.2005.1532141;10.1109/visual.1999.809866;10.1109/tvcg.2011.166;10.1109/tvcg.2014.2346979;10.1109/infvis.2002.1173157;10.1109/infvis.2005.1532138;10.1109/tvcg.2009.153;10.1109/visual.1995.485139;10.1109/tvcg.2006.170;10.1109/infvis.2004.15;10.1109/visual.1994.346302;10.1109/infvis.2003.1249008;10.1109/visual.1996.567800;10.1109/infvis.2003.1249015;10.1109/tvcg.2009.179",
                "AuthorKeywords": "Parallel Coordinates, Orientation-enhanced Parallel Coordinates, Brushing, Orientation-enhanced Brushing, Data Readability, Data Selection",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 898,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1033,
                "i": [
                    1033
                ]
            }
        },
        {
            "name": "Kai Bürger",
            "value": 111,
            "numPapers": 42,
            "cluster": "11",
            "visible": 1,
            "index": 1451,
            "x": 41.373349306352935,
            "y": 378.7324200107175,
            "vy": 0,
            "vx": 0,
            "r": 1.1278065630397236,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Turbulence Visualization at the Terascale on Desktop PCs",
                "DOI": "10.1109/tvcg.2012.274",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.274",
                "FirstPage": 2169,
                "LastPage": 2177,
                "PaperType": "J",
                "Abstract": "Despite the ongoing efforts in turbulence research, the universal properties of the turbulence small-scale structure and the relationships between small- and large-scale turbulent motions are not yet fully understood. The visually guided exploration of turbulence features, including the interactive selection and simultaneous visualization of multiple features, can further progress our understanding of turbulence. Accomplishing this task for flow fields in which the full turbulence spectrum is well resolved is challenging on desktop computers. This is due to the extreme resolution of such fields, requiring memory and bandwidth capacities going beyond what is currently available. To overcome these limitations, we present a GPU system for feature-based turbulence visualization that works on a compressed flow field representation. We use a wavelet-based compression scheme including run-length and entropy encoding, which can be decoded on the GPU and embedded into brick-based volume ray-casting. This enables a drastic reduction of the data to be streamed from disk to GPU memory. Our system derives turbulence properties directly from the velocity gradient tensor, and it either renders these properties in turn or generates and renders scalar feature volumes. The quality and efficiency of the system is demonstrated in the visualization of two unsteady turbulence simulations, each comprising a spatio-temporal resolution of 10244. On a desktop computer, the system can visualize each time step in 5 seconds, and it achieves about three times this rate for the visualization of a scalar feature volume.",
                "AuthorNamesDeduped": "Marc Treib;Kai Bürger;Florian Reichl;Charles Meneveau;Alexander S. Szalay;Rüdiger Westermann",
                "AuthorNames": "Marc Treib;Kai Bürger;Florian Reichl;Charles Meneveau;Alex Szalay;Rüdiger Westermann",
                "AuthorAffiliation": "Technische Universität München, Munich, Germany;Technische Universität München, Munich, Germany;Technische Universität München, Munich, Germany;Johns Hopkins University, Baltimore, MD, USA;Johns Hopkins University, Baltimore, MD, USA;Technische Universität München, Munich, Germany",
                "InternalReferences": "0.1109/visual.2002.1183757;10.1109/visual.2001.964520;10.1109/tvcg.2006.143;10.1109/visual.2005.1532808;10.1109/visual.2003.1250384;10.1109/visual.2001.964531;10.1109/visual.2004.55;10.1109/visual.2003.1250385",
                "AuthorKeywords": "Visualization system and toolkit design, vector fields, volume rendering, data streaming, data compression",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 689,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1455,
                "i": [
                    1455
                ]
            }
        },
        {
            "name": "Roger Crawfis",
            "value": 293,
            "numPapers": 59,
            "cluster": "6",
            "visible": 1,
            "index": 1452,
            "x": -286.4361120478278,
            "y": -251.40476072446248,
            "vy": 0,
            "vx": 0,
            "r": 1.3373632700057572,
            "node": {
                "Conference": "Vis",
                "Year": 1993,
                "Title": "Texture splats for 3D scalar and vector field visualization",
                "DOI": "10.1109/visual.1993.398877",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1993.398877",
                "FirstPage": 261,
                "LastPage": 266,
                "PaperType": "C",
                "Abstract": "Volume visualization is becoming an important tool for understanding large 3D data sets. A popular technique for volume rendering is known as splatting. With new hardware architectures offering substantial improvements in the performance of rendering texture mapped objects, we present textured splats. An ideal reconstruction function for 3D signals is developed which can be used as a texture map for a splat. Extensions to the basic splatting technique are then developed to additionally represent vector fields.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Roger Crawfis;Nelson L. Max",
                "AuthorNames": "R.A. Crawfis;N. Max",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, Livermore, CA, USA;Lawrence Livemore National Laboratory, Livermore, CA, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 75,
                "PubsCitedCrossRef": 6,
                "DownloadsXplore": 428,
                "Award": "TT;BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3529,
                "i": [
                    3529
                ]
            }
        },
        {
            "name": "Nelson L. Max",
            "value": 200,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1453,
            "x": 381.1616915152458,
            "y": -8.109557400780133,
            "vy": 0,
            "vx": 0,
            "r": 1.2302820955670697,
            "node": {
                "Conference": "Vis",
                "Year": 1992,
                "Title": "A characterization of the scientific data analysis process",
                "DOI": "10.1109/visual.1992.235203",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1992.235203",
                "FirstPage": 235,
                "LastPage": 242,
                "PaperType": "C",
                "Abstract": "It is shown how data visualization fits into the broader process of scientific data analysis. Scientists from several disciplines were observed while they analyzed their own data. Examination of the observations exposed process elements outside conventional image viewing. For example, analysts queried for quantitative information, made a variety of comparisons, applied math, managed data, and kept records. The characterization of scientific data analysis reveals activity beyond that traditionally supported by computer. It offers an understanding which has the potential to be applied to many future designs, and suggests specific recommendations for improving the support of this important aspect of scientific computing.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "R. R. Springmeyer;Meera Blattner;Nelson L. Max",
                "AuthorNames": "R.R. Springmeyer;M.M. Blattner;N.L. Max",
                "AuthorAffiliation": "Lawrence Livermore National Laboratory, University of California, Livermore, CA, USA;Lawrence Livermore National Laboratory, University of California, Livermore, CA, USA;Lawrence Livermore National Laboratory, University of California, Livermore, CA, USA",
                "InternalReferences": "0.1109/visual.1990.146399",
                "AuthorKeywords": null,
                "AminerCitationCount": 167,
                "CitationCountCrossRef": 44,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 180,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3586,
                "i": [
                    3586
                ]
            }
        },
        {
            "name": "Deborah Silver",
            "value": 208,
            "numPapers": 40,
            "cluster": "6",
            "visible": 1,
            "index": 1454,
            "x": -275.67362358756424,
            "y": 263.54136915881344,
            "vy": 0,
            "vx": 0,
            "r": 1.2394933793897525,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Feature Aligned Volume Manipulation for Illustration and Visualization",
                "DOI": "10.1109/tvcg.2006.144",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.144",
                "FirstPage": 1069,
                "LastPage": 1076,
                "PaperType": "J",
                "Abstract": "In this paper we describe a GPU-based technique for creating illustrative visualization through interactive manipulation of volumetric models. It is partly inspired by medical illustrations, where it is common to depict cuts and deformation in order to provide a better understanding of anatomical and biological structures or surgical processes, and partly motivated by the need for a real-time solution that supports the specification and visualization of such illustrative manipulation. We propose two new feature aligned techniques, namely surface alignment and segment alignment, and compare them with the axis-aligned techniques which were reported in previous work on volume manipulation. We also present a mechanism for defining features using texture volumes, and methods for computing correct normals for the deformed volume in respect to different alignments. We describe a GPU-based implementation to achieve real-time performance of the techniques and a collection of manipulation operators including peelers, retractors, pliers and dilators which are adaptations of the metaphors and tools used in surgical procedures and medical illustrations. Our approach is directly applicable in medical and biological illustration, and we demonstrate how it works as an interactive tool for focus+context visualization, as well as a generic technique for volume graphics",
                "AuthorNamesDeduped": "Carlos D. Correa;Deborah Silver;Min Chen 0001",
                "AuthorNames": "Carlos Correa;Deborah Silver;Min Chen",
                "AuthorAffiliation": "Department of Electrical and Computer Engineering, State University of New Jersey, Rutgers, USA;Department of Electrical and Computer Engineering, State University of New Jersey, Rutgers, USA;Department of Computer Science, University of Wales, Swansea, UK",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/visual.2000.885694",
                "AuthorKeywords": "Illustrative visualization, Illustrative manipulation, GPU computing, volume rendering, volume deformation, computerassisted medical illustration",
                "AminerCitationCount": 125,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 680,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2280,
                "i": [
                    2280
                ]
            }
        },
        {
            "name": "Xin Wang",
            "value": 69,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1455,
            "x": 25.26219321233186,
            "y": -380.67285376567474,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Tracking scalar features in unstructured datasets",
                "DOI": "10.1109/visual.1998.745288",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745288",
                "FirstPage": 79,
                "LastPage": 86,
                "PaperType": "C",
                "Abstract": "3D time-varying unstructured and structured data sets are difficult to visualize and analyze because of the immense amount of data involved. These data sets contain many evolving amorphous regions, and standard visualization techniques provide no facilities to aid the scientist to follow regions of interest. In this paper, we present a basic framework for the visualization of time-varying data sets, and a new algorithm and data structure to track volume features in unstructured scalar data sets. The algorithm and data structure are general and can be used for structured, curvilinear, adaptive and hybrid grids as well. The features tracked can be any type of connected regions. Examples are shown from ongoing research.",
                "AuthorNamesDeduped": "Deborah Silver;Xin Wang",
                "AuthorNames": "D. Silver;X. Wang",
                "AuthorAffiliation": "Department of Electrical and Computer Engineering and CAIP Center, Rutgers University, Piscataway, NJ, USA;Department of Electrical and Computer Engineering and CAIP Center, Rutgers University, Piscataway, NJ, USA",
                "InternalReferences": "0.1109/visual.1996.567807;10.1109/visual.1995.480809;10.1109/visual.1995.480789;10.1109/visual.1997.663886",
                "AuthorKeywords": "Scientific Visualization, Time-varying Visualization,Feature Tracking, Computer Vision, CFD",
                "AminerCitationCount": 135,
                "CitationCountCrossRef": 26,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 240,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3160,
                "i": [
                    3160
                ]
            }
        },
        {
            "name": "Steffen Oeltze-Jafra",
            "value": 33,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1456,
            "x": 238.5951840364203,
            "y": 297.8629519672205,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Interactive Visual Analysis of Image-Centric Cohort Study Data",
                "DOI": "10.1109/tvcg.2014.2346591",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346591",
                "FirstPage": 1673,
                "LastPage": 1682,
                "PaperType": "J",
                "Abstract": "Epidemiological population studies impose information about a set of subjects (a cohort) to characterize disease-specific risk factors. Cohort studies comprise heterogenous variables describing the medical condition as well as demographic and lifestyle factors and, more recently, medical image data. We propose an Interactive Visual Analysis (IVA) approach that enables epidemiologists to rapidly investigate the entire data pool for hypothesis validation and generation. We incorporate image data, which involves shape-based object detection and the derivation of attributes describing the object shape. The concurrent investigation of image-based and non-image data is realized in a web-based multiple coordinated view system, comprising standard views from information visualization and epidemiological data representations such as pivot tables. The views are equipped with brushing facilities and augmented by 3D shape renderings of the segmented objects, e.g., each bar in a histogram is overlaid with a mean shape of the associated subgroup of the cohort. We integrate an overview visualization, clustering of variables and object shape for data-driven subgroup definition and statistical key figures for measuring the association between variables. We demonstrate the IVA approach by validating and generating hypotheses related to lower back pain as part of a qualitative evaluation.",
                "AuthorNamesDeduped": "Paul Klemm;Steffen Oeltze-Jafra;Kai Lawonn;Katrin Hegenscheid;Henry Völzke;Bernhard Preim",
                "AuthorNames": "Paul Klemm;Steffen Oeltze-Jafra;Kai Lawonn;Katrin Hegenscheid;Henry Völzke;Bernhard Preim",
                "AuthorAffiliation": "Otto-von-Guericke University Magdeburg, Germany;Otto-von-Guericke University Magdeburg, Germany;Otto-von-Guericke University Magdeburg, Germany;Ernst-Moritz-Arndt University Greifswald, Germany;Ernst-Moritz-Arndt University Greifswald, Germany;Otto-von-Guericke University Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2013.160;10.1109/tvcg.2011.185;10.1109/visual.2000.885739;10.1109/tvcg.2011.217;10.1109/tvcg.2007.70569",
                "AuthorKeywords": "Interactive Visual Analysis, Epidemiology, Spine",
                "AminerCitationCount": 58,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 947,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1266,
                "i": [
                    1266
                ]
            }
        },
        {
            "name": "Gábor Janiga",
            "value": 9,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1457,
            "x": -377.2656406239032,
            "y": -58.48620696058117,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Automatic Detection and Visualization of Qualitative Hemodynamic Characteristics in Cerebral Aneurysms",
                "DOI": "10.1109/tvcg.2012.202",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.202",
                "FirstPage": 2178,
                "LastPage": 2187,
                "PaperType": "J",
                "Abstract": "Cerebral aneurysms are a pathological vessel dilatation that bear a high risk of rupture. For the understanding and evaluation of the risk of rupture, the analysis of hemodynamic information plays an important role. Besides quantitative hemodynamic information, also qualitative flow characteristics, e.g., the inflow jet and impingement zone are correlated with the risk of rupture. However, the assessment of these two characteristics is currently based on an interactive visual investigation of the flow field, obtained by computational fluid dynamics (CFD) or blood flow measurements. We present an automatic and robust detection as well as an expressive visualization of these characteristics. The detection can be used to support a comparison, e.g., of simulation results reflecting different treatment options. Our approach utilizes local streamline properties to formalize the inflow jet and impingement zone. We extract a characteristic seeding curve on the ostium, on which an inflow jet boundary contour is constructed. Based on this boundary contour we identify the impingement zone. Furthermore, we present several visualization techniques to depict both characteristics expressively. Thereby, we consider accuracy and robustness of the extracted characteristics, minimal visual clutter and occlusions. An evaluation with six domain experts confirms that our approach detects both hemodynamic characteristics reasonably.",
                "AuthorNamesDeduped": "Rocco Gasteiger;Dirk J. Lehmann;Roy van Pelt;Gábor Janiga;Oliver Beuing;Anna Vilanova;Holger Theisel;Bernhard Preim",
                "AuthorNames": "Rocco Gasteiger;Dirk J. Lehmann;Roy van Pelt;Gábor Janiga;Oliver Beuing;Anna Vilanova;Holger Theisel;Bernhard Preim",
                "AuthorAffiliation": "Department of Simulation and Graphics, group Visualization, University of Magdeburg, Germany;Department of Simulation and Graphics, group Visual Computing, University of Magdeburg, Germany;Department of Biomedical Engineering, group of Biomedical Image Analysis, Eindhoven University of Technology, Netherlands;Institute of Fluid Dynamics and Thermodynamics, University of Magdeburg, Germany;Department of Neuroradiology, University Hospital Magdeburg, Germany;Department of Biomedical Engineering, group of Biomedical Image Analysis, Eindhoven University of Technology, Netherlands;Department of Simulation and Graphics, group Visual Computing, University of Magdeburg, Germany;Department of Simulation and Graphics, group Visualization, University of Magdeburg, Germany",
                "InternalReferences": "0.1109/tvcg.2011.215;10.1109/tvcg.2011.159;10.1109/tvcg.2011.243;10.1109/tvcg.2009.138;10.1109/tvcg.2010.153;10.1109/tvcg.2010.173",
                "AuthorKeywords": "Cerebral aneurysm, Hemodynamic, Inflow jet, Impingement zone, Visualization, Glyph",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 527,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1458,
                "i": [
                    1458
                ]
            }
        },
        {
            "name": "Maik Schulze",
            "value": 30,
            "numPapers": 10,
            "cluster": "11",
            "visible": 1,
            "index": 1458,
            "x": 317.7997734665616,
            "y": -211.7859862800232,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Rotation Invariant Vortices for Flow Visualization",
                "DOI": "10.1109/tvcg.2015.2467200",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467200",
                "FirstPage": 817,
                "LastPage": 826,
                "PaperType": "J",
                "Abstract": "We propose a new class of vortex definitions for flows that are induced by rotating mechanical parts, such as stirring devices, helicopters, hydrocyclones, centrifugal pumps, or ventilators. Instead of a Galilean invariance, we enforce a rotation invariance, i.e., the invariance of a vortex under a uniform-speed rotation of the underlying coordinate system around a fixed axis. We provide a general approach to transform a Galilean invariant vortex concept to a rotation invariant one by simply adding a closed form matrix to the Jacobian. In particular, we present rotation invariant versions of the well-known Sujudi-Haimes, Lambda-2, and Q vortex criteria. We apply them to a number of artificial and real rotating flows, showing that for these cases rotation invariant vortices give better results than their Galilean invariant counterparts.",
                "AuthorNamesDeduped": "Tobias Günther;Maik Schulze;Holger Theisel",
                "AuthorNames": "Tobias Günther;Maik Schulze;Holger Theisel",
                "AuthorAffiliation": "Visual Computing Group, University of Magdeburg;MAXON Computer;Visual Computing Group, University of Magdeburg",
                "InternalReferences": "0.1109/tvcg.2014.2346415;10.1109/visual.2002.1183789;10.1109/tvcg.2014.2346412;10.1109/tvcg.2011.249;10.1109/tvcg.2013.189;10.1109/visual.1999.809917;10.1109/visual.1999.809896;10.1109/visual.1998.745296;10.1109/visual.2005.1532851;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.198",
                "AuthorKeywords": "Vortex cores, rotation invariance, Galilean invariance, scientific visualization, flow visualization, line fields",
                "AminerCitationCount": 38,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 933,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1051,
                "i": [
                    1051
                ]
            }
        },
        {
            "name": "Daniel Haehn",
            "value": 40,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1459,
            "x": -91.30757000996486,
            "y": 370.9621647269104,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects",
                "DOI": "10.1109/tvcg.2015.2467441",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467441",
                "FirstPage": 738,
                "LastPage": 746,
                "PaperType": "J",
                "Abstract": "In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.",
                "AuthorNamesDeduped": "Ali K. Al-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorNames": "Ali K. Ai-Awami;Johanna Beyer;Daniel Haehn;Narayanan Kasthuri;Jeff W. Lichtman;Hanspeter Pfister;Markus Hadwiger",
                "AuthorAffiliation": "King Abdullah University of Science and Technology (KAUST);School of Engineering and Applied Sciences, Harvard University;School of Engineering and Applied Sciences, Harvard University;School of Medicine, Boston University;Center for Brain Science, Harvard University;School of Engineering and Applied Sciences, Harvard University;King Abdullah University of Science and Technology (KAUST)",
                "InternalReferences": "0.1109/tvcg.2014.2346312;10.1109/visual.2005.1532788;10.1109/tvcg.2013.142;10.1109/tvcg.2009.121;10.1109/tvcg.2012.240;10.1109/tvcg.2014.2346371;10.1109/tvcg.2013.174;10.1109/tvcg.2014.2346249;10.1109/tvcg.2007.70584",
                "AuthorKeywords": "Neuroscience, Segmentation, Proofreading, Data and Provenance Tracking",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 1352,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1052,
                "i": [
                    1052
                ]
            }
        },
        {
            "name": "Amin Abbasloo",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1460,
            "x": -183.316750315709,
            "y": -335.32815129912376,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Visualizing Tensor Normal Distributions at Multiple Levels of Detail",
                "DOI": "10.1109/tvcg.2015.2467031",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467031",
                "FirstPage": 975,
                "LastPage": 984,
                "PaperType": "J",
                "Abstract": "Despite the widely recognized importance of symmetric second order tensor fields in medicine and engineering, the visualization of data uncertainty in tensor fields is still in its infancy. A recently proposed tensorial normal distribution, involving a fourth order covariance tensor, provides a mathematical description of how different aspects of the tensor field, such as trace, anisotropy, or orientation, vary and covary at each point. However, this wealth of information is far too rich for a human analyst to take in at a single glance, and no suitable visualization tools are available. We propose a novel approach that facilitates visual analysis of tensor covariance at multiple levels of detail. We start with a visual abstraction that uses slice views and direct volume rendering to indicate large-scale changes in the covariance structure, and locations with high overall variance. We then provide tools for interactive exploration, making it possible to drill down into different types of variability, such as in shape or orientation. Finally, we allow the analyst to focus on specific locations of the field, and provide tensor glyph animations and overlays that intuitively depict confidence intervals at those points. Our system is demonstrated by investigating the effects of measurement noise on diffusion tensor MRI, and by analyzing two ensembles of stress tensor fields from solid mechanics.",
                "AuthorNamesDeduped": "Amin Abbasloo;Vitalis Wiens;Max Hermann;Thomas Schultz 0001",
                "AuthorNames": "Amin Abbasloo;Vitalis Wiens;Max Hermann;Thomas Schultz",
                "AuthorAffiliation": "University of Bonn;University of Bonn;University of Bonn;University of Bonn",
                "InternalReferences": "0.1109/tvcg.2009.170;10.1109/tvcg.2009.184;10.1109/visual.2005.1532773;10.1109/tvcg.2006.181;10.1109/tvcg.2006.134;10.1109/tvcg.2010.199;10.1109/tvcg.2008.128;10.1109/tvcg.2007.70602;10.1109/tvcg.2015.2467435",
                "AuthorKeywords": "Uncertainty visualization, tensor visualization, direct volume rendering, interaction, glyph based visualization",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 721,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1064,
                "i": [
                    1064
                ]
            }
        },
        {
            "name": "Vitalis Wiens",
            "value": 0,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1461,
            "x": 361.80677854958464,
            "y": 123.47410657936278,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Visualizing Tensor Normal Distributions at Multiple Levels of Detail",
                "DOI": "10.1109/tvcg.2015.2467031",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467031",
                "FirstPage": 975,
                "LastPage": 984,
                "PaperType": "J",
                "Abstract": "Despite the widely recognized importance of symmetric second order tensor fields in medicine and engineering, the visualization of data uncertainty in tensor fields is still in its infancy. A recently proposed tensorial normal distribution, involving a fourth order covariance tensor, provides a mathematical description of how different aspects of the tensor field, such as trace, anisotropy, or orientation, vary and covary at each point. However, this wealth of information is far too rich for a human analyst to take in at a single glance, and no suitable visualization tools are available. We propose a novel approach that facilitates visual analysis of tensor covariance at multiple levels of detail. We start with a visual abstraction that uses slice views and direct volume rendering to indicate large-scale changes in the covariance structure, and locations with high overall variance. We then provide tools for interactive exploration, making it possible to drill down into different types of variability, such as in shape or orientation. Finally, we allow the analyst to focus on specific locations of the field, and provide tensor glyph animations and overlays that intuitively depict confidence intervals at those points. Our system is demonstrated by investigating the effects of measurement noise on diffusion tensor MRI, and by analyzing two ensembles of stress tensor fields from solid mechanics.",
                "AuthorNamesDeduped": "Amin Abbasloo;Vitalis Wiens;Max Hermann;Thomas Schultz 0001",
                "AuthorNames": "Amin Abbasloo;Vitalis Wiens;Max Hermann;Thomas Schultz",
                "AuthorAffiliation": "University of Bonn;University of Bonn;University of Bonn;University of Bonn",
                "InternalReferences": "0.1109/tvcg.2009.170;10.1109/tvcg.2009.184;10.1109/visual.2005.1532773;10.1109/tvcg.2006.181;10.1109/tvcg.2006.134;10.1109/tvcg.2010.199;10.1109/tvcg.2008.128;10.1109/tvcg.2007.70602;10.1109/tvcg.2015.2467435",
                "AuthorKeywords": "Uncertainty visualization, tensor visualization, direct volume rendering, interaction, glyph based visualization",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 721,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1064,
                "i": [
                    1064
                ]
            }
        },
        {
            "name": "Max Hermann",
            "value": 8,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 1462,
            "x": -350.31040368730993,
            "y": 153.4034584630802,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Accurate Interactive Visualization of Large Deformations and Variability in Biomedical Image Ensembles",
                "DOI": "10.1109/tvcg.2015.2467198",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467198",
                "FirstPage": 708,
                "LastPage": 717,
                "PaperType": "J",
                "Abstract": "Large image deformations pose a challenging problem for the visualization and statistical analysis of 3D image ensembles which have a multitude of applications in biology and medicine. Simple linear interpolation in the tangent space of the ensemble introduces artifactual anatomical structures that hamper the application of targeted visual shape analysis techniques. In this work we make use of the theory of stationary velocity fields to facilitate interactive non-linear image interpolation and plausible extrapolation for high quality rendering of large deformations and devise an efficient image warping method on the GPU. This does not only improve quality of existing visualization techniques, but opens up a field of novel interactive methods for shape ensemble analysis. Taking advantage of the efficient non-linear 3D image warping, we showcase four visualizations: 1) browsing on-the-fly computed group mean shapes to learn about shape differences between specific classes, 2) interactive reformation to investigate complex morphologies in a single view, 3) likelihood volumes to gain a concise overview of variability and 4) streamline visualization to show variation in detail, specifically uncovering its component tangential to a reference surface. Evaluation on a real world dataset shows that the presented method outperforms the state-of-the-art in terms of visual quality while retaining interactive frame rates. A case study with a domain expert was performed in which the novel analysis and visualization methods are applied on standard model structures, namely skull and mandible of different rodents, to investigate and compare influence of phylogeny, diet and geography on shape. The visualizations enable for instance to distinguish (population-)normal and pathological morphology, assist in uncovering correlation to extrinsic factors and potentially support assessment of model quality.",
                "AuthorNamesDeduped": "Max Hermann;Anja C. Schunke;Thomas Schultz 0001;Reinhard Klein",
                "AuthorNames": "Max Hermann;Anja C. Schunke;Thomas Schultz;Reinhard Klein",
                "AuthorAffiliation": "Institut für Informatik II, Universität Bonn;Max Planck Institute for Evolutionary Biology, Plön;Institut für Informatik II, Universität Bonn;Institut für Informatik II, Universität Bonn",
                "InternalReferences": "0.1109/tvcg.2006.140;10.1109/visual.2002.1183754;10.1109/tvcg.2014.2346591;10.1109/tvcg.2014.2346405;10.1109/tvcg.2006.123",
                "AuthorKeywords": "Statistical deformation model, stationary velocity fields, image warping, interactive visual analysis",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 873,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1060,
                "i": [
                    1060
                ]
            }
        },
        {
            "name": "Yi Gu",
            "value": 22,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1463,
            "x": 154.73831692302994,
            "y": -349.86576465242774,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "TransGraph: Hierarchical Exploration of Transition Relationships in Time-Varying Volumetric Data",
                "DOI": "10.1109/tvcg.2011.246",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.246",
                "FirstPage": 2015,
                "LastPage": 2024,
                "PaperType": "J",
                "Abstract": "A fundamental challenge for time-varying volume data analysis and visualization is the lack of capability to observe and track data change or evolution in an occlusion-free, controllable, and adaptive fashion. In this paper, we propose to organize a timevarying data set into a hierarchy of states. By deriving transition probabilities among states, we construct a global map that captures the essential transition relationships in the time-varying data. We introduce the TransGraph, a graph-based representation to visualize hierarchical state transition relationships. The TransGraph not only provides a visual mapping that abstracts data evolution over time in different levels of detail, but also serves as a navigation tool that guides data exploration and tracking. The user interacts with the TransGraph and makes connection to the volumetric data through brushing and linking. A set of intuitive queries is provided to enable knowledge extraction from time-varying data. We test our approach with time-varying data sets of different characteristics and the results show that the TransGraph can effectively augment our ability in understanding time-varying data.",
                "AuthorNamesDeduped": "Yi Gu;Chaoli Wang 0001",
                "AuthorNames": "Yi Gu;Chaoli Wang",
                "AuthorAffiliation": "Department of Computer Science, Michigan Technological University, Houghton, MI, USA;Department of Computer Science, Michigan Technological University, Houghton, MI, USA",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/visual.1994.346321;10.1109/vast.2006.261451;10.1109/visual.1999.809871;10.1109/tvcg.2006.165;10.1109/visual.2003.1250401;10.1109/tvcg.2008.116;10.1109/tvcg.2010.190;10.1109/visual.2003.1250402;10.1109/visual.1995.480809;10.1109/tvcg.2008.140;10.1109/visual.2001.964531;10.1109/tvcg.2009.200",
                "AuthorKeywords": "Time-varying data visualization, hierarchical representation, states, transition relationship, user interface",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 1012,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1649,
                "i": [
                    1649
                ]
            }
        },
        {
            "name": "Marco Ament",
            "value": 28,
            "numPapers": 48,
            "cluster": "6",
            "visible": 1,
            "index": 1464,
            "x": 122.27346210530972,
            "y": 362.6281848736821,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "SciVis",
                "Year": 2013,
                "Title": "Ambient Volume Scattering",
                "DOI": "10.1109/tvcg.2013.129",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.129",
                "FirstPage": 2936,
                "LastPage": 2945,
                "PaperType": "J",
                "Abstract": "We present ambient scattering as a preintegration method for scattering on mesoscopic scales in direct volume rendering. Far-range scattering effects usually provide negligible contributions to a given location due to the exponential attenuation with increasing distance. This motivates our approach to preintegrating multiple scattering within a finite spherical region around any given sample point. To this end, we solve the full light transport with a Monte-Carlo simulation within a set of spherical regions, where each region may have different material parameters regarding anisotropy and extinction. This precomputation is independent of the data set and the transfer function, and results in a small preintegration table. During rendering, the look-up table is accessed for each ray sample point with respect to the viewing direction, phase function, and material properties in the spherical neighborhood of the sample. Our rendering technique is efficient and versatile because it readily fits in existing ray marching algorithms and can be combined with local illumination and volumetric ambient occlusion. It provides interactive volumetric scattering and soft shadows, with interactive control of the transfer function, anisotropy parameter of the phase function, lighting conditions, and viewpoint. A GPU implementation demonstrates the benefits of ambient scattering for the visualization of different types of data sets, with respect to spatial perception, high-quality illumination, translucency, and rendering speed.",
                "AuthorNamesDeduped": "Marco Ament;Filip Sadlo;Daniel Weiskopf",
                "AuthorNames": "Marco Ament;Filip Sadlo;Daniel Weiskopf",
                "AuthorAffiliation": "VISUS, University of Stuttgart, Germany;VISUS, University of Stuttgart, Germany;VISUS, University of Stuttgart, Germany",
                "InternalReferences": "0.1109/tvcg.2011.211;10.1109/tvcg.2007.70555;10.1109/visual.2003.1250394;10.1109/visual.2000.885683;10.1109/tvcg.2010.187;10.1109/visual.2004.64;10.1109/visual.2003.1250406;10.1109/tvcg.2010.145;10.1109/tvcg.2012.232;10.1109/tvcg.2011.161;10.1109/tvcg.2011.198;10.1109/visual.2002.1183764;10.1109/visual.2005.1532803;10.1109/tvcg.2009.204",
                "AuthorKeywords": "Direct volume rendering, volume illumination, ambient scattering, preintegrated light transport, gradient-free shading",
                "AminerCitationCount": 41,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 764,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1348,
                "i": [
                    1348
                ]
            }
        },
        {
            "name": "Daisuke Sakurai",
            "value": 5,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1465,
            "x": -335.2268572583122,
            "y": -184.8592820842794,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Interactive Visualization for Singular Fibers of Functions f : R3 → R2",
                "DOI": "10.1109/tvcg.2015.2467433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467433",
                "FirstPage": 945,
                "LastPage": 954,
                "PaperType": "J",
                "Abstract": "Scalar topology in the form of Morse theory has provided computational tools that analyze and visualize data from scientific and engineering tasks. Contracting isocontours to single points encapsulates variations in isocontour connectivity in the Reeb graph. For multivariate data, isocontours generalize to fibers-inverse images of points in the range, and this area is therefore known as fiber topology. However, fiber topology is less fully developed than Morse theory, and current efforts rely on manual visualizations. This paper presents how to accelerate and semi-automate this task through an interface for visualizing fiber singularities of multivariate functions R<sup>3</sup>→R<sup>2</sup>. This interface exploits existing conventions of fiber topology, but also introduces a 3D view based on the extension of Reeb graphs to Reeb spaces. Using the Joint Contour Net, a quantized approximation of the Reeb space, this accelerates topological visualization and permits online perturbation to reduce or remove degeneracies in functions under study. Validation of the interface is performed by assessing whether the interface supports the mathematical workflow both of experts and of less experienced mathematicians.",
                "AuthorNamesDeduped": "Daisuke Sakurai;Osamu Saeki;Hamish A. Carr;Hsiang-Yun Wu;Takahiro Yamamoto;David J. Duke;Shigeo Takahashi",
                "AuthorNames": "Daisuke Sakurai;Osamu Saeki;Hamish Carr;Hsiang-Yun Wu;Takahiro Yamamoto;David Duke;Shigeo Takahashi",
                "AuthorAffiliation": "University of Tokyo, Japan Atomic Energy Agency, Kashiwa, Japan;Kyushu University, Fukuoka, Japan;University of Leeds, Leeds, UK;Keio University, Yokohama, Japan;Kyushu Sangyo University, Fukuoka, Japan;University of Leeds, Leeds, UK;University of Aizu, Aizu-Wakamatsu, Japan",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/visual.1997.663875;10.1109/tvcg.2012.287;10.1109/tvcg.2010.213;10.1109/tvcg.2014.2346447;10.1109/tvcg.2010.146;10.1109/visual.2002.1183774;10.1109/tvcg.2008.143;10.1109/tvcg.2009.119;10.1109/tvcg.2007.70601",
                "AuthorKeywords": "Singular fibers, fiber topology, mathematical visualization, design study",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 458,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1076,
                "i": [
                    1076
                ]
            }
        },
        {
            "name": "Osamu Saeki",
            "value": 5,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1466,
            "x": 372.1834192610685,
            "y": -90.16375339979848,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Interactive Visualization for Singular Fibers of Functions f : R3 → R2",
                "DOI": "10.1109/tvcg.2015.2467433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467433",
                "FirstPage": 945,
                "LastPage": 954,
                "PaperType": "J",
                "Abstract": "Scalar topology in the form of Morse theory has provided computational tools that analyze and visualize data from scientific and engineering tasks. Contracting isocontours to single points encapsulates variations in isocontour connectivity in the Reeb graph. For multivariate data, isocontours generalize to fibers-inverse images of points in the range, and this area is therefore known as fiber topology. However, fiber topology is less fully developed than Morse theory, and current efforts rely on manual visualizations. This paper presents how to accelerate and semi-automate this task through an interface for visualizing fiber singularities of multivariate functions R<sup>3</sup>→R<sup>2</sup>. This interface exploits existing conventions of fiber topology, but also introduces a 3D view based on the extension of Reeb graphs to Reeb spaces. Using the Joint Contour Net, a quantized approximation of the Reeb space, this accelerates topological visualization and permits online perturbation to reduce or remove degeneracies in functions under study. Validation of the interface is performed by assessing whether the interface supports the mathematical workflow both of experts and of less experienced mathematicians.",
                "AuthorNamesDeduped": "Daisuke Sakurai;Osamu Saeki;Hamish A. Carr;Hsiang-Yun Wu;Takahiro Yamamoto;David J. Duke;Shigeo Takahashi",
                "AuthorNames": "Daisuke Sakurai;Osamu Saeki;Hamish Carr;Hsiang-Yun Wu;Takahiro Yamamoto;David Duke;Shigeo Takahashi",
                "AuthorAffiliation": "University of Tokyo, Japan Atomic Energy Agency, Kashiwa, Japan;Kyushu University, Fukuoka, Japan;University of Leeds, Leeds, UK;Keio University, Yokohama, Japan;Kyushu Sangyo University, Fukuoka, Japan;University of Leeds, Leeds, UK;University of Aizu, Aizu-Wakamatsu, Japan",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/visual.1997.663875;10.1109/tvcg.2012.287;10.1109/tvcg.2010.213;10.1109/tvcg.2014.2346447;10.1109/tvcg.2010.146;10.1109/visual.2002.1183774;10.1109/tvcg.2008.143;10.1109/tvcg.2009.119;10.1109/tvcg.2007.70601",
                "AuthorKeywords": "Singular fibers, fiber topology, mathematical visualization, design study",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 458,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1076,
                "i": [
                    1076
                ]
            }
        },
        {
            "name": "Daniel Schikore",
            "value": 127,
            "numPapers": 11,
            "cluster": "11",
            "visible": 1,
            "index": 1467,
            "x": -213.6045207583585,
            "y": 317.99859860004415,
            "vy": 0,
            "vx": 0,
            "r": 1.1462291306850891,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "The contour spectrum",
                "DOI": "10.1109/visual.1997.663875",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663875",
                "FirstPage": 167,
                "LastPage": 173,
                "PaperType": "C",
                "Abstract": "The authors introduce the contour spectrum, a user interface component that improves qualitative user interaction and provides real-time exact quantification in the visualization of isocontours. The contour spectrum is a signature consisting of a variety of scalar data and contour attributes, computed over the range of scalar values /spl omega//spl isin/R. They explore the use of surface, area, volume, and gradient integral of the contour that are shown to be univariate B-spline functions of the scalar value /spl omega/ for multi-dimensional unstructured triangular grids. These quantitative properties are calculated in real-time and presented to the user as a collection of signature graphs (plots of functions of /spl omega/) to assist in selecting relevant isovalues /spl omega//sub 0/ for informative visualization. For time-varying data, these quantitative properties can also be computed over time, and displayed using a 2D interface, giving the user an overview of the time-varying function, and allowing interaction in both isovalue and time step. The effectiveness of the current system and potential extensions are discussed.",
                "AuthorNamesDeduped": "Chandrajit L. Bajaj;Valerio Pascucci;Daniel Schikore",
                "AuthorNames": "C.L. Bajaj;V. Pascucci;D.R. Schikore",
                "AuthorAffiliation": "Shastra Lab & Center for Image Analysis and Data Visualization Department of Computer Sciences, Purdue University, West Lafayette, IN, USA and Department of Computer Sciences TICAM, University of Texas, Austin, Austin, TX, USA;Shastra Lab & Center for Image Analysis and Data Visualization Department of Computer Sciences, Purdue University, West Lafayette, IN, USA and Department of Computer Sciences TICAM, University of Texas, Austin, Austin, TX, USA;Shastra Lab & Center for Image Analysis and Data Visualization Department of Computer Sciences, Purdue University, West Lafayette, IN, USA",
                "InternalReferences": "0.1109/visual.1996.568123;10.1109/visual.1995.480803;10.1109/visual.1996.568113",
                "AuthorKeywords": "Visualization, Scalar Data, User Interfaces, Real-time Quantitative Query",
                "AminerCitationCount": 550,
                "CitationCountCrossRef": 94,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 404,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3235,
                "i": [
                    3235
                ]
            }
        },
        {
            "name": "Hsiang-Yun Wu",
            "value": 5,
            "numPapers": 11,
            "cluster": "11",
            "visible": 1,
            "index": 1468,
            "x": -57.31916067032679,
            "y": -378.89908131328224,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "SciVis",
                "Year": 2018,
                "Title": "Labels on Levels: Labeling of Multi-Scale Multi-Instance and Crowded 3D Biological Environments",
                "DOI": "10.1109/tvcg.2018.2864491",
                "Link": "http://dx.doi.org/10.1109/TVCG.2018.2864491",
                "FirstPage": 977,
                "LastPage": 986,
                "PaperType": "J",
                "Abstract": "Labeling is intrinsically important for exploring and understanding complex environments and models in a variety of domains. We present a method for interactive labeling of crowded 3D scenes containing very many instances of objects spanning multiple scales in size. In contrast to previous labeling methods, we target cases where many instances of dozens of types are present and where the hierarchical structure of the objects in the scene presents an opportunity to choose the most suitable level for each placed label. Our solution builds on and goes beyond labeling techniques in medical 3D visualization, cartography, and biological illustrations from books and prints. In contrast to these techniques, the main characteristics of our new technique are: 1) a novel way of labeling objects as part of a bigger structure when appropriate, 2) visual clutter reduction by labeling only representative instances for each type of an object, and a strategy of selecting those. The appropriate level of label is chosen by analyzing the scene's depth buffer and the scene objects' hierarchy tree. We address the topic of communicating the parent-children relationship between labels by employing visual hierarchy concepts adapted from graphic design. Selecting representative instances considers several criteria tailored to the character of the data and is combined with a greedy optimization approach. We demonstrate the usage of our method with models from mesoscale biology where these two characteristics-multi-scale and multi-instance-are abundant, along with the fact that these scenes are extraordinarily dense.",
                "AuthorNamesDeduped": "David Kouril;Ladislav Cmolík;Barbora Kozlíková;Hsiang-Yun Wu;Graham Johnson;David S. Goodsell;Arthur J. Olson;M. Eduard Gröller;Ivan Viola",
                "AuthorNames": "David Kouřil;Ladislav Čmolík;Barbora Kozlíková;Hslanc-Yun Wu;Graham Johnson;David S. Goodsell;Arthur Olson;M. Eduard Gröller;Ivan Viola",
                "AuthorAffiliation": "TU Wien;Faculty of Electrical Engineering, Czech Technical University, Prague;Masaryk University;TU Wien;Allen Institute for Cell Science;The Scripps Research Institute;The Scripps Research Institute;TU Wien;TU Wien",
                "InternalReferences": "0.1109/tvcg.2006.136;10.1109/tvcg.2008.168;10.1109/tvcg.2017.2744518",
                "AuthorKeywords": "labeling,multi-scale data,multi-instance data",
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 1718,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 699,
                "i": [
                    699
                ]
            }
        },
        {
            "name": "Takahiro Yamamoto",
            "value": 5,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1469,
            "x": 298.309534753392,
            "y": 240.7517839502229,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "SciVis",
                "Year": 2015,
                "Title": "Interactive Visualization for Singular Fibers of Functions f : R3 → R2",
                "DOI": "10.1109/tvcg.2015.2467433",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467433",
                "FirstPage": 945,
                "LastPage": 954,
                "PaperType": "J",
                "Abstract": "Scalar topology in the form of Morse theory has provided computational tools that analyze and visualize data from scientific and engineering tasks. Contracting isocontours to single points encapsulates variations in isocontour connectivity in the Reeb graph. For multivariate data, isocontours generalize to fibers-inverse images of points in the range, and this area is therefore known as fiber topology. However, fiber topology is less fully developed than Morse theory, and current efforts rely on manual visualizations. This paper presents how to accelerate and semi-automate this task through an interface for visualizing fiber singularities of multivariate functions R<sup>3</sup>→R<sup>2</sup>. This interface exploits existing conventions of fiber topology, but also introduces a 3D view based on the extension of Reeb graphs to Reeb spaces. Using the Joint Contour Net, a quantized approximation of the Reeb space, this accelerates topological visualization and permits online perturbation to reduce or remove degeneracies in functions under study. Validation of the interface is performed by assessing whether the interface supports the mathematical workflow both of experts and of less experienced mathematicians.",
                "AuthorNamesDeduped": "Daisuke Sakurai;Osamu Saeki;Hamish A. Carr;Hsiang-Yun Wu;Takahiro Yamamoto;David J. Duke;Shigeo Takahashi",
                "AuthorNames": "Daisuke Sakurai;Osamu Saeki;Hamish Carr;Hsiang-Yun Wu;Takahiro Yamamoto;David Duke;Shigeo Takahashi",
                "AuthorAffiliation": "University of Tokyo, Japan Atomic Energy Agency, Kashiwa, Japan;Kyushu University, Fukuoka, Japan;University of Leeds, Leeds, UK;Keio University, Yokohama, Japan;Kyushu Sangyo University, Fukuoka, Japan;University of Leeds, Leeds, UK;University of Aizu, Aizu-Wakamatsu, Japan",
                "InternalReferences": "0.1109/tvcg.2008.119;10.1109/visual.1997.663875;10.1109/tvcg.2012.287;10.1109/tvcg.2010.213;10.1109/tvcg.2014.2346447;10.1109/tvcg.2010.146;10.1109/visual.2002.1183774;10.1109/tvcg.2008.143;10.1109/tvcg.2009.119;10.1109/tvcg.2007.70601",
                "AuthorKeywords": "Singular fibers, fiber topology, mathematical visualization, design study",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 458,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1076,
                "i": [
                    1076
                ]
            }
        },
        {
            "name": "David J. Duke",
            "value": 20,
            "numPapers": 14,
            "cluster": "11",
            "visible": 1,
            "index": 1470,
            "x": -382.7198150221433,
            "y": 23.99048122519601,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Building an Ontology of Visualization",
                "DOI": "10.1109/visual.2004.10",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.10",
                "FirstPage": 7,
                "LastPage": 7,
                "PaperType": "M",
                "Abstract": "Recent activity within the UK National e-Science Programme has identified a need to establish an ontology for visualization. Motivation for this includes defining web and grid services for visualization (the ‘semantic grid’), supporting collaborative work, curation, and underpinning visualization research and education. At a preliminary meeting, members of the UK visualization community identified a skeleton for the ontology. We have started to build on this by identifying how existing work might be related and utilized. We believe that the greatest challenge is reaching a consensus within the visualization community itself. This poster is intended as one step in this process, setting out the perceived needs for the ontology, and sketching initial directions. It is hoped that this will lead to debate, feedback and involvement across the community.",
                "AuthorNamesDeduped": "David J. Duke;Ken W. Brodlie;David A. Duce",
                "AuthorNames": "D.J. Duke;K.W. Brodlie;D.A. Duce",
                "AuthorAffiliation": "School of Computing, University of Leeds, UK;School of Computing, University of Leeds, UK;Department of Computing, Oxford-Brookes University, UK",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 7,
                "PubsCitedCrossRef": 2,
                "DownloadsXplore": 260,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2560,
                "i": [
                    2560
                ]
            }
        },
        {
            "name": "Jorik Blaas",
            "value": 168,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1471,
            "x": 266.0907735709866,
            "y": -276.3072569086703,
            "vy": 0,
            "vx": 0,
            "r": 1.1934369602763386,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "Reducing Snapshots to Points: A Visual Analytics Approach to Dynamic Network Exploration",
                "DOI": "10.1109/tvcg.2015.2468078",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2468078",
                "FirstPage": 1,
                "LastPage": 10,
                "PaperType": "J",
                "Abstract": "We propose a visual analytics approach for the exploration and analysis of dynamic networks. We consider snapshots of the network as points in high-dimensional space and project these to two dimensions for visualization and interaction using two juxtaposed views: one for showing a snapshot and one for showing the evolution of the network. With this approach users are enabled to detect stable states, recurring states, outlier topologies, and gain knowledge about the transitions between states and the network evolution in general. The components of our approach are discretization, vectorization and normalization, dimensionality reduction, and visualization and interaction, which are discussed in detail. The effectiveness of the approach is shown by applying it to artificial and real-world dynamic networks.",
                "AuthorNamesDeduped": "Stef van den Elzen;Danny Holten;Jorik Blaas;Jarke J. van Wijk",
                "AuthorNames": "Stef van den Elzen;Danny Holten;Jorik Blaas;Jarke J. van Wijk",
                "AuthorAffiliation": "Eindhoven University of Technology, SynerScope B. V.;SynerScope B. V.;SynerScope B. V.;Eindhoven University of Technology",
                "InternalReferences": "0.1109/tvcg.2011.226;10.1109/infvis.2004.18;10.1109/tvcg.2013.198;10.1109/tvcg.2006.147;10.1109/tvcg.2006.193;10.1109/tvcg.2008.125;10.1109/tvcg.2011.178;10.1109/infvis.1999.801851",
                "AuthorKeywords": "Dynamic Networks, Exploration, Dimensionality Reduction",
                "AminerCitationCount": 174,
                "CitationCountCrossRef": 113,
                "PubsCitedCrossRef": 63,
                "DownloadsXplore": 5146,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1101,
                "i": [
                    1101
                ]
            }
        },
        {
            "name": "Hua Guo",
            "value": 30,
            "numPapers": 22,
            "cluster": "5",
            "visible": 1,
            "index": 1472,
            "x": -9.56743415524655,
            "y": 383.6123879695298,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "A Case Study Using Visualization Interaction Logs and Insight Metrics to Understand How Analysts Arrive at Insights",
                "DOI": "10.1109/tvcg.2015.2467613",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467613",
                "FirstPage": 51,
                "LastPage": 60,
                "PaperType": "J",
                "Abstract": "We present results from an experiment aimed at using logs of interactions with a visual analytics application to better understand how interactions lead to insight generation. We performed an insight-based user study of a visual analytics application and ran post hoc quantitative analyses of participants' measured insight metrics and interaction logs. The quantitative analyses identified features of interaction that were correlated with insight characteristics, and we confirmed these findings using a qualitative analysis of video captured during the user study. Results of the experiment include design guidelines for the visual analytics application aimed at supporting insight generation. Furthermore, we demonstrated an analysis method using interaction logs that identified which interaction patterns led to insights, going beyond insight-based evaluations that only quantify insight characteristics. We also discuss choices and pitfalls encountered when applying this analysis method, such as the benefits and costs of applying an abstraction framework to application-specific actions before further analysis. Our method can be applied to evaluations of other visualization tools to inform the design of insight-promoting interactions and to better understand analyst behaviors.",
                "AuthorNamesDeduped": "Hua Guo;Steven R. Gomez;Caroline Ziemkiewicz;David H. Laidlaw",
                "AuthorNames": "Hua Guo;Steven R. Gomez;Caroline Ziemkiewicz;David H. Laidlaw",
                "AuthorAffiliation": "Brown University;Brown University;Aptima Inc.;Brown University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346575;10.1109/vast.2014.7042482;10.1109/vast.2008.4677365;10.1109/tvcg.2008.137;10.1109/vast.2009.5333878;10.1109/tvcg.2014.2346452;10.1109/tvcg.2012.221;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Evaluation, visual analytics, interaction, intelligence analysis, insight-based evaluation",
                "AminerCitationCount": 111,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 2229,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1110,
                "i": [
                    1110
                ]
            }
        },
        {
            "name": "Steven R. Gomez",
            "value": 30,
            "numPapers": 20,
            "cluster": "5",
            "visible": 1,
            "index": 1473,
            "x": -252.1572952283674,
            "y": -289.4247716818897,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "A Case Study Using Visualization Interaction Logs and Insight Metrics to Understand How Analysts Arrive at Insights",
                "DOI": "10.1109/tvcg.2015.2467613",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467613",
                "FirstPage": 51,
                "LastPage": 60,
                "PaperType": "J",
                "Abstract": "We present results from an experiment aimed at using logs of interactions with a visual analytics application to better understand how interactions lead to insight generation. We performed an insight-based user study of a visual analytics application and ran post hoc quantitative analyses of participants' measured insight metrics and interaction logs. The quantitative analyses identified features of interaction that were correlated with insight characteristics, and we confirmed these findings using a qualitative analysis of video captured during the user study. Results of the experiment include design guidelines for the visual analytics application aimed at supporting insight generation. Furthermore, we demonstrated an analysis method using interaction logs that identified which interaction patterns led to insights, going beyond insight-based evaluations that only quantify insight characteristics. We also discuss choices and pitfalls encountered when applying this analysis method, such as the benefits and costs of applying an abstraction framework to application-specific actions before further analysis. Our method can be applied to evaluations of other visualization tools to inform the design of insight-promoting interactions and to better understand analyst behaviors.",
                "AuthorNamesDeduped": "Hua Guo;Steven R. Gomez;Caroline Ziemkiewicz;David H. Laidlaw",
                "AuthorNames": "Hua Guo;Steven R. Gomez;Caroline Ziemkiewicz;David H. Laidlaw",
                "AuthorAffiliation": "Brown University;Brown University;Aptima Inc.;Brown University",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/tvcg.2014.2346575;10.1109/vast.2014.7042482;10.1109/vast.2008.4677365;10.1109/tvcg.2008.137;10.1109/vast.2009.5333878;10.1109/tvcg.2014.2346452;10.1109/tvcg.2012.221;10.1109/tvcg.2007.70515",
                "AuthorKeywords": "Evaluation, visual analytics, interaction, intelligence analysis, insight-based evaluation",
                "AminerCitationCount": 111,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 2229,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1110,
                "i": [
                    1110
                ]
            }
        },
        {
            "name": "Jeffrey LeBlanc",
            "value": 87,
            "numPapers": 0,
            "cluster": "3",
            "visible": 1,
            "index": 1474,
            "x": 381.56597627603537,
            "y": 43.0976304280875,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "Vis",
                "Year": 1990,
                "Title": "Exploring N-dimensional databases",
                "DOI": "10.1109/visual.1990.146386",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1990.146386",
                "FirstPage": 230,
                "LastPage": 237,
                "PaperType": "C",
                "Abstract": "The authors present a tool for the display and analysis of N-dimensional data based on a technique called dimensional stacking. This technique is described. The primary goal is to create a tool that enables the user to project data of arbitrary dimensions onto a two-dimensional image. Of equal importance is the ability to control the viewing parameters, so that one can interactively adjust what ranges of values each dimension takes and the form in which the dimensions are displayed. This will allow an intuitive feel for the data to be developed as the database is explored. The system uses dimensional stacking, to collapse and N-dimension space down into a 2-D space and then render the values contained therein. Each value can then be represented as a pixel or rectangular region on a 2-D screen whose intensity corresponds to the data value at that point.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Jeffrey LeBlanc;Matthew O. Ward;Norman Wittels",
                "AuthorNames": "J. LeBlanc;M.O. Ward;N. Wittels",
                "AuthorAffiliation": "Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Civil Engineering Department, Worcester Polytechnic Institute, Worcester, MA, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 415,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 247,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3702,
                "i": [
                    3702
                ]
            }
        },
        {
            "name": "Norman Wittels",
            "value": 87,
            "numPapers": 0,
            "cluster": "3",
            "visible": 1,
            "index": 1475,
            "x": -310.57216776249874,
            "y": 226.04187357943735,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "Vis",
                "Year": 1990,
                "Title": "Exploring N-dimensional databases",
                "DOI": "10.1109/visual.1990.146386",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1990.146386",
                "FirstPage": 230,
                "LastPage": 237,
                "PaperType": "C",
                "Abstract": "The authors present a tool for the display and analysis of N-dimensional data based on a technique called dimensional stacking. This technique is described. The primary goal is to create a tool that enables the user to project data of arbitrary dimensions onto a two-dimensional image. Of equal importance is the ability to control the viewing parameters, so that one can interactively adjust what ranges of values each dimension takes and the form in which the dimensions are displayed. This will allow an intuitive feel for the data to be developed as the database is explored. The system uses dimensional stacking, to collapse and N-dimension space down into a 2-D space and then render the values contained therein. Each value can then be represented as a pixel or rectangular region on a 2-D screen whose intensity corresponds to the data value at that point.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Jeffrey LeBlanc;Matthew O. Ward;Norman Wittels",
                "AuthorNames": "J. LeBlanc;M.O. Ward;N. Wittels",
                "AuthorAffiliation": "Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Civil Engineering Department, Worcester Polytechnic Institute, Worcester, MA, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 415,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 247,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3702,
                "i": [
                    3702
                ]
            }
        },
        {
            "name": "Xizhou Zhu",
            "value": 46,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1476,
            "x": 76.34301628232177,
            "y": -376.5922780208287,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "An Uncertainty-Aware Approach for Exploratory Microblog Retrieval",
                "DOI": "10.1109/tvcg.2015.2467554",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467554",
                "FirstPage": 250,
                "LastPage": 259,
                "PaperType": "J",
                "Abstract": "Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.",
                "AuthorNamesDeduped": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorNames": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorAffiliation": "Tsinghua University;Tsinghua University;USTC;Microsoft;Microsoft;University of Maryland, Baltimore County",
                "InternalReferences": "0.1109/tvcg.2013.186;10.1109/tvcg.2012.291;10.1109/vast.2009.5332611;10.1109/tvcg.2013.223;10.1109/tvcg.2011.233;10.1109/vast.2014.7042494;10.1109/visual.1996.568116;10.1109/infvis.2005.1532150;10.1109/vast.2010.5652931;10.1109/tvcg.2011.197;10.1109/tvcg.2014.2346919;10.1109/tvcg.2013.232;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346920;10.1109/tvcg.2010.183;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "microblog data, mutual reinforcement model, uncertainty modeling, uncertainty visualization, uncertainty propagation",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 1373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1112,
                "i": [
                    1112
                ]
            }
        },
        {
            "name": "Qinying Liao",
            "value": 46,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1477,
            "x": 198.1585345079133,
            "y": 329.3678721455329,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "An Uncertainty-Aware Approach for Exploratory Microblog Retrieval",
                "DOI": "10.1109/tvcg.2015.2467554",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467554",
                "FirstPage": 250,
                "LastPage": 259,
                "PaperType": "J",
                "Abstract": "Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.",
                "AuthorNamesDeduped": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorNames": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorAffiliation": "Tsinghua University;Tsinghua University;USTC;Microsoft;Microsoft;University of Maryland, Baltimore County",
                "InternalReferences": "0.1109/tvcg.2013.186;10.1109/tvcg.2012.291;10.1109/vast.2009.5332611;10.1109/tvcg.2013.223;10.1109/tvcg.2011.233;10.1109/vast.2014.7042494;10.1109/visual.1996.568116;10.1109/infvis.2005.1532150;10.1109/vast.2010.5652931;10.1109/tvcg.2011.197;10.1109/tvcg.2014.2346919;10.1109/tvcg.2013.232;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346920;10.1109/tvcg.2010.183;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "microblog data, mutual reinforcement model, uncertainty modeling, uncertainty visualization, uncertainty propagation",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 1373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1112,
                "i": [
                    1112
                ]
            }
        },
        {
            "name": "Shimei Pan",
            "value": 46,
            "numPapers": 17,
            "cluster": "1",
            "visible": 1,
            "index": 1478,
            "x": -368.725454030618,
            "y": -109.04833584202274,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "An Uncertainty-Aware Approach for Exploratory Microblog Retrieval",
                "DOI": "10.1109/tvcg.2015.2467554",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467554",
                "FirstPage": 250,
                "LastPage": 259,
                "PaperType": "J",
                "Abstract": "Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.",
                "AuthorNamesDeduped": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorNames": "Mengchen Liu;Shixia Liu;Xizhou Zhu;Qinying Liao;Furu Wei;Shimei Pan",
                "AuthorAffiliation": "Tsinghua University;Tsinghua University;USTC;Microsoft;Microsoft;University of Maryland, Baltimore County",
                "InternalReferences": "0.1109/tvcg.2013.186;10.1109/tvcg.2012.291;10.1109/vast.2009.5332611;10.1109/tvcg.2013.223;10.1109/tvcg.2011.233;10.1109/vast.2014.7042494;10.1109/visual.1996.568116;10.1109/infvis.2005.1532150;10.1109/vast.2010.5652931;10.1109/tvcg.2011.197;10.1109/tvcg.2014.2346919;10.1109/tvcg.2013.232;10.1109/tvcg.2011.202;10.1109/tvcg.2014.2346920;10.1109/tvcg.2010.183;10.1109/tvcg.2012.285;10.1109/tvcg.2013.221;10.1109/tvcg.2014.2346922",
                "AuthorKeywords": "microblog data, mutual reinforcement model, uncertainty modeling, uncertainty visualization, uncertainty propagation",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 49,
                "PubsCitedCrossRef": 55,
                "DownloadsXplore": 1373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1112,
                "i": [
                    1112
                ]
            }
        },
        {
            "name": "Johanna Fulda",
            "value": 37,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1479,
            "x": 345.6646046184528,
            "y": -168.71864483206556,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "TimeLineCurator: Interactive Authoring of Visual Timelines from Unstructured Text",
                "DOI": "10.1109/tvcg.2015.2467531",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467531",
                "FirstPage": 300,
                "LastPage": 309,
                "PaperType": "J",
                "Abstract": "We present TimeLineCurator, a browser-based authoring tool that automatically extracts event data from temporal references in unstructured text documents using natural language processing and encodes them along a visual timeline. Our goal is to facilitate the timeline creation process for journalists and others who tell temporal stories online. Current solutions involve manually extracting and formatting event data from source documents, a process that tends to be tedious and error prone. With TimeLineCurator, a prospective timeline author can quickly identify the extent of time encompassed by a document, as well as the distribution of events occurring along this timeline. Authors can speculatively browse possible documents to quickly determine whether they are appropriate sources of timeline material. TimeLineCurator provides controls for curating and editing events on a timeline, the ability to combine timelines from multiple source documents, and export curated timelines for online deployment. We evaluate TimeLineCurator through a benchmark comparison of entity extraction error against a manual timeline curation process, a preliminary evaluation of the user experience of timeline authoring, a brief qualitative analysis of its visual output, and a discussion of prospective use cases suggested by members of the target author communities following its deployment.",
                "AuthorNamesDeduped": "Johanna Fulda;Matthew Brehmer;Tamara Munzner",
                "AuthorNames": "Johanna Fulda;Matthew Brehmel;Tamara Munzner",
                "AuthorAffiliation": "University of Munich (LMU) and University of British Columbia;University of British Columbia;University of British Columbia",
                "InternalReferences": "0.1109/vast.2014.7042493;10.1109/tvcg.2011.185;10.1109/tvcg.2014.2346431;10.1109/tvcg.2013.124;10.1109/vast.2012.6400557;10.1109/vast.2011.6102461;10.1109/vast.2012.6400485;10.1109/tvcg.2013.162;10.1109/tvcg.2013.214;10.1109/tvcg.2012.224;10.1109/tvcg.2014.2346291;10.1109/tvcg.2012.213;10.1109/vast.2007.4389006;10.1109/tvcg.2012.212;10.1109/vast.2012.6400530;10.1109/tvcg.2007.70577",
                "AuthorKeywords": "System, timelines, authoring environment, time-oriented data, journalism",
                "AminerCitationCount": 81,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 76,
                "DownloadsXplore": 2230,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1116,
                "i": [
                    1116
                ]
            }
        },
        {
            "name": "Peng Mi",
            "value": 21,
            "numPapers": 16,
            "cluster": "4",
            "visible": 1,
            "index": 1480,
            "x": -140.962128953143,
            "y": 358.0218962591499,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "VAST",
                "Year": 2015,
                "Title": "BiSet: Semantic Edge Bundling with Biclusters for Sensemaking",
                "DOI": "10.1109/tvcg.2015.2467813",
                "Link": "http://dx.doi.org/10.1109/TVCG.2015.2467813",
                "FirstPage": 310,
                "LastPage": 319,
                "PaperType": "J",
                "Abstract": "Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We make bundles as first class objects and add a new layer, “in-between”, to contain these bundle objects. Based on this, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.",
                "AuthorNamesDeduped": "Maoyuan Sun;Peng Mi;Chris North 0001;Naren Ramakrishnan",
                "AuthorNames": "Maoyuan Sun;Peng Mi;Chris North;Naren Ramakrishnan",
                "AuthorAffiliation": "Department of Computer Science, Virginia Tech;Department of Computer Science, Virginia Tech;Department of Computer Science, Virginia Tech;Department of Computer Science, Virginia Tech",
                "InternalReferences": "0.1109/tvcg.2007.70521;10.1109/tvcg.2009.122;10.1109/tvcg.2008.135;10.1109/tvcg.2012.252;10.1109/tvcg.2012.260;10.1109/infvis.2004.1;10.1109/tvcg.2014.2346260;10.1109/tvcg.2007.70582;10.1109/tvcg.2006.147;10.1109/tvcg.2011.233;10.1109/vast.2009.5333878;10.1109/tvcg.2011.250;10.1109/tvcg.2010.138;10.1109/tvcg.2014.2346752;10.1109/tvcg.2010.210;10.1109/tvcg.2011.183;10.1109/tvcg.2014.2346665",
                "AuthorKeywords": "Bicluster, coordinated relationship, semantic edge bundling",
                "AminerCitationCount": 61,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1077,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1119,
                "i": [
                    1119
                ]
            }
        },
        {
            "name": "Song Zhang 0004",
            "value": 240,
            "numPapers": 25,
            "cluster": "6",
            "visible": 1,
            "index": 1481,
            "x": -137.94579316059094,
            "y": -359.33404813529074,
            "vy": 0,
            "vx": 0,
            "r": 1.2763385146804835,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "A Novel Interface for Interactive Exploration of DTI fibers",
                "DOI": "10.1109/tvcg.2009.112",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.112",
                "FirstPage": 1433,
                "LastPage": 1440,
                "PaperType": "J",
                "Abstract": "Visual exploration is essential to the visualization and analysis of densely sampled 3D DTI fibers in biological speciments, due to the high geometric, spatial, and anatomical complexity of fiber tracts. Previous methods for DTI fiber visualization use zooming, color-mapping, selection, and abstraction to deliver the characteristics of the fibers. However, these schemes mainly focus on the optimization of visualization in the 3D space where cluttering and occlusion make grasping even a few thousand fibers difficult. This paper introduces a novel interaction method that augments the 3D visualization with a 2D representation containing a low-dimensional embedding of the DTI fibers. This embedding preserves the relationship between the fibers and removes the visual clutter that is inherent in 3D renderings of the fibers. This new interface allows the user to manipulate the DTI fibers as both 3D curves and 2D embedded points and easily compare or validate his or her results in both domains. The implementation of the framework is GPU based to achieve real-time interaction. The framework was applied to several tasks, and the results show that our method reduces the user's workload in recognizing 3D DTI fibers and permits quick and accurate DTI fiber selection.",
                "AuthorNamesDeduped": "Wei Chen 0001;Zi'ang Ding;Song Zhang 0004;Anna MacKay-Brandt;Stephen Correia;Huamin Qu;John Allen Crow;David F. Tate;Zhicheng Yan;Qunsheng Peng 0001",
                "AuthorNames": "Wei Chen;Zi'ang Ding;Song Zhang;Anna MacKay-Brandt;Stephen Correia;Huamin Qu;John Allen Crow;David F. Tate;Zhicheng Yan;Qunsheng Peng",
                "AuthorAffiliation": "State Key Laboratory of CAD&CG, University of Zhejiang, China;State Key Laboratory of CAD&CG, University of Zhejiang, China;Department of Computer Science and Engineering, Mississippi State University, USA;Brown University, USA;Brown University, USA;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;College of Veterinary Medicine, Mississippi State University, USA;Brigham and Women's Hospital, USA;State Key Laboratory of CAD&CG, University of Zhejiang, China;State Key Laboratory of CAD&CG, University of Zhejiang, China",
                "InternalReferences": "0.1109/tvcg.2007.70602;10.1109/tvcg.2009.141;10.1109/visual.2005.1532777;10.1109/visual.2003.1250413;10.1109/visual.2005.1532778;10.1109/visual.2005.1532779;10.1109/visual.2005.1532772;10.1109/visual.2003.1250379;10.1109/visual.2004.30",
                "AuthorKeywords": "Diffusion Tensor Imaging, fibers, fiber Clustering, Visualization Interface",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 39,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 771,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1928,
                "i": [
                    1928
                ]
            }
        },
        {
            "name": "Robert J. Moorhead",
            "value": 256,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1482,
            "x": 344.5598253022762,
            "y": 171.83866499616693,
            "vy": 0,
            "vx": 0,
            "r": 1.294761082325849,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty",
                "DOI": "10.1109/tvcg.2010.181",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.181",
                "FirstPage": 1421,
                "LastPage": 1430,
                "PaperType": "J",
                "Abstract": "Numerical weather prediction ensembles are routinely used for operational weather forecasting. The members of these ensembles are individual simulations with either slightly perturbed initial conditions or different model parameterizations, or occasionally both. Multi-member ensemble output is usually large, multivariate, and challenging to interpret interactively. Forecast meteorologists are interested in understanding the uncertainties associated with numerical weather prediction; specifically variability between the ensemble members. Currently, visualization of ensemble members is mostly accomplished through spaghetti plots of a single midtroposphere pressure surface height contour. In order to explore new uncertainty visualization methods, the Weather Research and Forecasting (WRF) model was used to create a 48-hour, 18 member parameterization ensemble of the 13 March 1993 \"Superstorm\". A tool was designed to interactively explore the ensemble uncertainty of three important weather variables: water-vapor mixing ratio, perturbation potential temperature, and perturbation pressure. Uncertainty was quantified using individual ensemble member standard deviation, inter-quartile range, and the width of the 95% confidence interval. Bootstrapping was employed to overcome the dependence on normality in the uncertainty metrics. A coordinated view of ribbon and glyph-based uncertainty visualization, spaghetti plots, iso-pressure colormaps, and data transect plots was provided to two meteorologists for expert evaluation. They found it useful in assessing uncertainty in the data, especially in finding outliers in the ensemble run and therefore avoiding the WRF parameterizations that lead to these outliers. Additionally, the meteorologists could identify spatial regions where the uncertainty was significantly high, allowing for identification of poorly simulated storm environments and physical interpretation of these model issues.",
                "AuthorNamesDeduped": "Jibonananda Sanyal;Song Zhang 0004;Jamie L. Dyer;Andrew Mercer 0001;Philip Amburn;Robert J. Moorhead",
                "AuthorNames": "Jibonananda Sanyal;Song Zhang;Jamie Dyer;Andrew Mercer;Philip Amburn;Robert Moorhead",
                "AuthorAffiliation": "Geosystems Research Institute, Mississippi State University, USA;Department of Computer Science and Engineering, Mississippi State University, USA;Department of Geosciences, Mississippi State University, USA;Department of Geosciences and Northern Gulf Institute, Mississippi State University, USA;Geosystems Research Institute, Mississippi State University, USA;Geosystems Research Institute, Mississippi State University, USA",
                "InternalReferences": "0.1109/tvcg.2009.114;10.1109/infvis.2002.1173145",
                "AuthorKeywords": "Uncertainty visualization, weather ensemble, geographic/geospatial visualization, glyph-based techniques, time-varying data, qualitative evaluation",
                "AminerCitationCount": 307,
                "CitationCountCrossRef": 185,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 2978,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1766,
                "i": [
                    1766
                ]
            }
        },
        {
            "name": "Zhenyu Guo",
            "value": 49,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1483,
            "x": -370.26785860401435,
            "y": 106.07409148702456,
            "vy": 0,
            "vx": 0,
            "r": 1.056419113413932,
            "node": {
                "Conference": "VAST",
                "Year": 2011,
                "Title": "Pointwise local pattern exploration for sensitivity analysis",
                "DOI": "10.1109/vast.2011.6102450",
                "Link": "http://dx.doi.org/10.1109/VAST.2011.6102450",
                "FirstPage": 131,
                "LastPage": 140,
                "PaperType": "C",
                "Abstract": "Sensitivity analysis is a powerful method for discovering the significant factors that contribute to targets and understanding the interaction between variables in multivariate datasets. A number of sensitivity analysis methods fall into the class of local analysis, in which the sensitivity is defined as the partial derivatives of a target variable with respect to a group of independent variables. Incorporating sensitivity analysis in visual analytic tools is essential for multivariate phenomena analysis. However, most current multivariate visualization techniques do not allow users to explore local patterns individually for understanding the sensitivity from a pointwise view. In this paper, we present a novel pointwise local pattern exploration system for visual sensitivity analysis. Using this system, analysts are able to explore local patterns and the sensitivity at individual data points, which reveals the relationships between a focal point and its neighbors. During exploration, users are able to interactively change the derivative coefficients to perform sensitivity analysis based on different requirements as well as their domain knowledge. Each local pattern is assigned an outlier factor, so that users can quickly identify anomalous local patterns that do not conform with the global pattern. Users can also compare the local pattern with the global pattern both visually and statistically. Finally, the local pattern is integrated into the original attribute space using color mapping and jittering, which reveals the distribution of the partial derivatives. Case studies with real datasets are used to investigate the effectiveness of the visualizations and interactions.",
                "AuthorNamesDeduped": "Zhenyu Guo;Matthew O. Ward;Elke A. Rundensteiner;Carolina Ruiz",
                "AuthorNames": "Zhenyu Guo;Matthew O. Ward;Elke A. Rundensteiner;Carolina Ruiz",
                "AuthorAffiliation": "Computer Science Department, Worcester Polytechnic Institute, USA;Computer Science Department, Worcester Polytechnic Institute, USA;Computer Science Department, Worcester Polytechnic Institute, USA;Computer Science Department, Worcester Polytechnic Institute, USA",
                "InternalReferences": "0.1109/visual.2005.1532821;10.1109/vast.2008.4677368;10.1109/vast.2010.5652460;10.1109/vast.2009.5332611;10.1109/infvis.2004.71;10.1109/vast.2009.5333431",
                "AuthorKeywords": "Knowledge discovery, sensitivity analysis, local pattern visualizations",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 332,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1599,
                "i": [
                    1599
                ]
            }
        },
        {
            "name": "Aleks Aris",
            "value": 148,
            "numPapers": 2,
            "cluster": "4",
            "visible": 1,
            "index": 1484,
            "x": 201.43983534981254,
            "y": -328.43871990713944,
            "vy": 0,
            "vx": 0,
            "r": 1.1704087507196315,
            "node": {
                "Conference": "InfoVis",
                "Year": 2006,
                "Title": "Network Visualization by Semantic Substrates",
                "DOI": "10.1109/tvcg.2006.166",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.166",
                "FirstPage": 733,
                "LastPage": 740,
                "PaperType": "J",
                "Abstract": "Networks have remained a challenge for information visualization designers because of the complex issues of node and link layout coupled with the rich set of tasks that users present. This paper offers a strategy based on two principles: (1) layouts are based on user-defined semantic substrates, which are non-overlapping regions in which node placement is based on node attributes, (2) users interactively adjust sliders to control link visibility to limit clutter and thus ensure comprehensibility of source and destination. Scalability is further facilitated by user control of which nodes are visible. We illustrate our semantic substrates approach as implemented in NVSS 1.0 with legal precedent data for up to 1122 court cases in three regions with 7645 legal citations",
                "AuthorNamesDeduped": "Ben Shneiderman;Aleks Aris",
                "AuthorNames": "Ben Shneiderman;Aleks Aris",
                "AuthorAffiliation": "Computer Science Department and the Human-Computer Interaction Laboratory, University of Maryland, College Park, USA;Computer Science Department and the Human-Computer Interaction Laboratory, University of Maryland, College Park, USA",
                "InternalReferences": "0.1109/infvis.2004.1;10.1109/infvis.2005.1532124;10.1109/infvis.2005.1532126",
                "AuthorKeywords": "Network visualization, semantic substrate, information visualization, graphical user interfaces",
                "AminerCitationCount": 491,
                "CitationCountCrossRef": 192,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 2317,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2217,
                "i": [
                    2217
                ]
            }
        },
        {
            "name": "Emanuel Zgraggen",
            "value": 5,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1485,
            "x": 73.3463937641193,
            "y": 378.37852280725286,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "PanoramicData: Data Analysis through Pen & Touch",
                "DOI": "10.1109/tvcg.2014.2346293",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346293",
                "FirstPage": 2112,
                "LastPage": 2121,
                "PaperType": "J",
                "Abstract": "Interactively exploring multidimensional datasets requires frequent switching among a range of distinct but inter-related tasks (e.g., producing different visuals based on different column sets, calculating new variables, and observing the interactions between sets of data). Existing approaches either target specific different problem domains (e.g., data-transformation or data-presentation) or expose only limited aspects of the general exploratory process; in either case, users are forced to adopt coping strategies (e.g., arranging windows or using undo as a mechanism for comparison instead of using side-by-side displays) to compensate for the lack of an integrated suite of exploratory tools. PanoramicData (PD) addresses these problems by unifying a comprehensive set of tools for visual data exploration into a hybrid pen and touch system designed to exploit the visualization advantages of large interactive displays. PD goes beyond just familiar visualizations by including direct UI support for data transformation and aggregation, filtering and brushing. Leveraging an unbounded whiteboard metaphor, users can combine these tools like building blocks to create detailed interactive visual display networks in which each visualization can act as a filter for others. Further, by operating directly on relational-databases, PD provides an approachable visual language that exposes a broad set of the expressive power of SQL including functionally complete logic filtering, computation of aggregates and natural table joins. To understand the implications of this novel approach, we conducted a formative user study with both data and visualization experts. The results indicated that the system provided a fluid and natural user experience for probing multi-dimensional data and was able to cover the full range of queries that the users wanted to pose.",
                "AuthorNamesDeduped": "Emanuel Zgraggen;Robert C. Zeleznik;Steven Mark Drucker",
                "AuthorNames": "Emanuel Zgraggen;Robert Zeleznik;Steven M. Drucker",
                "AuthorAffiliation": "Brown University;Brown University;Microsoft Research",
                "InternalReferences": "0.1109/infvis.2000.885086;10.1109/tvcg.2009.162;10.1109/tvcg.2010.164;10.1109/tvcg.2011.251;10.1109/tvcg.2013.191;10.1109/tvcg.2012.275;10.1109/vast.2007.4389013;10.1109/tvcg.2013.150;10.1109/tvcg.2007.70521;10.1109/tvcg.2008.137;10.1109/infvis.2005.1532136;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.204",
                "AuthorKeywords": "Visual analytics, pen and touch, user interfaces, interaction design, coordinated and multiple views",
                "AminerCitationCount": 50,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 906,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1195,
                "i": [
                    1195
                ]
            }
        },
        {
            "name": "Robert C. Zeleznik",
            "value": 5,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1486,
            "x": -309.7785826273521,
            "y": -229.53698992839648,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "PanoramicData: Data Analysis through Pen & Touch",
                "DOI": "10.1109/tvcg.2014.2346293",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346293",
                "FirstPage": 2112,
                "LastPage": 2121,
                "PaperType": "J",
                "Abstract": "Interactively exploring multidimensional datasets requires frequent switching among a range of distinct but inter-related tasks (e.g., producing different visuals based on different column sets, calculating new variables, and observing the interactions between sets of data). Existing approaches either target specific different problem domains (e.g., data-transformation or data-presentation) or expose only limited aspects of the general exploratory process; in either case, users are forced to adopt coping strategies (e.g., arranging windows or using undo as a mechanism for comparison instead of using side-by-side displays) to compensate for the lack of an integrated suite of exploratory tools. PanoramicData (PD) addresses these problems by unifying a comprehensive set of tools for visual data exploration into a hybrid pen and touch system designed to exploit the visualization advantages of large interactive displays. PD goes beyond just familiar visualizations by including direct UI support for data transformation and aggregation, filtering and brushing. Leveraging an unbounded whiteboard metaphor, users can combine these tools like building blocks to create detailed interactive visual display networks in which each visualization can act as a filter for others. Further, by operating directly on relational-databases, PD provides an approachable visual language that exposes a broad set of the expressive power of SQL including functionally complete logic filtering, computation of aggregates and natural table joins. To understand the implications of this novel approach, we conducted a formative user study with both data and visualization experts. The results indicated that the system provided a fluid and natural user experience for probing multi-dimensional data and was able to cover the full range of queries that the users wanted to pose.",
                "AuthorNamesDeduped": "Emanuel Zgraggen;Robert C. Zeleznik;Steven Mark Drucker",
                "AuthorNames": "Emanuel Zgraggen;Robert Zeleznik;Steven M. Drucker",
                "AuthorAffiliation": "Brown University;Brown University;Microsoft Research",
                "InternalReferences": "0.1109/infvis.2000.885086;10.1109/tvcg.2009.162;10.1109/tvcg.2010.164;10.1109/tvcg.2011.251;10.1109/tvcg.2013.191;10.1109/tvcg.2012.275;10.1109/vast.2007.4389013;10.1109/tvcg.2013.150;10.1109/tvcg.2007.70521;10.1109/tvcg.2008.137;10.1109/infvis.2005.1532136;10.1109/tvcg.2007.70594;10.1109/tvcg.2012.204",
                "AuthorKeywords": "Visual analytics, pen and touch, user interfaces, interaction design, coordinated and multiple views",
                "AminerCitationCount": 50,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 906,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1195,
                "i": [
                    1195
                ]
            }
        },
        {
            "name": "Roger Beecham",
            "value": 37,
            "numPapers": 13,
            "cluster": "5",
            "visible": 1,
            "index": 1487,
            "x": 383.60005766919153,
            "y": -40.01244501643135,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Moving beyond sequential design: Reflections on a rich multi-channel approach to data visualization",
                "DOI": "10.1109/tvcg.2014.2346323",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346323",
                "FirstPage": 2171,
                "LastPage": 2180,
                "PaperType": "J",
                "Abstract": "We reflect on a four-year engagement with transport authorities and others involving a large dataset describing the use of a public bicycle-sharing scheme. We describe the role visualization of these data played in fostering engagement with policy makers, transport operators, the transport research community, the museum and gallery sector and the general public. We identify each of these as `channels'-evolving relationships between producers and consumers of visualization-where traditional roles of the visualization expert and domain expert are blurred. In each case, we identify the different design decisions that were required to support each of these channels and the role played by the visualization process. Using chauffeured interaction with a flexible visual analytics system we demonstrate how insight was gained by policy makers into gendered spatio-temporal cycle behaviors, how this led to further insight into workplace commuting activity, group cycling behavior and explanations for street navigation choice. We demonstrate how this supported, and was supported by, the seemingly unrelated development of narrative-driven visualization via TEDx, of the creation and the setting of an art installation and the curating of digital and physical artefacts. We assert that existing models of visualization design, of tool/technique development and of insight generation do not adequately capture the richness of parallel engagement via these multiple channels of communication. We argue that developing multiple channels in parallel opens up opportunities for visualization design and analysis by building trust and authority and supporting creativity. This rich, non-sequential approach to visualization design is likely to foster serendipity, deepen insight and increase impact.",
                "AuthorNamesDeduped": "Jo Wood;Roger Beecham;Jason Dykes",
                "AuthorNames": "Jo Wood;Roger Beecham;Jason Dykes",
                "AuthorAffiliation": "giCentre, City University London;giCentre, City University London;giCentre, City University London",
                "InternalReferences": "0.1109/tvcg.2012.272;10.1109/tvcg.2012.262;10.1109/tvcg.2012.213;10.1109/tvcg.2011.175;10.1109/tvcg.2013.134;10.1109/tvcg.2010.179;10.1109/tvcg.2013.132;10.1109/infvis.2004.59;10.1109/tvcg.2011.209;10.1109/tvcg.2013.145;10.1109/tvcg.2008.127",
                "AuthorKeywords": "Movement visualization, visual analytics, bikeshare, impact, visualization models, design study",
                "AminerCitationCount": 51,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 59,
                "DownloadsXplore": 1238,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1197,
                "i": [
                    1197
                ]
            }
        },
        {
            "name": "Amitabh Varshney",
            "value": 155,
            "numPapers": 45,
            "cluster": "6",
            "visible": 1,
            "index": 1488,
            "x": -255.91270362519992,
            "y": 288.7190470392291,
            "vy": 0,
            "vx": 0,
            "r": 1.178468624064479,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms",
                "DOI": "10.1109/tvcg.2012.231",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.231",
                "FirstPage": 2355,
                "LastPage": 2363,
                "PaperType": "J",
                "Abstract": "Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.",
                "AuthorNamesDeduped": "Cheuk Yiu Ip;Amitabh Varshney;Joseph F. JáJá",
                "AuthorNames": "Cheuk Yiu Ip;Amitabh Varshney;Joseph JaJa",
                "AuthorAffiliation": "Institute for Advanced Computer Studies, University of Maryland, College Park, USA;Institute for Advanced Computer Studies, University of Maryland, College Park, USA;Institute for Advanced Computer Studies, University of Maryland, College Park, USA",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/tvcg.2009.185;10.1109/visual.1999.809932;10.1109/visual.2005.1532795;10.1109/visual.2003.1250370;10.1109/tvcg.2010.208;10.1109/tvcg.2008.162;10.1109/tvcg.2011.248;10.1109/tvcg.2011.173;10.1109/tvcg.2006.174;10.1109/tvcg.2011.231;10.1109/tvcg.2007.70590;10.1109/tvcg.2009.197;10.1109/tvcg.2006.148;10.1109/tvcg.2009.120;10.1109/visual.2003.1250369",
                "AuthorKeywords": "Volume exploration, volume classification, normalized cut, Information-guided exploration",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 711,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1442,
                "i": [
                    1442
                ]
            }
        },
        {
            "name": "Silvia Born",
            "value": 20,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1489,
            "x": -6.326975100634212,
            "y": -385.8885452900564,
            "vy": 0,
            "vx": 0,
            "r": 1.023028209556707,
            "node": {
                "Conference": "SciVis",
                "Year": 2014,
                "Title": "Stent Maps - Comparative Visualization for the Prediction of Adverse Events of Transcatheter Aortic Valve Implantations",
                "DOI": "10.1109/tvcg.2014.2346459",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346459",
                "FirstPage": 2704,
                "LastPage": 2713,
                "PaperType": "J",
                "Abstract": "Transcatheter aortic valve implantation (TAVI) is a minimally-invasive method for the treatment of aortic valve stenosis in patients with high surgical risk. Despite the success of TAVI, side effects such as paravalvular leakages can occur postoperatively. The goal of this project is to quantitatively analyze the co-occurrence of this complication and several potential risk factors such as stent shape after implantation, implantation height, amount and distribution of calcifications, and contact forces between stent and surrounding structure. In this paper, we present a two-dimensional visualization (stent maps), which allows (1) to comprehensively display all these aspects from CT data and mechanical simulation results and (2) to compare different datasets to identify patterns that are typical for adverse effects. The area of a stent map represents the surface area of the implanted stent - virtually straightened and uncoiled. Several properties of interest, like radial forces or stent compression, are displayed in this stent map in a heatmap-like fashion. Important anatomical landmarks and calcifications are plotted to show their spatial relation to the stent and possible correlations with the color-coded parameters. To provide comparability, the maps of different patient datasets are spatially adjusted according to a corresponding anatomical landmark. Also, stent maps summarizing the characteristics of different populations (e.g. with or without side effects) can be generated. Up to this point several interesting patterns have been observed with our technique, which remained hidden when examining the raw CT data or 3D visualizations of the same data. One example are obvious radial force maxima between the right and non-coronary valve leaflet occurring mainly in cases without leakages. These observations confirm the usefulness of our approach and give starting points for new hypotheses and further analyses. Because of its reduced dimensionality, the stent map data is an appropriate input for statistical group evaluation and machine learning methods.",
                "AuthorNamesDeduped": "Silvia Born;Simon Harald Sündermann;Christoph Russ;Raoul Hopf;Carlos E. Ruiz;Volkmar Falk;Michael Gessat",
                "AuthorNames": "Silvia Born;Simon H. Sündermann;Christoph Russ;Raoul Hopf;Carlos E. Ruiz;Volkmar Falk;Michael Gessat",
                "AuthorAffiliation": "University of Zurich, Hybrid Laboratory for Cardiovascular Technologies, Switzerland;Division of Cardiovascular Surgery, University Hospital of Zurich, Switzerland;Swiss Federal Institute of Technology (ETH) Zurich, Computer Vision Laboratory, Switzerland;Swiss Federal Institute of Technology (ETH) Zurich, Institute of Mechanical Systems, Switzerland;Structural and Congenital Heart Division, Lenox Hill Hospital;Division of Cardiovascular Surgery, University Hospital of Zurich, Switzerland;University of Zurich, Computer Vision Laboratory, Switzerland",
                "InternalReferences": "0.1109/tvcg.2009.169;10.1109/tvcg.2007.70550;10.1109/visual.2001.964540;10.1109/tvcg.2011.235;10.1109/tvcg.2013.139;10.1109/visual.2003.1250353",
                "AuthorKeywords": "Comparative visualization, medical visualization, vessel flattening, transcatheter aortic valve implantation (TAVI)",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 661,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1234,
                "i": [
                    1234
                ]
            }
        },
        {
            "name": "Hui Zhang 0006",
            "value": 24,
            "numPapers": 10,
            "cluster": "2",
            "visible": 1,
            "index": 1490,
            "x": 265.4183331574266,
            "y": 280.362459013209,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "KnotPad: Visualizing and Exploring Knot Theory with Fluid Reidemeister Moves",
                "DOI": "10.1109/tvcg.2012.242",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.242",
                "FirstPage": 2051,
                "LastPage": 2060,
                "PaperType": "J",
                "Abstract": "We present KnotPad, an interactive paper-like system for visualizing and exploring mathematical knots; we exploit topological drawing and math-aware deformation methods in particular to enable and enrich our interactions with knot diagrams. Whereas most previous efforts typically employ physically based modeling to simulate the 3D dynamics of knots and ropes, our tool offers a Reidemeister move based interactive environment that is much closer to the topological problems being solved in knot theory, yet without interfering with the traditional advantages of paper-based analysis and manipulation of knot diagrams. Drawing knot diagrams with many crossings and producing their equivalent is quite challenging and error-prone. KnotPad can restrict user manipulations to the three types of Reidemeister moves, resulting in a more fluid yet mathematically correct user experience with knots. For our principal test case of mathematical knots, KnotPad permits us to draw and edit their diagrams empowered by a family of interactive techniques. Furthermore, we exploit supplementary interface elements to enrich the user experiences. For example, KnotPad allows one to pull and drag on knot diagrams to produce mathematically valid moves. Navigation enhancements in KnotPad provide still further improvement: by remembering and displaying the sequence of valid moves applied during the entire interaction, KnotPad allows a much cleaner exploratory interface for the user to analyze and study knot equivalence. All these methods combine to reveal the complex spatial relationships of knot diagrams with a mathematically true and rich user experience.",
                "AuthorNamesDeduped": "Hui Zhang 0006;Jianguang Weng;Lin Jing;Yiwen Zhong",
                "AuthorNames": "Hui Zhang;Jianguang Weng;Lin Jing;Yiwen Zhong",
                "AuthorAffiliation": "Pervasive Technology Institute, Indiana University, USA;Zhejiang University of Media and Communications, China;Fujian Agriculture and Forestry University, China;Fujian Agriculture and Forestry University, China",
                "InternalReferences": "0.1109/visual.2005.1532804;10.1109/visual.2005.1532843;10.1109/tvcg.2007.70593",
                "AuthorKeywords": "Knot Theory, Math Visualization",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 602,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1474,
                "i": [
                    1474
                ]
            }
        },
        {
            "name": "Andrew J. Hanson",
            "value": 154,
            "numPapers": 44,
            "cluster": "2",
            "visible": 1,
            "index": 1491,
            "x": -385.222499422673,
            "y": -27.452248333217987,
            "vy": 0,
            "vx": 0,
            "r": 1.1773172135866437,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Visualizing Large-Scale Uncertainty in Astrophysical Data",
                "DOI": "10.1109/tvcg.2007.70530",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70530",
                "FirstPage": 1640,
                "LastPage": 1647,
                "PaperType": "J",
                "Abstract": "Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.",
                "AuthorNamesDeduped": "Hongwei Li;Chi-Wing Fu;Yinggang Li;Andrew J. Hanson",
                "AuthorNames": "Hongwei Li;Chi-Wing Fu;Yinggang Li;Andrew Hanson",
                "AuthorAffiliation": "Hong Kong University of Science & Technology, Hong Kong, China;Hong Kong University of Science & Technology, Hong Kong, China;Indiana University, Bloomington, USA;Indiana University, Bloomington, USA",
                "InternalReferences": "0.1109/visual.2000.885679;10.1109/visual.2002.1183769;10.1109/visual.2003.1250404;10.1109/visual.2005.1532807;10.1109/tvcg.2006.155;10.1109/tvcg.2006.176;10.1109/visual.2004.25;10.1109/visual.2005.1532853;10.1109/visual.1996.568116;10.1109/visual.2002.1183824;10.1109/visual.1996.568105;10.1109/infvis.2002.1173145;10.1109/visual.2005.1532803;10.1109/visual.2004.18",
                "AuthorKeywords": "Uncertainty visualization, large spatial scale, interstellar data, astronomy",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 614,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2189,
                "i": [
                    2189
                ]
            }
        },
        {
            "name": "Harlan Foote",
            "value": 125,
            "numPapers": 26,
            "cluster": "4",
            "visible": 1,
            "index": 1492,
            "x": 302.6962322791112,
            "y": -240.0520588623025,
            "vy": 0,
            "vx": 0,
            "r": 1.1439263097294186,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "A multi-level middle-out cross-zooming approach for large graph analytics",
                "DOI": "10.1109/vast.2009.5333880",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5333880",
                "FirstPage": 147,
                "LastPage": 154,
                "PaperType": "C",
                "Abstract": "This paper presents a working graph analytics model that embraces the strengths of the traditional top-down and bottom-up approaches with a resilient crossover concept to exploit the vast middle-ground information overlooked by the two extreme analytical approaches. Our graph analytics model is co-developed by users and researchers, who carefully studied the functional requirements that reflect the critical thinking and interaction pattern of a real-life intelligence analyst. To evaluate the model, we implement a system prototype, known as GreenHornet, which allows our analysts to test the theory in practice, identify the technological and usage-related gaps in the model, and then adapt the new technology in their work space. The paper describes the implementation of GreenHornet and compares its strengths and weaknesses against the other prevailing models and tools.",
                "AuthorNamesDeduped": "Pak Chung Wong;Patrick Mackey;Kristin A. Cook;Randall M. Rohrer;Harlan Foote;Mark A. Whiting",
                "AuthorNames": "Pak Chung Wong;Patrick Mackey;Kristin A. Cook;Randall M. Rohrer;Harlan Foote;Mark A. Whiting",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA;U.S. Department of Defense, Fort Meade, MD, USA;Pacific Northwest National Laboratory, Richland, WA, USA;Pacific Northwest National Laboratory, Richland, WA, USA",
                "InternalReferences": "0.1109/vast.2007.4389006;10.1109/infvis.2004.43;10.1109/infvis.2004.66;10.1109/tvcg.2007.70582;10.1109/vast.2008.4677383",
                "AuthorKeywords": "Graph analytics, information visualization",
                "AminerCitationCount": 13,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 565,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1874,
                "i": [
                    1874
                ]
            }
        },
        {
            "name": "Tangzhi Ye",
            "value": 46,
            "numPapers": 15,
            "cluster": "3",
            "visible": 1,
            "index": 1493,
            "x": -61.06639279369059,
            "y": 381.60306035351283,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Exploration of Sparse Traffic Trajectory Data",
                "DOI": "10.1109/tvcg.2014.2346746",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346746",
                "FirstPage": 1813,
                "LastPage": 1822,
                "PaperType": "J",
                "Abstract": "In this paper, we present a visual analysis system to explore sparse traffic trajectory data recorded by transportation cells. Such data contains the movements of nearly all moving vehicles on the major roads of a city. Therefore it is very suitable for macro-traffic analysis. However, the vehicle movements are recorded only when they pass through the cells. The exact tracks between two consecutive cells are unknown. To deal with such uncertainties, we first design a local animation, showing the vehicle movements only in the vicinity of cells. Besides, we ignore the micro-behaviors of individual vehicles, and focus on the macro-traffic patterns. We apply existing trajectory aggregation techniques to the dataset, studying cell status pattern and inter-cell flow pattern. Beyond that, we propose to study the correlation between these two patterns with dynamic graph visualization techniques. It allows us to check how traffic congestion on one cell is correlated with traffic flows on neighbouring links, and with route selection in its neighbourhood. Case studies show the effectiveness of our system.",
                "AuthorNamesDeduped": "Zuchao Wang;Tangzhi Ye;Min Lu 0002;Xiaoru Yuan;Huamin Qu;Jacky Yuan;Qianliang Wu",
                "AuthorNames": "Zuchao Wang;Tangzhi Ye;Min Lu;Xiaoru Yuan;Huamin Qu;Jacky Yuan;Qianliang Wu",
                "AuthorAffiliation": "Peking University;Peking University;Peking University;Peking University;Hong Kong University of Science and Technology;Nanjing Intelligent Transportation Systems Co., Ltd;Nanjing Intelligent Transportation Systems Co., Ltd",
                "InternalReferences": "0.1109/vast.2012.6400556;10.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/infvis.2005.1532151;10.1109/vast.2011.6102458;10.1109/tvcg.2011.226;10.1109/tvcg.2013.228;10.1109/tvcg.2013.193;10.1109/vast.2009.5332584;10.1109/tvcg.2013.226;10.1109/vast.2011.6102454;10.1109/vast.2011.6102455;10.1109/tvcg.2009.182;10.1109/tvcg.2012.265",
                "AuthorKeywords": "Sparse Traffic Trajectory, Traffic Visualization, Dynamic Graph Visualization, Traffic Congestion",
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2576,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1252,
                "i": [
                    1252
                ]
            }
        },
        {
            "name": "Jacky Yuan",
            "value": 46,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 1494,
            "x": -212.8119162668669,
            "y": -322.7399700917506,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Exploration of Sparse Traffic Trajectory Data",
                "DOI": "10.1109/tvcg.2014.2346746",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346746",
                "FirstPage": 1813,
                "LastPage": 1822,
                "PaperType": "J",
                "Abstract": "In this paper, we present a visual analysis system to explore sparse traffic trajectory data recorded by transportation cells. Such data contains the movements of nearly all moving vehicles on the major roads of a city. Therefore it is very suitable for macro-traffic analysis. However, the vehicle movements are recorded only when they pass through the cells. The exact tracks between two consecutive cells are unknown. To deal with such uncertainties, we first design a local animation, showing the vehicle movements only in the vicinity of cells. Besides, we ignore the micro-behaviors of individual vehicles, and focus on the macro-traffic patterns. We apply existing trajectory aggregation techniques to the dataset, studying cell status pattern and inter-cell flow pattern. Beyond that, we propose to study the correlation between these two patterns with dynamic graph visualization techniques. It allows us to check how traffic congestion on one cell is correlated with traffic flows on neighbouring links, and with route selection in its neighbourhood. Case studies show the effectiveness of our system.",
                "AuthorNamesDeduped": "Zuchao Wang;Tangzhi Ye;Min Lu 0002;Xiaoru Yuan;Huamin Qu;Jacky Yuan;Qianliang Wu",
                "AuthorNames": "Zuchao Wang;Tangzhi Ye;Min Lu;Xiaoru Yuan;Huamin Qu;Jacky Yuan;Qianliang Wu",
                "AuthorAffiliation": "Peking University;Peking University;Peking University;Peking University;Hong Kong University of Science and Technology;Nanjing Intelligent Transportation Systems Co., Ltd;Nanjing Intelligent Transportation Systems Co., Ltd",
                "InternalReferences": "0.1109/vast.2012.6400556;10.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/infvis.2005.1532151;10.1109/vast.2011.6102458;10.1109/tvcg.2011.226;10.1109/tvcg.2013.228;10.1109/tvcg.2013.193;10.1109/vast.2009.5332584;10.1109/tvcg.2013.226;10.1109/vast.2011.6102454;10.1109/vast.2011.6102455;10.1109/tvcg.2009.182;10.1109/tvcg.2012.265",
                "AuthorKeywords": "Sparse Traffic Trajectory, Traffic Visualization, Dynamic Graph Visualization, Traffic Congestion",
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2576,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1252,
                "i": [
                    1252
                ]
            }
        },
        {
            "name": "Qianliang Wu",
            "value": 46,
            "numPapers": 13,
            "cluster": "3",
            "visible": 1,
            "index": 1495,
            "x": 375.0540164636091,
            "y": 94.25754470871192,
            "vy": 0,
            "vx": 0,
            "r": 1.052964881980426,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Visual Exploration of Sparse Traffic Trajectory Data",
                "DOI": "10.1109/tvcg.2014.2346746",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2346746",
                "FirstPage": 1813,
                "LastPage": 1822,
                "PaperType": "J",
                "Abstract": "In this paper, we present a visual analysis system to explore sparse traffic trajectory data recorded by transportation cells. Such data contains the movements of nearly all moving vehicles on the major roads of a city. Therefore it is very suitable for macro-traffic analysis. However, the vehicle movements are recorded only when they pass through the cells. The exact tracks between two consecutive cells are unknown. To deal with such uncertainties, we first design a local animation, showing the vehicle movements only in the vicinity of cells. Besides, we ignore the micro-behaviors of individual vehicles, and focus on the macro-traffic patterns. We apply existing trajectory aggregation techniques to the dataset, studying cell status pattern and inter-cell flow pattern. Beyond that, we propose to study the correlation between these two patterns with dynamic graph visualization techniques. It allows us to check how traffic congestion on one cell is correlated with traffic flows on neighbouring links, and with route selection in its neighbourhood. Case studies show the effectiveness of our system.",
                "AuthorNamesDeduped": "Zuchao Wang;Tangzhi Ye;Min Lu 0002;Xiaoru Yuan;Huamin Qu;Jacky Yuan;Qianliang Wu",
                "AuthorNames": "Zuchao Wang;Tangzhi Ye;Min Lu;Xiaoru Yuan;Huamin Qu;Jacky Yuan;Qianliang Wu",
                "AuthorAffiliation": "Peking University;Peking University;Peking University;Peking University;Hong Kong University of Science and Technology;Nanjing Intelligent Transportation Systems Co., Ltd;Nanjing Intelligent Transportation Systems Co., Ltd",
                "InternalReferences": "0.1109/vast.2012.6400556;10.1109/infvis.2004.27;10.1109/vast.2008.4677356;10.1109/infvis.2005.1532151;10.1109/vast.2011.6102458;10.1109/tvcg.2011.226;10.1109/tvcg.2013.228;10.1109/tvcg.2013.193;10.1109/vast.2009.5332584;10.1109/tvcg.2013.226;10.1109/vast.2011.6102454;10.1109/vast.2011.6102455;10.1109/tvcg.2009.182;10.1109/tvcg.2012.265",
                "AuthorKeywords": "Sparse Traffic Trajectory, Traffic Visualization, Dynamic Graph Visualization, Traffic Congestion",
                "AminerCitationCount": 136,
                "CitationCountCrossRef": 86,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 2576,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1252,
                "i": [
                    1252
                ]
            }
        },
        {
            "name": "Scott Barlowe",
            "value": 156,
            "numPapers": 25,
            "cluster": "5",
            "visible": 1,
            "index": 1496,
            "x": -340.33694592694786,
            "y": 183.90422299968466,
            "vy": 0,
            "vx": 0,
            "r": 1.1796200345423145,
            "node": {
                "Conference": "VAST",
                "Year": 2010,
                "Title": "Click2Annotate: Automated Insight Externalization with rich semantics",
                "DOI": "10.1109/vast.2010.5652885",
                "Link": "http://dx.doi.org/10.1109/VAST.2010.5652885",
                "FirstPage": 155,
                "LastPage": 162,
                "PaperType": "C",
                "Abstract": "Insight Externalization (IE) refers to the process of capturing and recording the semantics of insights in decision making and problem solving. To reduce human effort, Automated Insight Externalization (AIE) is desired. Most existing IE approaches achieve automation by capturing events (e.g., clicks and key presses) or actions (e.g., panning and zooming). In this paper, we propose a novel AIE approach named Click2Annotate. It allows semi-automatic insight annotation that captures low-level analytics task results (e.g., clusters and outliers), which have higher semantic richness and abstraction levels than actions and events. Click2Annotate has two significant benefits. First, it reduces human effort required in IE and generates annotations easy to understand. Second, the rich semantic information encoded in the annotations enables various insight management activities, such as insight browsing and insight retrieval. We present a formal user study that proved this first benefit. We also illustrate the second benefit by presenting the novel insight management activities we developed based on Click2Annotate, namely scented insight browsing and faceted insight search.",
                "AuthorNamesDeduped": "Yang Chen;Scott Barlowe;Jing Yang 0001",
                "AuthorNames": "Yang Chen;Scott Barlowe;Jing Yang",
                "AuthorAffiliation": "Department of Computer Science, UNC-Charlotte, USA;Department of Computer Science, UNC-Charlotte, USA;Department of Computer Science, UNC-Charlotte, USA",
                "InternalReferences": "0.1109/visual.1990.146375;10.1109/infvis.2005.1532136;10.1109/tvcg.2007.70541;10.1109/vast.2008.4677365;10.1109/tvcg.2007.70577;10.1109/tvcg.2009.139",
                "AuthorKeywords": "Visual Analytics, Decision Making, Annotation, Insight Management, Multidimensional Visualization",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 569,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1729,
                "i": [
                    1729
                ]
            }
        },
        {
            "name": "Jonathan Feinberg",
            "value": 103,
            "numPapers": 1,
            "cluster": "1",
            "visible": 1,
            "index": 1497,
            "x": 126.77068878747268,
            "y": -365.6216520726717,
            "vy": 0,
            "vx": 0,
            "r": 1.1185952792170408,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "Participatory Visualization with Wordle",
                "DOI": "10.1109/tvcg.2009.171",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.171",
                "FirstPage": 1137,
                "LastPage": 1144,
                "PaperType": "J",
                "Abstract": "We discuss the design and usage of ldquoWordle,rdquo a Web-based tool for visualizing text. Wordle creates tag-cloud-like displays that give careful attention to typography, color, and composition. We describe the algorithms used to balance various aesthetic criteria and create the distinctive Wordle layouts. We then present the results of a study of Wordle usage, based both on spontaneous behaviour observed in the wild, and on a large-scale survey of Wordle users. The results suggest that Wordles have become a kind of medium of expression, and that a ldquoparticipatory culturerdquo has arisen around them.",
                "AuthorNamesDeduped": "Fernanda B. Viégas;Martin Wattenberg;Jonathan Feinberg",
                "AuthorNames": "Fernanda B. Viegas;Martin Wattenberg;Jonathan Feinberg",
                "AuthorAffiliation": "IBM Research, USA;IBM Research, USA;IBM Research, USA",
                "InternalReferences": "0.1109/infvis.2005.1532122;10.1109/tvcg.2007.70577",
                "AuthorKeywords": "Visualization, text, tag cloud, participatory culture, memory, educational visualization, social data analysis",
                "AminerCitationCount": 534,
                "CitationCountCrossRef": 258,
                "PubsCitedCrossRef": 15,
                "DownloadsXplore": 3482,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1815,
                "i": [
                    1815
                ]
            }
        },
        {
            "name": "Doris Dransch",
            "value": 31,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1498,
            "x": 153.54835935339392,
            "y": 355.3489852805,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "A Visual Analytics Approach to Multiscale Exploration of Environmental Time Series",
                "DOI": "10.1109/tvcg.2012.191",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.191",
                "FirstPage": 2899,
                "LastPage": 2907,
                "PaperType": "J",
                "Abstract": "We present a Visual Analytics approach that addresses the detection of interesting patterns in numerical time series, specifically from environmental sciences. Crucial for the detection of interesting temporal patterns are the time scale and the starting points one is looking at. Our approach makes no assumption about time scale and starting position of temporal patterns and consists of three main steps: an algorithm to compute statistical values for all possible time scales and starting positions of intervals, visual identification of potentially interesting patterns in a matrix visualization, and interactive exploration of detected patterns. We demonstrate the utility of this approach in two scientific scenarios and explain how it allowed scientists to gain new insight into the dynamics of environmental systems.",
                "AuthorNamesDeduped": "Mike Sips;Patrick Köthur;Andrea Unger;Hans-Christian Hege;Doris Dransch",
                "AuthorNames": "Mike Sips;Patrick Köthur;Andrea Unger;Hans-Christian Hege;Doris Dransch",
                "AuthorAffiliation": "GFZ German Research Centre for Geosciences, Germany;GFZ German Research Centre for Geosciences, Germany;GFZ German Research Centre for Geosciences, Germany;Zuse Institute Berlin, Germany;GFZ German Research Centre for Geosciences, Germany",
                "InternalReferences": "0.1109/infvis.2001.963273;10.1109/infvis.1995.528685;10.1109/infvis.2004.11",
                "AuthorKeywords": "Time series analysis, multiscale visualization, visual analytics",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 1315,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1499,
                "i": [
                    1499
                ]
            }
        },
        {
            "name": "Jimmy Lin",
            "value": 17,
            "numPapers": 13,
            "cluster": "1",
            "visible": 1,
            "index": 1499,
            "x": -353.3744225024819,
            "y": -158.35566778943357,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "Using Visualizations to Monitor Changes and Harvest Insights from a Global-Scale Logging Infrastructure at Twitter",
                "DOI": "10.1109/vast.2014.7042487",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042487",
                "FirstPage": 113,
                "LastPage": 122,
                "PaperType": "C",
                "Abstract": "Logging user activities is essential to data analysis for internet products and services. Twitter has built a unified logging infrastructure that captures user activities across all clients it owns, making it one of the largest datasets in the organization. This paper describes challenges and opportunities in applying information visualization to log analysis at this massive scale, and shows how various visualization techniques can be adapted to help data scientists extract insights. In particular, we focus on two scenarios: (1) monitoring and exploring a large collection of log events, and (2) performing visual funnel analysis on log data with tens of thousands of event types. Two interactive visualizations were developed for these purposes: we discuss design choices and the implementation of these systems, along with case studies of how they are being used in day-to-day operations at Twitter.",
                "AuthorNamesDeduped": "Krist Wongsuphasawat;Jimmy Lin",
                "AuthorNames": "Krist Wongsuphasawat;Jimmy Lin",
                "AuthorAffiliation": "Twitter, Inc.;Twitter, Inc.",
                "InternalReferences": "0.1109/infvis.2000.885091;10.1109/tvcg.2009.117;10.1109/infvis.1997.636718;10.1109/vast.2007.4389008;10.1109/infvis.1996.559227;10.1109/tvcg.2012.225;10.1109/tvcg.2007.70529;10.1109/vast.2012.6400494;10.1109/tvcg.2013.231;10.1109/infvis.2004.64;10.1109/visual.1991.175815;10.1109/tvcg.2013.200;10.1109/vast.2006.261421;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Information Visualization, Visual Analytics, Log Analysis, Log Visualization, Session Analysis, Funnel Analysis",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 374,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1279,
                "i": [
                    1279
                ]
            }
        },
        {
            "name": "Raphael Fuchs",
            "value": 220,
            "numPapers": 31,
            "cluster": "6",
            "visible": 1,
            "index": 1500,
            "x": 367.65755019341253,
            "y": -121.97510313903561,
            "vy": 0,
            "vx": 0,
            "r": 1.2533103051237766,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "World Lines",
                "DOI": "10.1109/tvcg.2010.223",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.223",
                "FirstPage": 1458,
                "LastPage": 1467,
                "PaperType": "J",
                "Abstract": "In this paper we present World Lines as a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas, decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision making process. In this setting, the data domain is extended to a set of alternative worlds where only one outcome will actually happen. World Lines integrate simulation, visualization and computational steering into a single unified system that is capable of dealing with the extended solution space. World Lines represent simulation runs as causally connected tracks that share a common time axis. This setup enables users to interfere and add new information quickly. A World Line is introduced as a visual combination of user events and their effects in order to present a possible future. To quickly find the most attractive outcome, we suggest World Lines as the governing component in a system of multiple linked views and a simulation component. World Lines employ linking and brushing to enable comparative visual analysis of multiple simulations in linked views. Analysis results can be mapped to various visual variables that World Lines provide in order to highlight the most compelling solutions. To demonstrate this technique we present a flooding scenario and show the usefulness of the integrated approach to support informed decision making.",
                "AuthorNamesDeduped": "Jürgen Waser;Raphael Fuchs;Hrvoje Ribicic;Benjamin Schindler;Günter Blöschl;M. Eduard Gröller",
                "AuthorNames": "Jurgen Waser;Raphael Fuchs;Hrvoje Ribicic;Benjamin Schindler;Gunther Bloschl;Eduard Groller",
                "AuthorAffiliation": "VRVis Vienna, Austria;ETH Zürich, Switzerland;VRVis Vienna, Austria;ETH Zürich, Switzerland;Technical University of of Vienna, Austria;Technical University of of Vienna, Austria",
                "InternalReferences": "0.1109/infvis.2002.1173149;10.1109/infvis.2004.12;10.1109/visual.1999.809871;10.1109/infvis.2005.1532143;10.1109/tvcg.2009.199;10.1109/visual.1993.398857;10.1109/tvcg.2008.145;10.1109/tvcg.2007.70539;10.1109/visual.1998.745289",
                "AuthorKeywords": "Problem solving environment, decision making, simulation steering, parallel worlds, CFD, smoothed particle hydrodynamics",
                "AminerCitationCount": 142,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1211,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1771,
                "i": [
                    1771
                ]
            }
        },
        {
            "name": "Benjamin Schindler",
            "value": 196,
            "numPapers": 28,
            "cluster": "6",
            "visible": 1,
            "index": 1501,
            "x": -188.76910763659356,
            "y": 338.40245862298957,
            "vy": 0,
            "vx": 0,
            "r": 1.2256764536557283,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "World Lines",
                "DOI": "10.1109/tvcg.2010.223",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.223",
                "FirstPage": 1458,
                "LastPage": 1467,
                "PaperType": "J",
                "Abstract": "In this paper we present World Lines as a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas, decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision making process. In this setting, the data domain is extended to a set of alternative worlds where only one outcome will actually happen. World Lines integrate simulation, visualization and computational steering into a single unified system that is capable of dealing with the extended solution space. World Lines represent simulation runs as causally connected tracks that share a common time axis. This setup enables users to interfere and add new information quickly. A World Line is introduced as a visual combination of user events and their effects in order to present a possible future. To quickly find the most attractive outcome, we suggest World Lines as the governing component in a system of multiple linked views and a simulation component. World Lines employ linking and brushing to enable comparative visual analysis of multiple simulations in linked views. Analysis results can be mapped to various visual variables that World Lines provide in order to highlight the most compelling solutions. To demonstrate this technique we present a flooding scenario and show the usefulness of the integrated approach to support informed decision making.",
                "AuthorNamesDeduped": "Jürgen Waser;Raphael Fuchs;Hrvoje Ribicic;Benjamin Schindler;Günter Blöschl;M. Eduard Gröller",
                "AuthorNames": "Jurgen Waser;Raphael Fuchs;Hrvoje Ribicic;Benjamin Schindler;Gunther Bloschl;Eduard Groller",
                "AuthorAffiliation": "VRVis Vienna, Austria;ETH Zürich, Switzerland;VRVis Vienna, Austria;ETH Zürich, Switzerland;Technical University of of Vienna, Austria;Technical University of of Vienna, Austria",
                "InternalReferences": "0.1109/infvis.2002.1173149;10.1109/infvis.2004.12;10.1109/visual.1999.809871;10.1109/infvis.2005.1532143;10.1109/tvcg.2009.199;10.1109/visual.1993.398857;10.1109/tvcg.2008.145;10.1109/tvcg.2007.70539;10.1109/visual.1998.745289",
                "AuthorKeywords": "Problem solving environment, decision making, simulation steering, parallel worlds, CFD, smoothed particle hydrodynamics",
                "AminerCitationCount": 142,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1211,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1771,
                "i": [
                    1771
                ]
            }
        },
        {
            "name": "Günter Blöschl",
            "value": 192,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1502,
            "x": -89.42487488720921,
            "y": -377.16467458049004,
            "vy": 0,
            "vx": 0,
            "r": 1.221070811744387,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "World Lines",
                "DOI": "10.1109/tvcg.2010.223",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.223",
                "FirstPage": 1458,
                "LastPage": 1467,
                "PaperType": "J",
                "Abstract": "In this paper we present World Lines as a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas, decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision making process. In this setting, the data domain is extended to a set of alternative worlds where only one outcome will actually happen. World Lines integrate simulation, visualization and computational steering into a single unified system that is capable of dealing with the extended solution space. World Lines represent simulation runs as causally connected tracks that share a common time axis. This setup enables users to interfere and add new information quickly. A World Line is introduced as a visual combination of user events and their effects in order to present a possible future. To quickly find the most attractive outcome, we suggest World Lines as the governing component in a system of multiple linked views and a simulation component. World Lines employ linking and brushing to enable comparative visual analysis of multiple simulations in linked views. Analysis results can be mapped to various visual variables that World Lines provide in order to highlight the most compelling solutions. To demonstrate this technique we present a flooding scenario and show the usefulness of the integrated approach to support informed decision making.",
                "AuthorNamesDeduped": "Jürgen Waser;Raphael Fuchs;Hrvoje Ribicic;Benjamin Schindler;Günter Blöschl;M. Eduard Gröller",
                "AuthorNames": "Jurgen Waser;Raphael Fuchs;Hrvoje Ribicic;Benjamin Schindler;Gunther Bloschl;Eduard Groller",
                "AuthorAffiliation": "VRVis Vienna, Austria;ETH Zürich, Switzerland;VRVis Vienna, Austria;ETH Zürich, Switzerland;Technical University of of Vienna, Austria;Technical University of of Vienna, Austria",
                "InternalReferences": "0.1109/infvis.2002.1173149;10.1109/infvis.2004.12;10.1109/visual.1999.809871;10.1109/infvis.2005.1532143;10.1109/tvcg.2009.199;10.1109/visual.1993.398857;10.1109/tvcg.2008.145;10.1109/tvcg.2007.70539;10.1109/visual.1998.745289",
                "AuthorKeywords": "Problem solving environment, decision making, simulation steering, parallel worlds, CFD, smoothed particle hydrodynamics",
                "AminerCitationCount": 142,
                "CitationCountCrossRef": 85,
                "PubsCitedCrossRef": 45,
                "DownloadsXplore": 1211,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1771,
                "i": [
                    1771
                ]
            }
        },
        {
            "name": "Thomas Auzinger",
            "value": 17,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1503,
            "x": 320.81690445624207,
            "y": 217.77629305118225,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "YMCA - Your Mesh Comparison Application",
                "DOI": "10.1109/vast.2014.7042491",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042491",
                "FirstPage": 153,
                "LastPage": 162,
                "PaperType": "C",
                "Abstract": "Polygonal meshes can be created in several different ways. In this paper we focus on the reconstruction of meshes from point clouds, which are sets of points in 3D. Several algorithms that tackle this task already exist, but they have different benefits and drawbacks, which leads to a large number of possible reconstruction results (i.e., meshes). The evaluation of those techniques requires extensive comparisons between different meshes which is up to now done by either placing images of rendered meshes side-by-side, or by encoding differences by heat maps. A major drawback of both approaches is that they do not scale well with the number of meshes. This paper introduces a new comparative visual analysis technique for 3D meshes which enables the simultaneous comparison of several meshes and allows for the interactive exploration of their differences. Our approach gives an overview of the differences of the input meshes in a 2D view. By selecting certain areas of interest, the user can switch to a 3D representation and explore the spatial differences in detail. To inspect local variations, we provide a magic lens tool in 3D. The location and size of the lens provide further information on the variations of the reconstructions in the selected area. With our comparative visualization approach, differences between several mesh reconstruction algorithms can be easily localized and inspected.",
                "AuthorNamesDeduped": "Johanna Schmidt;Reinhold Preiner;Thomas Auzinger;Michael Wimmer 0001;M. Eduard Gröller;Stefan Bruckner",
                "AuthorNames": "Johanna Schmidt;Reinhold Preiner;Thomas Auzinger;Michael Wimmer;M. Eduard Gröller;Stefan Bruckner",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Vienna University of Technology, Austria;Vienna University of Technology, Austria;Vienna University of Technology, Austria;University of Bergen, Norway",
                "InternalReferences": "0.1109/infvis.2002.1173157;10.1109/visual.1990.146402;10.1109/tvcg.2013.213;10.1109/visual.2002.1183790",
                "AuthorKeywords": "Visual analysis, comparative visualization, 3D data exploration, focus+context, mesh comparison",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1286,
                "i": [
                    1286
                ]
            }
        },
        {
            "name": "Michael Wimmer 0001",
            "value": 17,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1504,
            "x": -383.7937429872061,
            "y": 56.14590674190263,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "VAST",
                "Year": 2014,
                "Title": "YMCA - Your Mesh Comparison Application",
                "DOI": "10.1109/vast.2014.7042491",
                "Link": "http://dx.doi.org/10.1109/VAST.2014.7042491",
                "FirstPage": 153,
                "LastPage": 162,
                "PaperType": "C",
                "Abstract": "Polygonal meshes can be created in several different ways. In this paper we focus on the reconstruction of meshes from point clouds, which are sets of points in 3D. Several algorithms that tackle this task already exist, but they have different benefits and drawbacks, which leads to a large number of possible reconstruction results (i.e., meshes). The evaluation of those techniques requires extensive comparisons between different meshes which is up to now done by either placing images of rendered meshes side-by-side, or by encoding differences by heat maps. A major drawback of both approaches is that they do not scale well with the number of meshes. This paper introduces a new comparative visual analysis technique for 3D meshes which enables the simultaneous comparison of several meshes and allows for the interactive exploration of their differences. Our approach gives an overview of the differences of the input meshes in a 2D view. By selecting certain areas of interest, the user can switch to a 3D representation and explore the spatial differences in detail. To inspect local variations, we provide a magic lens tool in 3D. The location and size of the lens provide further information on the variations of the reconstructions in the selected area. With our comparative visualization approach, differences between several mesh reconstruction algorithms can be easily localized and inspected.",
                "AuthorNamesDeduped": "Johanna Schmidt;Reinhold Preiner;Thomas Auzinger;Michael Wimmer 0001;M. Eduard Gröller;Stefan Bruckner",
                "AuthorNames": "Johanna Schmidt;Reinhold Preiner;Thomas Auzinger;Michael Wimmer;M. Eduard Gröller;Stefan Bruckner",
                "AuthorAffiliation": "Vienna University of Technology, Austria;Vienna University of Technology, Austria;Vienna University of Technology, Austria;Vienna University of Technology, Austria;Vienna University of Technology, Austria;University of Bergen, Norway",
                "InternalReferences": "0.1109/infvis.2002.1173157;10.1109/visual.1990.146402;10.1109/tvcg.2013.213;10.1109/visual.2002.1183790",
                "AuthorKeywords": "Visual analysis, comparative visualization, 3D data exploration, focus+context, mesh comparison",
                "AminerCitationCount": 30,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 373,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1286,
                "i": [
                    1286
                ]
            }
        },
        {
            "name": "Penny Rheingans",
            "value": 339,
            "numPapers": 52,
            "cluster": "6",
            "visible": 1,
            "index": 1505,
            "x": 245.15297926102897,
            "y": -300.7490926992823,
            "vy": 0,
            "vx": 0,
            "r": 1.390328151986183,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Visualizing high-dimensional predictive model quality",
                "DOI": "10.1109/visual.2000.885740",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885740",
                "FirstPage": 493,
                "LastPage": 496,
                "PaperType": "C",
                "Abstract": "Using inductive learning techniques to construct classification models from large, high-dimensional data sets is a useful way to make predictions in complex domains. However, these models can be difficult for users to understand. We have developed a set of visualization methods that help users to understand and analyze the behavior of learned models, including techniques for high-dimensional data space projection, display of probabilistic predictions, variable/class correlation, and instance mapping. We show the results of applying these techniques to models constructed from a benchmark data set of census data, and draw conclusions about the utility of these methods for model understanding.",
                "AuthorNamesDeduped": "Penny Rheingans;Marie desJardins",
                "AuthorNames": "P. Rheingans;M. DesJardins",
                "AuthorAffiliation": "Artificial Intelligence Center, SRI International, Inc., USA;Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, USA",
                "InternalReferences": "0.1109/visual.1997.663922;10.1109/infvis.1998.729565;10.1109/visual.1990.146402;10.1109/visual.1997.663868",
                "AuthorKeywords": null,
                "AminerCitationCount": 57,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 163,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2990,
                "i": [
                    2990
                ]
            }
        },
        {
            "name": "Lauro Didier Lins",
            "value": 161,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1506,
            "x": 22.392348996160905,
            "y": 387.49010659168334,
            "vy": 0,
            "vx": 0,
            "r": 1.185377086931491,
            "node": {
                "Conference": "InfoVis",
                "Year": 2013,
                "Title": "Nanocubes for Real-Time Exploration of Spatiotemporal Datasets",
                "DOI": "10.1109/tvcg.2013.179",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.179",
                "FirstPage": 2456,
                "LastPage": 2465,
                "PaperType": "J",
                "Abstract": "Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop's main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.",
                "AuthorNamesDeduped": "Lauro Didier Lins;James T. Klosowski;Carlos Eduardo Scheidegger",
                "AuthorNames": "Lauro Lins;James T. Klosowski;Carlos Scheidegger",
                "AuthorAffiliation": "AT&T Research, USA;AT&T Research, USA;AT&T Research, USA",
                "InternalReferences": "0.1109/tvcg.2006.161;10.1109/infvis.2002.1173141;10.1109/tvcg.2009.191;10.1109/vast.2008.4677357;10.1109/tvcg.2007.70594;10.1109/infvis.2002.1173156;10.1109/visual.1990.146386;10.1109/tvcg.2011.185",
                "AuthorKeywords": "Data cube, Data structures, Interactive exploration",
                "AminerCitationCount": 319,
                "CitationCountCrossRef": 173,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 2970,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1299,
                "i": [
                    1299
                ]
            }
        },
        {
            "name": "Wendy Cowley",
            "value": 73,
            "numPapers": 12,
            "cluster": "4",
            "visible": 1,
            "index": 1507,
            "x": -278.34956427753207,
            "y": -270.68712578641055,
            "vy": 0,
            "vx": 0,
            "r": 1.0840529648819806,
            "node": {
                "Conference": "InfoVis",
                "Year": 2000,
                "Title": "Visualizing sequential patterns for text mining",
                "DOI": "10.1109/infvis.2000.885097",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2000.885097",
                "FirstPage": 105,
                "LastPage": 111,
                "PaperType": "C",
                "Abstract": "A sequential pattern in data mining is a finite series of elements such as A/spl rarr/B/spl rarr/C/spl rarr/D where A, B, C, and D are elements of the same domain. The mining of sequential patterns is designed to find patterns of discrete events that frequently happen in the same arrangement along a timeline. Like association and clustering, the mining of sequential patterns is among the most popular knowledge discovery techniques that apply statistical measures to extract useful information from large datasets. As out computers become more powerful, we are able to mine bigger datasets and obtain hundreds of thousands of sequential patterns in full detail. With this vast amount of data, we argue that neither data mining nor visualization by itself can manage the information and reflect the knowledge effectively. Subsequently, we apply visualization to augment data mining in a study of sequential patterns in large text corpora. The result shows that we can learn more and more quickly in an integrated visual data-mining environment.",
                "AuthorNamesDeduped": "Pak Chung Wong;Wendy Cowley;Harlan Foote;Elizabeth Jurrus;James J. Thomas",
                "AuthorNames": "Pak Chung Wong;W. Cowley;H. Foote;E. Jurrus;J. Thomas",
                "AuthorAffiliation": "Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA;Pacific Northwest National Laboratory, USA",
                "InternalReferences": "0.1109/infvis.1998.729565;10.1109/infvis.1998.729570;10.1109/visual.1998.745302;10.1109/infvis.1995.528686;10.1109/infvis.1999.801866;10.1109/infvis.1997.636791;10.1109/visual.1990.146402",
                "AuthorKeywords": null,
                "AminerCitationCount": 109,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 559,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2947,
                "i": [
                    2947
                ]
            }
        },
        {
            "name": "R. Daniel Bergeron",
            "value": 110,
            "numPapers": 22,
            "cluster": "4",
            "visible": 1,
            "index": 1508,
            "x": 388.22153150956825,
            "y": 11.57767128421349,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Multiresolution multidimensional wavelet brushing",
                "DOI": "10.1109/visual.1996.567800",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.567800",
                "FirstPage": 141,
                "LastPage": 148,
                "PaperType": "C",
                "Abstract": "Brushing is a data visualization technique that identifies and highlights data subsets. We introduce a form of brushing in which the brushed data is usually displayed at a different resolution than the non brushed data. The paper presents the rationale behind the multiresolution support of multivariate data visualization and describes the construction of multiresolution brushing using wavelet approximations. The idea is implemented in an enhanced version of XmdvTool. Real scientific data is used for demonstration and practical applications are suggested.",
                "AuthorNamesDeduped": "Pak Chung Wong;R. Daniel Bergeron",
                "AuthorNames": "Pak Chung Wong;R.D. Bergeron",
                "AuthorAffiliation": "Department of Computer Science, University of New Hampshire, USA;Department of Computer Science, University of New Hampshire, USA",
                "InternalReferences": "0.1109/visual.1990.146386;10.1109/visual.1993.398864;10.1109/visual.1995.480811;10.1109/visual.1994.346302;10.1109/visual.1995.485139;10.1109/visual.1990.146402;10.1109/infvis.1996.559224",
                "AuthorKeywords": null,
                "AminerCitationCount": 92,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 132,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3344,
                "i": [
                    3344
                ]
            }
        },
        {
            "name": "Yves Chiricota",
            "value": 62,
            "numPapers": 15,
            "cluster": "4",
            "visible": 1,
            "index": 1509,
            "x": -294.180538901718,
            "y": 253.7869392452945,
            "vy": 0,
            "vx": 0,
            "r": 1.0713874496257916,
            "node": {
                "Conference": "InfoVis",
                "Year": 2010,
                "Title": "The FlowVizMenu and Parallel Scatterplot Matrix: Hybrid Multidimensional Visualizations for Network Exploration",
                "DOI": "10.1109/tvcg.2010.205",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.205",
                "FirstPage": 1100,
                "LastPage": 1108,
                "PaperType": "J",
                "Abstract": "A standard approach for visualizing multivariate networks is to use one or more multidimensional views (for example, scatterplots) for selecting nodes by various metrics, possibly coordinated with a node-link view of the network. In this paper, we present three novel approaches for achieving a tighter integration of these views through hybrid techniques for multidimensional visualization, graph selection and layout. First, we present the FlowVizMenu, a radial menu containing a scatterplot that can be popped up transiently and manipulated with rapid, fluid gestures to select and modify the axes of its scatterplot. Second, the FlowVizMenu can be used to steer an attribute-driven layout of the network, causing certain nodes of a node-link diagram to move toward their corresponding positions in a scatterplot while others can be positioned manually or by force-directed layout. Third, we describe a novel hybrid approach that combines a scatterplot matrix (SPLOM) and parallel coordinates called the Parallel Scatterplot Matrix (P-SPLOM), which can be used to visualize and select features within the network. We also describe a novel arrangement of scatterplots called the Scatterplot Staircase (SPLOS) that requires less space than a traditional scatterplot matrix. Initial user feedback is reported.",
                "AuthorNamesDeduped": "Christophe Viau;Michael J. McGuffin;Yves Chiricota;Igor Jurisica",
                "AuthorNames": "Christophe Viau;Michael J. McGuffin;Yves Chiricota;Igor Jurisica",
                "AuthorAffiliation": "École de technologie supérieure, Montreal, Canada;École de technologie supérieure, Montreal, Canada;Université du Quàbec à Chicoutimi, Chicoutimi, Canada;Ontario Cancer Institute, PMH UHN, Toronto, Canada",
                "InternalReferences": "0.1109/tvcg.2009.151;10.1109/infvis.2005.1532142;10.1109/tvcg.2007.70523;10.1109/tvcg.2009.179;10.1109/vast.2009.5332586;10.1109/infvis.2005.1532141;10.1109/tvcg.2006.187;10.1109/infvis.2004.47;10.1109/tvcg.2007.70521;10.1109/infvis.2003.1249011;10.1109/tvcg.2008.153",
                "AuthorKeywords": "Interactive graph drawing, network layout, attribute-driven layout, parallel coordinates, scatterplot matrix, radial menu",
                "AminerCitationCount": 104,
                "CitationCountCrossRef": 67,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 1437,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1699,
                "i": [
                    1699
                ]
            }
        },
        {
            "name": "Udeepta Bordoloi",
            "value": 85,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1510,
            "x": 45.50402471595074,
            "y": -385.978475740099,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "View selection for volume rendering",
                "DOI": "10.1109/visual.2005.1532833",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532833",
                "FirstPage": 487,
                "LastPage": 494,
                "PaperType": "C",
                "Abstract": "In a visualization of a three-dimensional dataset, the insights gained are dependent on what is occluded and what is not. Suggestion of interesting viewpoints can improve both the speed and efficiency of data understanding. This paper presents a view selection method designed for volume rendering. It can be used to find informative views for a given scene, or to find a minimal set of representative views which capture the entire scene. It becomes particularly useful when the visualization process is non-interactive - for example, when visualizing large datasets or time-varying sequences. We introduce a viewpoint \"goodness\" measure based on the formulation of entropy from information theory. The measure takes into account the transfer function, the data distribution and the visibility of the voxels. Combined with viewpoint properties like view-likelihood and view-stability, this technique can be used as a guide, which suggests \"interesting\" viewpoints for further exploration. Domain knowledge is incorporated into the algorithm via an importance transfer function or volume. This allows users to obtain view selection behaviors tailored to their specific situations. We generate a view space partitioning, and select one representative view for each partition. Together, this set of views encapsulates the \"interesting\" and distinct views of the data. Viewpoints in this set can be used as starting points for interactive exploration of the data, thus reducing the human effort in visualization. In non-interactive situations, such a set can be used as a representative visualization of the dataset from all directions.",
                "AuthorNamesDeduped": "Udeepta Bordoloi;Han-Wei Shen",
                "AuthorNames": "U.D. Bordoloi;H.-W. Shen",
                "AuthorAffiliation": "Ohio State Uinversity, USA;Ohio State Uinversity, USA",
                "InternalReferences": "0.1109/visual.2000.885694;10.1109/visual.2003.1250386;10.1109/visual.2005.1532834;10.1109/visual.2001.964516",
                "AuthorKeywords": "viewpoint selection, view space partitioning, volume rendering, entropy, visibility",
                "AminerCitationCount": 258,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 789,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2366,
                "i": [
                    2366
                ]
            }
        },
        {
            "name": "Martin Kraus 0001",
            "value": 152,
            "numPapers": 15,
            "cluster": "6",
            "visible": 1,
            "index": 1511,
            "x": 227.2466475005397,
            "y": 315.45041004849793,
            "vy": 0,
            "vx": 0,
            "r": 1.1750143926309728,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Hardware-accelerated volume and isosurface rendering based on cell-projection",
                "DOI": "10.1109/visual.2000.885683",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885683",
                "FirstPage": 109,
                "LastPage": 116,
                "PaperType": "C",
                "Abstract": "We present two beneficial rendering extensions to the projected tetrahedra (PT) algorithm proposed by Shirley and Tuchman (1990). These extensions are compatible with any cell sorting technique, for example the BSP-XMPVO sorting algorithm for unstructured meshes. Using 3D texture mapping our first extension solves the longstanding problem of hardware-accelerated but accurate rendering of tetrahedral volume cells with arbitrary transfer functions. By employing 2D texture mapping our second extension realizes the hardware-accelerated rendering of multiple shaded isosurfaces within the PT algorithm without reconstructing the isosurfaces. Additionally, two methods are presented to combine projected tetrahedral volumes with isosurfaces. The time complexity of all our algorithms is linear in the number of tetrahedra and does neither depend on the number of isosurfaces nor on the employed transfer functions.",
                "AuthorNamesDeduped": "Stefan Röttger;Martin Kraus 0001;Thomas Ertl",
                "AuthorNames": "S. Rottger;M. Kraus;T. Ertl",
                "AuthorAffiliation": "Visualization and Interactive Systems Group, Universität Stuttgart, Germany;Visualization and Interactive Systems Group, Universität Stuttgart, Germany;Visualization and Interactive Systems Group, Universität Stuttgart, Germany",
                "InternalReferences": "0.1109/visual.1993.398846;10.1109/visual.1994.346320;10.1109/visual.1999.809887;10.1109/visual.1994.346308;10.1109/visual.2000.885688;10.1109/visual.1994.346306;10.1109/visual.1997.663853;10.1109/visual.1999.809878;10.1109/visual.1996.568127;10.1109/visual.1995.480806;10.1109/visual.1998.745300;10.1109/visual.1996.568121;10.1109/visual.1998.745713",
                "AuthorKeywords": "Volume Rendering, Isosurfaces, Unstructured Meshes, Cell Projection, Graphics Hardware, Texture Mapping, Compositing",
                "AminerCitationCount": 237,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 105,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2963,
                "i": [
                    2963
                ]
            }
        },
        {
            "name": "Issei Fujishiro",
            "value": 154,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1512,
            "x": -380.7741924019862,
            "y": -79.12657202618573,
            "vy": 0,
            "vx": 0,
            "r": 1.1773172135866437,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "A feature-driven approach to locating optimal viewpoints for volume visualization",
                "DOI": "10.1109/visual.2005.1532834",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532834",
                "FirstPage": 495,
                "LastPage": 502,
                "PaperType": "C",
                "Abstract": "Optimal viewpoint selection is an important task because it considerably influences the amount of information contained in the 2D projected images of 3D objects, and thus dominates their first impressions from a psychological point of view. Although several methods have been proposed that calculate the optimal positions of viewpoints especially for 3D surface meshes, none has been done for solid objects such as volumes. This paper presents a new method of locating such optimal viewpoints when visualizing volumes using direct volume rendering. The major idea behind our method is to decompose an entire volume into a set of feature components, and then find a globally optimal viewpoint by finding a compromise between locally optimal viewpoints for the components. As the feature components, the method employs interval volumes and their combinations that characterize the topological transitions of isosurfaces according to the scalar field. Furthermore, opacity transfer functions are also utilized to assign different weights to the decomposed components so that users can emphasize features of specific interest in the volumes. Several examples of volume datasets together with their optimal positions of viewpoints are exhibited in order to demonstrate that the method can effectively guide naive users to find optimal projections of volumes.",
                "AuthorNamesDeduped": "Shigeo Takahashi;Issei Fujishiro;Yuriko Takeshima;Tomoyuki Nishita",
                "AuthorNames": "S. Takahashi;I. Fujishiro;Y. Takeshima;T. Nishita",
                "AuthorAffiliation": "University of Tokyo, Japan;University of Tohoku, Japan;Tohoku University;University of Tokyo, Japan",
                "InternalReferences": "0.1109/visual.1995.480789;10.1109/visual.2004.96;10.1109/visual.2002.1183774;10.1109/visual.2005.1532833;10.1109/visual.1997.663875;10.1109/visual.2002.1183785",
                "AuthorKeywords": "viewpoint selection, viewpoint entropy, direct volume rendering, interval volumes, level-set graphs",
                "AminerCitationCount": 234,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2372,
                "i": [
                    2372
                ]
            }
        },
        {
            "name": "Yuriko Takeshima",
            "value": 125,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1513,
            "x": 334.3307382582106,
            "y": -198.9295288681894,
            "vy": 0,
            "vx": 0,
            "r": 1.1439263097294186,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "A feature-driven approach to locating optimal viewpoints for volume visualization",
                "DOI": "10.1109/visual.2005.1532834",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532834",
                "FirstPage": 495,
                "LastPage": 502,
                "PaperType": "C",
                "Abstract": "Optimal viewpoint selection is an important task because it considerably influences the amount of information contained in the 2D projected images of 3D objects, and thus dominates their first impressions from a psychological point of view. Although several methods have been proposed that calculate the optimal positions of viewpoints especially for 3D surface meshes, none has been done for solid objects such as volumes. This paper presents a new method of locating such optimal viewpoints when visualizing volumes using direct volume rendering. The major idea behind our method is to decompose an entire volume into a set of feature components, and then find a globally optimal viewpoint by finding a compromise between locally optimal viewpoints for the components. As the feature components, the method employs interval volumes and their combinations that characterize the topological transitions of isosurfaces according to the scalar field. Furthermore, opacity transfer functions are also utilized to assign different weights to the decomposed components so that users can emphasize features of specific interest in the volumes. Several examples of volume datasets together with their optimal positions of viewpoints are exhibited in order to demonstrate that the method can effectively guide naive users to find optimal projections of volumes.",
                "AuthorNamesDeduped": "Shigeo Takahashi;Issei Fujishiro;Yuriko Takeshima;Tomoyuki Nishita",
                "AuthorNames": "S. Takahashi;I. Fujishiro;Y. Takeshima;T. Nishita",
                "AuthorAffiliation": "University of Tokyo, Japan;University of Tohoku, Japan;Tohoku University;University of Tokyo, Japan",
                "InternalReferences": "0.1109/visual.1995.480789;10.1109/visual.2004.96;10.1109/visual.2002.1183774;10.1109/visual.2005.1532833;10.1109/visual.1997.663875;10.1109/visual.2002.1183785",
                "AuthorKeywords": "viewpoint selection, viewpoint entropy, direct volume rendering, interval volumes, level-set graphs",
                "AminerCitationCount": 234,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 549,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2372,
                "i": [
                    2372
                ]
            }
        },
        {
            "name": "Gabriel Mistelbauer",
            "value": 15,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1514,
            "x": -112.18715918867969,
            "y": 372.6446582378095,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Smart super views---A knowledge-assisted interface for medical visualization",
                "DOI": "10.1109/vast.2012.6400555",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400555",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Due to the ever growing volume of acquired data and information, users have to be constantly aware of the methods for their exploration and for interaction. Of these, not each might be applicable to the data at hand or might reveal the desired result. Owing to this, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization, which reduces the necessary effort to use new visualization methods, by providing only the most relevant ones in a smart way. Consequently, we are able to expand such a system with innovations without the users to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.",
                "AuthorNamesDeduped": "Gabriel Mistelbauer;Hamed Bouzari;Rüdiger Schernthaner;Ivan Baclija;Arnold Köchl;Stefan Bruckner;Milos Srámek;M. Eduard Gröller",
                "AuthorNames": "Gabriel Mistelbauer;Arnold Köchl;Rudiger Schernthaner;Ivan Baclija;Rüdiger Schernthaner;Stefan Bruckner;Milos Sramek;Meister Eduard Gröller",
                "AuthorAffiliation": "Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Medical University of Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Vienna University of Technoloqy, Austria",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/tvcg.2007.70576;10.1109/tvcg.2007.70591;10.1109/visual.2002.1183754;10.1109/visual.2005.1532856;10.1109/tvcg.2011.183;10.1109/visual.2005.1532818;10.1109/tvcg.2006.148;10.1109/tvcg.2010.199",
                "AuthorKeywords": "Visualization, Fuzzy Logic, Interaction",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 401,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1515,
                "i": [
                    1515
                ]
            }
        },
        {
            "name": "Ivan Baclija",
            "value": 15,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1515,
            "x": -169.05031312124677,
            "y": -350.6736255175294,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Smart super views---A knowledge-assisted interface for medical visualization",
                "DOI": "10.1109/vast.2012.6400555",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400555",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Due to the ever growing volume of acquired data and information, users have to be constantly aware of the methods for their exploration and for interaction. Of these, not each might be applicable to the data at hand or might reveal the desired result. Owing to this, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization, which reduces the necessary effort to use new visualization methods, by providing only the most relevant ones in a smart way. Consequently, we are able to expand such a system with innovations without the users to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.",
                "AuthorNamesDeduped": "Gabriel Mistelbauer;Hamed Bouzari;Rüdiger Schernthaner;Ivan Baclija;Arnold Köchl;Stefan Bruckner;Milos Srámek;M. Eduard Gröller",
                "AuthorNames": "Gabriel Mistelbauer;Arnold Köchl;Rudiger Schernthaner;Ivan Baclija;Rüdiger Schernthaner;Stefan Bruckner;Milos Sramek;Meister Eduard Gröller",
                "AuthorAffiliation": "Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Medical University of Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Vienna University of Technoloqy, Austria",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/tvcg.2007.70576;10.1109/tvcg.2007.70591;10.1109/visual.2002.1183754;10.1109/visual.2005.1532856;10.1109/tvcg.2011.183;10.1109/visual.2005.1532818;10.1109/tvcg.2006.148;10.1109/tvcg.2010.199",
                "AuthorKeywords": "Visualization, Fuzzy Logic, Interaction",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 401,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1515,
                "i": [
                    1515
                ]
            }
        },
        {
            "name": "Rüdiger Schernthaner",
            "value": 15,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1516,
            "x": 361.648327705301,
            "y": 144.43159996330152,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Smart super views---A knowledge-assisted interface for medical visualization",
                "DOI": "10.1109/vast.2012.6400555",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400555",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Due to the ever growing volume of acquired data and information, users have to be constantly aware of the methods for their exploration and for interaction. Of these, not each might be applicable to the data at hand or might reveal the desired result. Owing to this, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization, which reduces the necessary effort to use new visualization methods, by providing only the most relevant ones in a smart way. Consequently, we are able to expand such a system with innovations without the users to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.",
                "AuthorNamesDeduped": "Gabriel Mistelbauer;Hamed Bouzari;Rüdiger Schernthaner;Ivan Baclija;Arnold Köchl;Stefan Bruckner;Milos Srámek;M. Eduard Gröller",
                "AuthorNames": "Gabriel Mistelbauer;Arnold Köchl;Rudiger Schernthaner;Ivan Baclija;Rüdiger Schernthaner;Stefan Bruckner;Milos Sramek;Meister Eduard Gröller",
                "AuthorAffiliation": "Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Medical University of Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Vienna University of Technoloqy, Austria",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/tvcg.2007.70576;10.1109/tvcg.2007.70591;10.1109/visual.2002.1183754;10.1109/visual.2005.1532856;10.1109/tvcg.2011.183;10.1109/visual.2005.1532818;10.1109/tvcg.2006.148;10.1109/tvcg.2010.199",
                "AuthorKeywords": "Visualization, Fuzzy Logic, Interaction",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 401,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1515,
                "i": [
                    1515
                ]
            }
        },
        {
            "name": "Arnold Köchl",
            "value": 47,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1517,
            "x": -364.350434969586,
            "y": 137.83599144444634,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Smart super views---A knowledge-assisted interface for medical visualization",
                "DOI": "10.1109/vast.2012.6400555",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400555",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Due to the ever growing volume of acquired data and information, users have to be constantly aware of the methods for their exploration and for interaction. Of these, not each might be applicable to the data at hand or might reveal the desired result. Owing to this, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization, which reduces the necessary effort to use new visualization methods, by providing only the most relevant ones in a smart way. Consequently, we are able to expand such a system with innovations without the users to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.",
                "AuthorNamesDeduped": "Gabriel Mistelbauer;Hamed Bouzari;Rüdiger Schernthaner;Ivan Baclija;Arnold Köchl;Stefan Bruckner;Milos Srámek;M. Eduard Gröller",
                "AuthorNames": "Gabriel Mistelbauer;Arnold Köchl;Rudiger Schernthaner;Ivan Baclija;Rüdiger Schernthaner;Stefan Bruckner;Milos Sramek;Meister Eduard Gröller",
                "AuthorAffiliation": "Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Medical University of Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Vienna University of Technoloqy, Austria",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/tvcg.2007.70576;10.1109/tvcg.2007.70591;10.1109/visual.2002.1183754;10.1109/visual.2005.1532856;10.1109/tvcg.2011.183;10.1109/visual.2005.1532818;10.1109/tvcg.2006.148;10.1109/tvcg.2010.199",
                "AuthorKeywords": "Visualization, Fuzzy Logic, Interaction",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 401,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1515,
                "i": [
                    1515
                ]
            }
        },
        {
            "name": "Milos Srámek",
            "value": 66,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1518,
            "x": 175.6116306102565,
            "y": -347.86571431287507,
            "vy": 0,
            "vx": 0,
            "r": 1.075993091537133,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Smart super views---A knowledge-assisted interface for medical visualization",
                "DOI": "10.1109/vast.2012.6400555",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400555",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Due to the ever growing volume of acquired data and information, users have to be constantly aware of the methods for their exploration and for interaction. Of these, not each might be applicable to the data at hand or might reveal the desired result. Owing to this, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization, which reduces the necessary effort to use new visualization methods, by providing only the most relevant ones in a smart way. Consequently, we are able to expand such a system with innovations without the users to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.",
                "AuthorNamesDeduped": "Gabriel Mistelbauer;Hamed Bouzari;Rüdiger Schernthaner;Ivan Baclija;Arnold Köchl;Stefan Bruckner;Milos Srámek;M. Eduard Gröller",
                "AuthorNames": "Gabriel Mistelbauer;Arnold Köchl;Rudiger Schernthaner;Ivan Baclija;Rüdiger Schernthaner;Stefan Bruckner;Milos Sramek;Meister Eduard Gröller",
                "AuthorAffiliation": "Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Medical University of Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Vienna University of Technoloqy, Austria",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/tvcg.2007.70576;10.1109/tvcg.2007.70591;10.1109/visual.2002.1183754;10.1109/visual.2005.1532856;10.1109/tvcg.2011.183;10.1109/visual.2005.1532818;10.1109/tvcg.2006.148;10.1109/tvcg.2010.199",
                "AuthorKeywords": "Visualization, Fuzzy Logic, Interaction",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 401,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1515,
                "i": [
                    1515
                ]
            }
        },
        {
            "name": "Petr Felkel",
            "value": 122,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1519,
            "x": 105.52409169334112,
            "y": 375.25280288399625,
            "vy": 0,
            "vx": 0,
            "r": 1.1404720782959126,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "CPR - curved planar reformation",
                "DOI": "10.1109/visual.2002.1183754",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183754",
                "FirstPage": 37,
                "LastPage": 44,
                "PaperType": "C",
                "Abstract": "Visualization of tubular structures such as blood vessels is an important topic in medical imaging. One way to display tubular structures for diagnostic purposes is to generate longitudinal cross-sections in order to show their lumen, wall, and surrounding tissue in a curved plane. This process is called curved planar reformation (CPR). We present three different methods to generate CPR images. A tube-phantom was scanned with computed tomography (CT) to illustrate the properties of the different CPR methods. Furthermore we introduce enhancements to these methods: thick-CPR, rotating-CPR and multi-path-CPR.",
                "AuthorNamesDeduped": "Armin Kanitsar;Dominik Fleischmann;Rainer Wegenkittl;Petr Felkel;Meister Eduard Gröller",
                "AuthorNames": "A. Kanitsar;D. Fleischmann;R. Wegenkittl;P. Felkel;E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Department of Radiology, University of Technology, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2001.964555;10.1109/visual.2001.964538",
                "AuthorKeywords": "computed tomography angiography, vessel analysis, curved planar reformation",
                "AminerCitationCount": 286,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 842,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2766,
                "i": [
                    2766
                ]
            }
        },
        {
            "name": "Meister Eduard Gröller",
            "value": 110,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1520,
            "x": -331.3988021847666,
            "y": -205.48682174412536,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "CPR - curved planar reformation",
                "DOI": "10.1109/visual.2002.1183754",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183754",
                "FirstPage": 37,
                "LastPage": 44,
                "PaperType": "C",
                "Abstract": "Visualization of tubular structures such as blood vessels is an important topic in medical imaging. One way to display tubular structures for diagnostic purposes is to generate longitudinal cross-sections in order to show their lumen, wall, and surrounding tissue in a curved plane. This process is called curved planar reformation (CPR). We present three different methods to generate CPR images. A tube-phantom was scanned with computed tomography (CT) to illustrate the properties of the different CPR methods. Furthermore we introduce enhancements to these methods: thick-CPR, rotating-CPR and multi-path-CPR.",
                "AuthorNamesDeduped": "Armin Kanitsar;Dominik Fleischmann;Rainer Wegenkittl;Petr Felkel;Meister Eduard Gröller",
                "AuthorNames": "A. Kanitsar;D. Fleischmann;R. Wegenkittl;P. Felkel;E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;Department of Radiology, University of Technology, Vienna, Austria;VRVis Research Center, Vienna, Austria;VRVis Research Center, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2001.964555;10.1109/visual.2001.964538",
                "AuthorKeywords": "computed tomography angiography, vessel analysis, curved planar reformation",
                "AminerCitationCount": 286,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 842,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2766,
                "i": [
                    2766
                ]
            }
        },
        {
            "name": "Dominique Sandner",
            "value": 34,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1521,
            "x": 383.2934964388774,
            "y": -72.36087055626325,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "Computed tomography angiography: a case study of peripheral vessel investigation",
                "DOI": "10.1109/visual.2001.964555",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964555",
                "FirstPage": 477,
                "LastPage": 480,
                "PaperType": "C",
                "Abstract": "This paper deals with vessel exploration based on computed tomography angiography. Large image sequences of the lower extremities are investigated in a clinical environment. Two different approaches for peripheral vessel diagnosis dealing with stenosis and calcification detection are introduced. The paper presents an automated vessel-tracking tool for curved planar reformation. An interactive segmentation tool for bone removal is proposed.",
                "AuthorNamesDeduped": "Armin Kanitsar;Rainer Wegenkittl;Petr Felkel;Dominik Fleischmann;Dominique Sandner;M. Eduard Gröller",
                "AuthorNames": "A. Kanitsar;D. Fleischmann;R. Wegenkittl;D. Sandner;P. Felkel;E. Groller",
                "AuthorAffiliation": "Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria;TIANI Medgraph, Austria;VRVis Center Vienna, Austria;Department of Radiology, University of Technology, Vienna, Austria;Department of Radiology, University of Technology, Vienna, Austria;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": null,
                "AuthorKeywords": "Computed Tomography Angiography (CTA), semi automatic segmentation, optimal path computation",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 10,
                "PubsCitedCrossRef": 7,
                "DownloadsXplore": 150,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2900,
                "i": [
                    2900
                ]
            }
        },
        {
            "name": "Tim Lammarsch",
            "value": 22,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1522,
            "x": -233.82643260202042,
            "y": 312.37029214797747,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "TimeBench: A Data Model and Software Library for Visual Analytics of Time-Oriented Data",
                "DOI": "10.1109/tvcg.2013.206",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.206",
                "FirstPage": 2247,
                "LastPage": 2256,
                "PaperType": "J",
                "Abstract": "Time-oriented data play an essential role in many Visual Analytics scenarios such as extracting medical insights from collections of electronic health records or identifying emerging problems and vulnerabilities in network traffic. However, many software libraries for Visual Analytics treat time as a flat numerical data type and insufficiently tackle the complexity of the time domain such as calendar granularities and intervals. Therefore, developers of advanced Visual Analytics designs need to implement temporal foundations in their application code over and over again. We present TimeBench, a software library that provides foundational data structures and algorithms for time-oriented data in Visual Analytics. Its expressiveness and developer accessibility have been evaluated through application examples demonstrating a variety of challenges with time-oriented data and long-term developer studies conducted in the scope of research and student projects.",
                "AuthorNamesDeduped": "Alexander Rind;Tim Lammarsch;Wolfgang Aigner;Bilal Alsallakh;Silvia Miksch",
                "AuthorNames": "Alexander Rind;Tim Lammarsch;Wolfgang Aigner;Bilal Alsallakh;Silvia Miksch",
                "AuthorAffiliation": "Institute of Software Technology & Interactive Systems, Vienna University of Technology, Austria;Institute of Software Technology & Interactive Systems, Vienna University of Technology, Austria;Institute of Software Technology & Interactive Systems, Vienna University of Technology, Austria;Institute of Software Technology & Interactive Systems, Vienna University of Technology, Austria;Institute of Software Technology & Interactive Systems, Vienna University of Technology, Austria",
                "InternalReferences": "0.1109/tvcg.2009.174;10.1109/infvis.2004.12;10.1109/vast.2011.6102446;10.1109/vast.2006.261428;10.1109/infvis.2000.885086;10.1109/tvcg.2010.144;10.1109/tvcg.2006.178;10.1109/infvis.2004.64;10.1109/tvcg.2013.222;10.1109/infvis.2002.1173155;10.1109/tvcg.2011.185;10.1109/tvcg.2010.126;10.1109/infvis.1997.636792",
                "AuthorKeywords": "Visual Analytics, information visualization, toolkits, software infrastructure, time, temporal data",
                "AminerCitationCount": 40,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 52,
                "DownloadsXplore": 1212,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1390,
                "i": [
                    1390
                ]
            }
        },
        {
            "name": "Alexander Rind",
            "value": 38,
            "numPapers": 31,
            "cluster": "5",
            "visible": 1,
            "index": 1523,
            "x": -38.599436381490555,
            "y": -388.4071105309882,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "VAST",
                "Year": 2017,
                "Title": "The Role of Explicit Knowledge: A Conceptual Model of Knowledge-Assisted Visual Analytics",
                "DOI": "10.1109/vast.2017.8585498",
                "Link": "http://dx.doi.org/10.1109/VAST.2017.8585498",
                "FirstPage": 92,
                "LastPage": 103,
                "PaperType": "C",
                "Abstract": "Visual Analytics (VA) aims to combine the strengths of humans and computers for effective data analysis. In this endeavor, humans' tacit knowledge from prior experience is an important asset that can be leveraged by both human and computer to improve the analytic process. While VA environments are starting to include features to formalize, store, and utilize such knowledge, the mechanisms and degree in which these environments integrate explicit knowledge varies widely. Additionally, this important class of VA environments has never been elaborated on by existing work on VA theory. This paper proposes a conceptual model of Knowledge-assisted VA conceptually grounded on the visualization model by van Wijk. We apply the model to describe various examples of knowledge-assisted VA from the literature and elaborate on three of them in finer detail. Moreover, we illustrate the utilization of the model to compare different design alternatives and to evaluate existing approaches with respect to their use of knowledge. Finally, the model can inspire designers to generate novel VA environments using explicit knowledge effectively.",
                "AuthorNamesDeduped": "Paolo Federico 0001;Markus Wagner 0008;Alexander Rind;Albert Amor-Amoros;Silvia Miksch;Wolfgang Aigner",
                "AuthorNames": "Paolo Federico;Markus Wagner;Alexander Rind;Albert Amor-Amorós;Silvia Miksch;Wolfgang Aigner",
                "AuthorAffiliation": "TU Wien, Austria;St. Poelten University of Applied Sciences, Austria and TU Wien, Austria;St. Poelten University of Applied Sciences, Austria and TU Wien, Austria;TU Wien, Austria;TU Wien, Austria;TU Wien, Austria",
                "InternalReferences": "0.1109/tvcg.2013.146;10.1109/tvcg.2014.2346575;10.1109/infvis.1997.636792;10.1109/tvcg.2016.2598468;10.1109/infvis.2000.885092;10.1109/infvis.1998.729560;10.1109/tvcg.2016.2598460;10.1109/tvcg.2016.2598471;10.1109/vast.2008.4677352;10.1109/tvcg.2008.109;10.1109/vast.2012.6400555;10.1109/vast.2010.5654451;10.1109/tvcg.2014.2346481;10.1109/tvcg.2016.2598839;10.1109/vast.2007.4389021;10.1109/tvcg.2014.2346574;10.1109/tvcg.2016.2598829;10.1109/visual.2005.1532781",
                "AuthorKeywords": "Automated analysis,tacit knowledge,explicit knowledge,visual analytics,information visualization,theory and model",
                "AminerCitationCount": 56,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 81,
                "DownloadsXplore": 901,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 869,
                "i": [
                    869
                ]
            }
        },
        {
            "name": "Victoria Interrante",
            "value": 204,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1524,
            "x": 290.9226879088125,
            "y": 260.41119342284753,
            "vy": 0,
            "vx": 0,
            "r": 1.234887737478411,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Illustrating transparent surfaces with curvature-directed strokes",
                "DOI": "10.1109/visual.1996.568110",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568110",
                "FirstPage": 211,
                "LastPage": 218,
                "PaperType": "C",
                "Abstract": "Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for \"texturing\" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose.",
                "AuthorNamesDeduped": "Victoria Interrante;Henry Fuchs;Stephen M. Pizer",
                "AuthorNames": "V. Interrante;H. Fuchs;S. Pizer",
                "AuthorAffiliation": "ICASE, NASA-Langley Research Center, USA;North Carolina State University, Chapel Hill, USA;North Carolina State University, Chapel Hill, USA",
                "InternalReferences": "0.1109/visual.1995.480795;10.1109/visual.1990.146395;10.1109/visual.1996.568111",
                "AuthorKeywords": null,
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 88,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3345,
                "i": [
                    3345
                ]
            }
        },
        {
            "name": "Philip A. Legg",
            "value": 57,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1525,
            "x": -390.5505980566359,
            "y": 4.497816982053551,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Hierarchical Event Selection for Video Storyboards with a Case Study on Snooker Video Visualization",
                "DOI": "10.1109/tvcg.2011.208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.208",
                "FirstPage": 1747,
                "LastPage": 1756,
                "PaperType": "J",
                "Abstract": "Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas.",
                "AuthorNamesDeduped": "Matthew L. Parry;Philip A. Legg;David H. S. Chung;Iwan W. Griffiths;Min Chen 0001",
                "AuthorNames": "Matthew L. Parry;Philip A. Legg;David H.S. Chung;Iwan W. Griffiths;Min Chen",
                "AuthorAffiliation": "Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Sports Science, the College of Engineering, Swansea University, UK;Swansea University, UK and Oxford e-Research Centre, University of Oxford, UK",
                "InternalReferences": "0.1109/tvcg.2008.185;10.1109/infvis.2004.27;10.1109/visual.2003.1250401;10.1109/tvcg.2007.70544;10.1109/tvcg.2006.194",
                "AuthorKeywords": "Multimedia visualization, Time series data, Illustrative visualization",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1658,
                "i": [
                    1658
                ]
            }
        },
        {
            "name": "David H. S. Chung",
            "value": 57,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1526,
            "x": 285.0350021676906,
            "y": -267.2172291212986,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Hierarchical Event Selection for Video Storyboards with a Case Study on Snooker Video Visualization",
                "DOI": "10.1109/tvcg.2011.208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.208",
                "FirstPage": 1747,
                "LastPage": 1756,
                "PaperType": "J",
                "Abstract": "Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas.",
                "AuthorNamesDeduped": "Matthew L. Parry;Philip A. Legg;David H. S. Chung;Iwan W. Griffiths;Min Chen 0001",
                "AuthorNames": "Matthew L. Parry;Philip A. Legg;David H.S. Chung;Iwan W. Griffiths;Min Chen",
                "AuthorAffiliation": "Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Sports Science, the College of Engineering, Swansea University, UK;Swansea University, UK and Oxford e-Research Centre, University of Oxford, UK",
                "InternalReferences": "0.1109/tvcg.2008.185;10.1109/infvis.2004.27;10.1109/visual.2003.1250401;10.1109/tvcg.2007.70544;10.1109/tvcg.2006.194",
                "AuthorKeywords": "Multimedia visualization, Time series data, Illustrative visualization",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1658,
                "i": [
                    1658
                ]
            }
        },
        {
            "name": "Matthew L. Parry",
            "value": 57,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1527,
            "x": -29.683012828427117,
            "y": 389.7036293767707,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Hierarchical Event Selection for Video Storyboards with a Case Study on Snooker Video Visualization",
                "DOI": "10.1109/tvcg.2011.208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.208",
                "FirstPage": 1747,
                "LastPage": 1756,
                "PaperType": "J",
                "Abstract": "Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas.",
                "AuthorNamesDeduped": "Matthew L. Parry;Philip A. Legg;David H. S. Chung;Iwan W. Griffiths;Min Chen 0001",
                "AuthorNames": "Matthew L. Parry;Philip A. Legg;David H.S. Chung;Iwan W. Griffiths;Min Chen",
                "AuthorAffiliation": "Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Sports Science, the College of Engineering, Swansea University, UK;Swansea University, UK and Oxford e-Research Centre, University of Oxford, UK",
                "InternalReferences": "0.1109/tvcg.2008.185;10.1109/infvis.2004.27;10.1109/visual.2003.1250401;10.1109/tvcg.2007.70544;10.1109/tvcg.2006.194",
                "AuthorKeywords": "Multimedia visualization, Time series data, Illustrative visualization",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1658,
                "i": [
                    1658
                ]
            }
        },
        {
            "name": "Iwan W. Griffiths",
            "value": 57,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1528,
            "x": -241.43267933368364,
            "y": -307.5065224507582,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Hierarchical Event Selection for Video Storyboards with a Case Study on Snooker Video Visualization",
                "DOI": "10.1109/tvcg.2011.208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.208",
                "FirstPage": 1747,
                "LastPage": 1756,
                "PaperType": "J",
                "Abstract": "Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas.",
                "AuthorNamesDeduped": "Matthew L. Parry;Philip A. Legg;David H. S. Chung;Iwan W. Griffiths;Min Chen 0001",
                "AuthorNames": "Matthew L. Parry;Philip A. Legg;David H.S. Chung;Iwan W. Griffiths;Min Chen",
                "AuthorAffiliation": "Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Computer Science, the College of Science, and the College of Engineering, Swansea University, UK;Department of Sports Science, the College of Engineering, Swansea University, UK;Swansea University, UK and Oxford e-Research Centre, University of Oxford, UK",
                "InternalReferences": "0.1109/tvcg.2008.185;10.1109/infvis.2004.27;10.1109/visual.2003.1250401;10.1109/tvcg.2007.70544;10.1109/tvcg.2006.194",
                "AuthorKeywords": "Multimedia visualization, Time series data, Illustrative visualization",
                "AminerCitationCount": 46,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1658,
                "i": [
                    1658
                ]
            }
        },
        {
            "name": "Thomas Baudel",
            "value": 29,
            "numPapers": 21,
            "cluster": "5",
            "visible": 1,
            "index": 1529,
            "x": 385.8687779850184,
            "y": 63.68112888720288,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "VAST",
                "Year": 2013,
                "Title": "Decision Exploration Lab: A Visual Analytics Solution for Decision Management",
                "DOI": "10.1109/tvcg.2013.146",
                "Link": "http://dx.doi.org/10.1109/TVCG.2013.146",
                "FirstPage": 1972,
                "LastPage": 1981,
                "PaperType": "J",
                "Abstract": "We present a visual analytics solution designed to address prevalent issues in the area of Operational Decision Management (ODM). In ODM, which has its roots in Artificial Intelligence (Expert Systems) and Management Science, it is increasingly important to align business decisions with business goals. In our work, we consider decision models (executable models of the business domain) as ontologies that describe the business domain, and production rules that describe the business logic of decisions to be made over this ontology. Executing a decision model produces an accumulation of decisions made over time for individual cases. We are interested, first, to get insight in the decision logic and the accumulated facts by themselves. Secondly and more importantly, we want to see how the accumulated facts reveal potential divergences between the reality as captured by the decision model, and the reality as captured by the executed decisions. We illustrate the motivation, added value for visual analytics, and our proposed solution and tooling through a business case from the car insurance industry.",
                "AuthorNamesDeduped": "Bertjan Broeksema;Thomas Baudel;Arthur G. Telea;Paolo Crisafulli",
                "AuthorNames": "Bertjan Broeksema;Thomas Baudel;Alex Telea;Paolo Crisafulli",
                "AuthorAffiliation": "Rijksuniversiteit Groningen, Groningen, Groningen, NL;Center for Advanced Studies, IBM France, France;INRIA, University of Bordeaux, France;IBM France, France",
                "InternalReferences": "0.1109/visual.1991.175815;10.1109/vast.2011.6102463;10.1109/vast.2010.5652398;10.1109/vast.2008.4677361;10.1109/vast.2008.4677363;10.1109/tvcg.2011.185;10.1109/vast.2011.6102457",
                "AuthorKeywords": "Decision support systems, model validation and analysis, multivariate Statistics, program analysis",
                "AminerCitationCount": 24,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 1230,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1396,
                "i": [
                    1396
                ]
            }
        },
        {
            "name": "Kasper Dinkla",
            "value": 50,
            "numPapers": 21,
            "cluster": "3",
            "visible": 1,
            "index": 1530,
            "x": -327.6506702913754,
            "y": 213.76397792334532,
            "vy": 0,
            "vx": 0,
            "r": 1.0575705238917674,
            "node": {
                "Conference": "InfoVis",
                "Year": 2016,
                "Title": "Screenit: Visual Analysis of Cellular Screens",
                "DOI": "10.1109/tvcg.2016.2598587",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598587",
                "FirstPage": 591,
                "LastPage": 600,
                "PaperType": "J",
                "Abstract": "High-throughput and high-content screening enables large scale, cost-effective experiments in which cell cultures are exposed to a wide spectrum of drugs. The resulting multivariate data sets have a large but shallow hierarchical structure. The deepest level of this structure describes cells in terms of numeric features that are derived from image data. The subsequent level describes enveloping cell cultures in terms of imposed experiment conditions (exposure to drugs). We present Screenit, a visual analysis approach designed in close collaboration with screening experts. Screenit enables the navigation and analysis of multivariate data at multiple hierarchy levels and at multiple levels of detail. Screenit integrates the interactive modeling of cell physical states (phenotypes) and the effects of drugs on cell cultures (hits). In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, while providing an interface that is designed to match workflows of screening experts. We demonstrate analyses for a real-world data set, CellMorph, with 6 million cells across 20,000 cell cultures.",
                "AuthorNamesDeduped": "Kasper Dinkla;Hendrik Strobelt;Bryan Genest;Stephan Reiling;Mark Borowsky;Hanspeter Pfister",
                "AuthorNames": "Kasper Dinkla;Hendrik Strobelt;Bryan Genest;Stephan Reiling;Mark Borowsky;Hanspeter Pfister",
                "AuthorAffiliation": "Harvard University;Harvard University;Novartis Institute of BioMedical Research;Novartis Institute of BioMedical Research;Novartis Institute of BioMedical Research;Harvard University",
                "InternalReferences": "0.1109/vast.2012.6400492;10.1109/tvcg.2014.2346752;10.1109/tvcg.2015.2466971;10.1109/tvcg.2011.253;10.1109/vast.2010.5652443;10.1109/tvcg.2012.213;10.1109/tvcg.2014.2346578;10.1109/tvcg.2013.173;10.1109/vast.2011.6102453;10.1109/tvcg.2014.2346482",
                "AuthorKeywords": "High-content screening;visual analysis;feature selection;image classification;biology;multivariate;hierarchy",
                "AminerCitationCount": 9,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 48,
                "DownloadsXplore": 688,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 924,
                "i": [
                    924
                ]
            }
        },
        {
            "name": "Michel A. Westenberg",
            "value": 31,
            "numPapers": 12,
            "cluster": "3",
            "visible": 1,
            "index": 1531,
            "x": 97.23566522655226,
            "y": -379.0715307273152,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "Compressed Adjacency Matrices: Untangling Gene Regulatory Networks",
                "DOI": "10.1109/tvcg.2012.208",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.208",
                "FirstPage": 2457,
                "LastPage": 2466,
                "PaperType": "J",
                "Abstract": "We present a novel technique-Compressed Adjacency Matrices-for visualizing gene regulatory networks. These directed networks have strong structural characteristics: out-degrees with a scale-free distribution, in-degrees bound by a low maximum, and few and small cycles. Standard visualization techniques, such as node-link diagrams and adjacency matrices, are impeded by these network characteristics. The scale-free distribution of out-degrees causes a high number of intersecting edges in node-link diagrams. Adjacency matrices become space-inefficient due to the low in-degrees and the resulting sparse network. Compressed adjacency matrices, however, exploit these structural characteristics. By cutting open and rearranging an adjacency matrix, we achieve a compact and neatly-arranged visualization. Compressed adjacency matrices allow for easy detection of subnetworks with a specific structure, so-called motifs, which provide important knowledge about gene regulatory networks to domain experts. We summarize motifs commonly referred to in the literature, and relate them to network analysis tasks common to the visualization domain. We show that a user can easily find the important motifs in compressed adjacency matrices, and that this is hard in standard adjacency matrix and node-link diagrams. We also demonstrate that interaction techniques for standard adjacency matrices can be used for our compressed variant. These techniques include rearrangement clustering, highlighting, and filtering.",
                "AuthorNamesDeduped": "Kasper Dinkla;Michel A. Westenberg;Jarke J. van Wijk",
                "AuthorNames": "Kasper Dinkla;Michel A. Westenberg;Jarke J. van Wijk",
                "AuthorAffiliation": "Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands;Eindhoven University of Technology, Netherlands",
                "InternalReferences": "0.1109/tvcg.2011.187;10.1109/tvcg.2006.160;10.1109/tvcg.2007.70582;10.1109/infvis.2004.1;10.1109/infvis.2005.1532126;10.1109/infvis.2004.46;10.1109/tvcg.2006.147;10.1109/tvcg.2008.141;10.1109/tvcg.2007.70556;10.1109/infvis.2004.5;10.1109/tvcg.2006.156;10.1109/tvcg.2010.159;10.1109/infvis.2003.1249030",
                "AuthorKeywords": "Network, gene regulation, scale-free, adjacency matrix",
                "AminerCitationCount": 66,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 43,
                "DownloadsXplore": 1008,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1424,
                "i": [
                    1424
                ]
            }
        },
        {
            "name": "Annika Frank",
            "value": 25,
            "numPapers": 12,
            "cluster": "5",
            "visible": 1,
            "index": 1532,
            "x": 184.42076618610875,
            "y": 345.30997813461545,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "InfoVis",
                "Year": 2012,
                "Title": "RelEx: Visualization for Actively Changing Overlay Network Specifications",
                "DOI": "10.1109/tvcg.2012.255",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.255",
                "FirstPage": 2729,
                "LastPage": 2738,
                "PaperType": "J",
                "Abstract": "We present a network visualization design study focused on supporting automotive engineers who need to specify and optimize traffic patterns for in-car communication networks. The task and data abstractions that we derived support actively making changes to an overlay network, where logical communication specifications must be mapped to an underlying physical network. These abstractions are very different from the dominant use case in visual network analysis, namely identifying clusters and central nodes, that stems from the domain of social network analysis. Our visualization tool RelEx was created and iteratively refined through a full user-centered design process that included a full problem characterization phase before tool design began, paper prototyping, iterative refinement in close collaboration with expert users for formative evaluation, deployment in the field with real analysts using their own data, usability testing with non-expert users, and summative evaluation at the end of the deployment. In the summative post-deployment study, which entailed domain experts using the tool over several weeks in their daily practice, we documented many examples where the use of RelEx simplified or sped up their work compared to previous practices.",
                "AuthorNamesDeduped": "Michael Sedlmair;Annika Frank;Tamara Munzner;Andreas Butz",
                "AuthorNames": "Michael Sedlmair;Annika Frank;Tamara Munzner;Andreas Butz",
                "AuthorAffiliation": "University of British Columbia, Vancouver, Canada;Bertrand AG, Munich, Germany;University of British Columbia, Vancouver, Canada;University of Munich (LMU), Germany",
                "InternalReferences": "0.1109/tvcg.2006.160;10.1109/infvis.2004.12;10.1109/vast.2011.6102443;10.1109/tvcg.2007.70582;10.1109/tvcg.2009.111;10.1109/tvcg.2009.116;10.1109/infvis.1999.801869;10.1109/tvcg.2008.141;10.1109/tvcg.2008.117;10.1109/infvis.2005.1532126;10.1109/tvcg.2012.213;10.1109/infvis.2003.1249030;10.1109/vast.2006.261426",
                "AuthorKeywords": "Network visualization, change management, traffic routing, traffic optimization, automotive, design study",
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 49,
                "DownloadsXplore": 750,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1435,
                "i": [
                    1435
                ]
            }
        },
        {
            "name": "Andreas Butz",
            "value": 56,
            "numPapers": 14,
            "cluster": "5",
            "visible": 1,
            "index": 1533,
            "x": -369.3601222661189,
            "y": -130.0888161202098,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "InfoVis",
                "Year": 2014,
                "Title": "Activity Sculptures: Exploring the Impact of Physical Visualizations on Running Activity",
                "DOI": "10.1109/tvcg.2014.2352953",
                "Link": "http://dx.doi.org/10.1109/TVCG.2014.2352953",
                "FirstPage": 2201,
                "LastPage": 2210,
                "PaperType": "J",
                "Abstract": "Data sculptures are a promising type of visualizations in which data is given a physical form. In the past, they have mostly been used for artistic, communicative or educational purposes, and designers of data sculptures argue that in such situations, physical visualizations can be more enriching than pixel-based visualizations. We present the design of Activity Sculptures: data sculptures of running activity. In a three-week field study we investigated the impact of the sculptures on 14 participants' running activity, the personal and social behaviors generated by the sculptures, as well as participants' experiences when receiving these individual physical tokens generated from the specific data of their runs. The physical rewards generated curiosity and personal experimentation but also social dynamics such as discussion on runs or envy/competition. We argue that such passive (or calm) visualizations can complement nudging and other mechanisms of persuasion with a more playful and reflective look at ones' activity.",
                "AuthorNamesDeduped": "Simon Stusak;Aurélien Tabard;Franziska Sauka;Rohit Ashok Khot;Andreas Butz",
                "AuthorNames": "Simon Stusak;Aurélien Tabard;Franziska Sauka;Rohit Ashok Khot;Andreas Butz",
                "AuthorAffiliation": "University of Munich (LMU);Université de Lyon & CNRS, Université Lyon 1, LIRIS, UMR5205, France;University of Munich (LMU);Exertion Games Lab, RMIT University;University of Munich (LMU)",
                "InternalReferences": "0.1109/tvcg.2007.70541;10.1109/infvis.2003.1249031;10.1109/tvcg.2013.134",
                "AuthorKeywords": "Physical Visualizations, Activity Sculptures, Physical Activity, Data Sculptures, Behavioral Change",
                "AminerCitationCount": 122,
                "CitationCountCrossRef": 67,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1428,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1178,
                "i": [
                    1178
                ]
            }
        },
        {
            "name": "Stefan Guthe",
            "value": 100,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1534,
            "x": 360.34582551503917,
            "y": -153.62579872496997,
            "vy": 0,
            "vx": 0,
            "r": 1.1151410477835348,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Interactive rendering of large volume data sets",
                "DOI": "10.1109/visual.2002.1183757",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183757",
                "FirstPage": 53,
                "LastPage": 60,
                "PaperType": "C",
                "Abstract": "We present a new algorithm for rendering very large volume data sets at interactive frame rates on standard PC hardware. The algorithm accepts scalar data sampled on a regular grid as input. The input data is converted into a compressed hierarchical wavelet representation in a preprocessing step. During rendering, the wavelet representation is decompressed on-the-fly and rendered using hardware texture mapping. The level of detail used for rendering is adapted to the local frequency spectrum of the data and its position relative to the viewer. Using a prototype implementation of the algorithm we were able to perform an interactive walkthrough of large data sets such as the visible human on a single off-the-shelf PC.",
                "AuthorNamesDeduped": "Stefan Guthe;Michael Wand 0001;Julius Gonser;Wolfgang Straßer",
                "AuthorNames": "S. Guthe;M. Wand;J. Gonser;W. Strasser",
                "AuthorAffiliation": "WSI/GRIS, University of Tübingen, Germany and WSI/GRIS, University of Tiibingen;WSI/GRIS, University of Tübingen, Germany and WSI/GRIS, University of Tiibingen;WSI/GRIS, University of Tübingen, Germany and WSI/GRIS, University of Tiibingen and Eberhard Karls Universitat Tubingen, Tubingen, Baden-Württemberg, DE;WSI/GRIS, Tubingen Univ., Germany",
                "InternalReferences": "0.1109/visual.2001.964531;10.1109/visual.1999.809908;10.1109/visual.1999.809889;10.1109/visual.1993.398845;10.1109/visual.2001.964519",
                "AuthorKeywords": "Compression Algorithms, Level of Detail Algorithms, Scientific Visualization, Volume Rendering, Wavelets",
                "AminerCitationCount": 338,
                "CitationCountCrossRef": 78,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 575,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2765,
                "i": [
                    2765
                ]
            }
        },
        {
            "name": "Wolfgang Straßer",
            "value": 160,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1535,
            "x": -161.98781726385246,
            "y": 356.8051948305864,
            "vy": 0,
            "vx": 0,
            "r": 1.1842256764536556,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Interactive rendering of large volume data sets",
                "DOI": "10.1109/visual.2002.1183757",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183757",
                "FirstPage": 53,
                "LastPage": 60,
                "PaperType": "C",
                "Abstract": "We present a new algorithm for rendering very large volume data sets at interactive frame rates on standard PC hardware. The algorithm accepts scalar data sampled on a regular grid as input. The input data is converted into a compressed hierarchical wavelet representation in a preprocessing step. During rendering, the wavelet representation is decompressed on-the-fly and rendered using hardware texture mapping. The level of detail used for rendering is adapted to the local frequency spectrum of the data and its position relative to the viewer. Using a prototype implementation of the algorithm we were able to perform an interactive walkthrough of large data sets such as the visible human on a single off-the-shelf PC.",
                "AuthorNamesDeduped": "Stefan Guthe;Michael Wand 0001;Julius Gonser;Wolfgang Straßer",
                "AuthorNames": "S. Guthe;M. Wand;J. Gonser;W. Strasser",
                "AuthorAffiliation": "WSI/GRIS, University of Tübingen, Germany and WSI/GRIS, University of Tiibingen;WSI/GRIS, University of Tübingen, Germany and WSI/GRIS, University of Tiibingen;WSI/GRIS, University of Tübingen, Germany and WSI/GRIS, University of Tiibingen and Eberhard Karls Universitat Tubingen, Tubingen, Baden-Württemberg, DE;WSI/GRIS, Tubingen Univ., Germany",
                "InternalReferences": "0.1109/visual.2001.964531;10.1109/visual.1999.809908;10.1109/visual.1999.809889;10.1109/visual.1993.398845;10.1109/visual.2001.964519",
                "AuthorKeywords": "Compression Algorithms, Level of Detail Algorithms, Scientific Visualization, Volume Rendering, Wavelets",
                "AminerCitationCount": 338,
                "CitationCountCrossRef": 78,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 575,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2765,
                "i": [
                    2765
                ]
            }
        },
        {
            "name": "David A. Ellsworth",
            "value": 85,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1536,
            "x": -121.61325217810732,
            "y": -372.6395267475849,
            "vy": 0,
            "vx": 0,
            "r": 1.0978698906160045,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Visualization of AMR Data With Multi-Level Dual-Mesh Interpolation",
                "DOI": "10.1109/tvcg.2011.252",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.252",
                "FirstPage": 1862,
                "LastPage": 1871,
                "PaperType": "J",
                "Abstract": "We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C&lt;sup&gt;0&lt;/sup&gt; continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates \"stitching cells\" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.",
                "AuthorNamesDeduped": "Patrick J. Moran;David A. Ellsworth",
                "AuthorNames": "Patrick Moran;David Ellsworth",
                "AuthorAffiliation": "NASA Ames Research Center, USA;Computer Sciences Corporation, NASA Ames, USA",
                "InternalReferences": "0.1109/visual.1991.175782;10.1109/tvcg.2009.149;10.1109/visual.2002.1183820",
                "AuthorKeywords": "Adaptive mesh refinement, AMR, Enzo, interpolation, ray casting, isosurfaces, dual meshes, stitching cells",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 546,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1674,
                "i": [
                    1674
                ]
            }
        },
        {
            "name": "Blake Nelson",
            "value": 27,
            "numPapers": 6,
            "cluster": "11",
            "visible": 1,
            "index": 1537,
            "x": 341.49928563635456,
            "y": 192.68689086146858,
            "vy": 0,
            "vx": 0,
            "r": 1.0310880829015543,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "ElVis: A System for the Accurate and Interactive Visualization of High-Order finite Element Solutions",
                "DOI": "10.1109/tvcg.2012.218",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.218",
                "FirstPage": 2325,
                "LastPage": 2334,
                "PaperType": "J",
                "Abstract": "This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations-high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular-will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries) which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of uncertainty for engineers using ElVis.",
                "AuthorNamesDeduped": "Blake Nelson;Eric Liu;Robert M. Kirby;Robert Haimes",
                "AuthorNames": "Blake Nelson;Eric Liu;Robert M. Kirby;Robert Haimes",
                "AuthorAffiliation": "School of Computing and the Scientific Computing and Imaging Institute, University of Utah, USA;Department of Aeronautics and Astronautics, MIT, USA;Department of Aeronautics and Astronautics, MIT, USA;School of Computing and the Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.2005.1532776;10.1109/visual.1991.175837;10.1109/visual.2004.91;10.1109/tvcg.2006.154;10.1109/tvcg.2011.206",
                "AuthorKeywords": "High-order finite elements, spectral/hp elements, discontinuous Galerkin, fluid flow simulation, cut surface extraction, contours, isosurfaces",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 667,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1456,
                "i": [
                    1456
                ]
            }
        },
        {
            "name": "Ralf Kähler",
            "value": 66,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1538,
            "x": -382.0932671148125,
            "y": 88.62694412834371,
            "vy": 0,
            "vx": 0,
            "r": 1.075993091537133,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Interactive exploration of large remote micro-CT scans",
                "DOI": "10.1109/visual.2004.51",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.51",
                "FirstPage": 345,
                "LastPage": 352,
                "PaperType": "C",
                "Abstract": "Datasets of tens of gigabytes are becoming common in computational and experimental science. This development is driven by advances in imaging technology, producing detectors with growing resolutions, as well as availability of cheap processing power and memory capacity in commodity-based computing clusters. We describe the design of a visualization system that allows scientists to interactively explore large remote data sets in an efficient and flexible way. The system is broadly applicable and currently used by medical scientists conducting an osteoporosis research project. Human vertebral bodies are scanned using a high resolution microCT scanner producing scans of roughly 8 GB size each. All participating research groups require access to the centrally stored data. Due to the rich internal bone structure, scientists need to interactively explore the full dataset at coarse levels, as well as visualize subvolumes of interest at the highest resolution. Our solution is based on HDF5 and GridFTP. When accessing data remotely, the HDF5 data processing pipeline is modified to support efficient retrieval of subvolumes. We reduce the overall latency and optimize throughput by executing high-level operations on the remote side. The GridFTP protocol is used to pass the HDF5 requests to a customized server. The approach takes full advantage of local graphics hardware for rendering. Interactive visualization is accomplished using a background thread to access the datasets stored in a multiresolution format. A hierarchical volume tenderer provides seamless integration of high resolution details with low resolution overviews.",
                "AuthorNamesDeduped": "Steffen Prohaska;Andrei Hutanu;Ralf Kähler;Hans-Christian Hege",
                "AuthorNames": "S. Prohaska;A. Hutanu;R. Kahler;H.-C. Hege",
                "AuthorAffiliation": "Scientific Visualization Dept., Zuse Institute Berlin (ZIB), Germany;Scientific Visualization Dept., Zuse Institute Berlin (ZIB), Germany;Scientific Visualization Dept., Zuse Institute Berlin (ZIB), Germany;Scientific Visualization Dept., Zuse Institute Berlin (ZIB)",
                "InternalReferences": "0.1109/visual.2000.885729;10.1109/visual.2002.1183758;10.1109/visual.2002.1183757;10.1109/visual.1999.809891;10.1109/visual.2002.1183764;10.1109/visual.1999.809908;10.1109/visual.1997.663888",
                "AuthorKeywords": "large data, out-of-core-methods, remote visualization, multiresolution visualization",
                "AminerCitationCount": 54,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 162,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2549,
                "i": [
                    2549
                ]
            }
        },
        {
            "name": "Oliver Hahn",
            "value": 0,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1539,
            "x": 221.94913964779593,
            "y": -323.55614568356316,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "A Novel Approach to Visualizing Dark Matter Simulations",
                "DOI": "10.1109/tvcg.2012.187",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.187",
                "FirstPage": 2078,
                "LastPage": 2087,
                "PaperType": "J",
                "Abstract": "In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.",
                "AuthorNamesDeduped": "Ralf Kähler;Oliver Hahn;Tom Abel",
                "AuthorNames": "Ralf Kaehler;Oliver Hahn;Tom Abel",
                "AuthorAffiliation": "KIPAC, SLAC National Accelerator Laboratory, USA;Stanford/SLAC, USA;KIPAC, SLAC National Accelerator Laboratory, USA",
                "InternalReferences": "0.1109/tvcg.2010.148;10.1109/visual.2003.1250390;10.1109/visual.2004.85;10.1109/tvcg.2006.154;10.1109/visual.2003.1250404;10.1109/tvcg.2011.216;10.1109/tvcg.2009.142;10.1109/visual.2001.964512;10.1109/visual.2003.1250404",
                "AuthorKeywords": "Astrophysics, dark matter, n-body simulations, tetrahedral grids",
                "AminerCitationCount": 39,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 882,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1459,
                "i": [
                    1459
                ]
            }
        },
        {
            "name": "Tom Abel",
            "value": 28,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1540,
            "x": 54.91847574566317,
            "y": 388.63088017008243,
            "vy": 0,
            "vx": 0,
            "r": 1.0322394933793897,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Rendering the first star in the Universe - A case study",
                "DOI": "10.1109/visual.2002.1183824",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183824",
                "FirstPage": 537,
                "LastPage": 540,
                "PaperType": "C",
                "Abstract": "For quantitative examination of phenomena that simultaneously occur on very different spatial and temporal scales, adaptive hierarchical schemes are required. A special numerical multilevel technique, associated with a particular hierarchical data structure, is so-called adaptive mesh refinement (AMR). It allows one to bridge a wide range of spatial and temporal resolutions and therefore gains increasing popularity. We describe the interplay of several visualization and VR software packages for rendering time dependent AMR simulations of the evolution of the first star in the universe. The work was done in the framework of a television production for Discovery Channel television, \"The Unfolding Universe.\". Parts of the data were taken from one of the most complex AMR simulation ever carried out: It contained up to 27 levels of resolution, requiring modifications to the texture based AMR volume rendering algorithm that was used to depict the density distribution of the gaseous interstellar matter. A voice and gesture controlled CAVE application was utilized to define camera paths following the interesting features deep inside the computational domains. Background images created from cosmological computational data were combined with the final renderings.",
                "AuthorNamesDeduped": "Ralf Kähler;Donna J. Cox;Robert Patterson;Stuart Levy;Hans-Christian Hege;Tom Abel",
                "AuthorNames": "R. Kahler;D. Cox;R. Patterson;S. Levy;H.-C. Hege;T. Abel",
                "AuthorAffiliation": "Zuse Institute Berlin, Berlin, Germany and MPI für Gravitationsphysik AEI, Golm, Germany;National Center for Supercomputing Applications, Urbana-Champaign, USA;National Center for Supercomputing Applications, Urbana-Champaign, USA;National Center for Supercomputing Applications, Urbana-Champaign, USA;Zuse Institute Berlin, Berlin, Germany;Department of Astronomy, Astrophysics Penn State University, PA, USA",
                "InternalReferences": "0.1109/visual.2002.1183820",
                "AuthorKeywords": "3D texture based volume rendering, adaptive mesh refinement data, CAVE applications, data visualization",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 141,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2803,
                "i": [
                    2803
                ]
            }
        },
        {
            "name": "Stefan Lindholm",
            "value": 24,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1541,
            "x": -303.10989494236355,
            "y": -249.54837524622224,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Automatic Tuning of Spatially Varying Transfer Functions for Blood Vessel Visualization",
                "DOI": "10.1109/tvcg.2012.203",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.203",
                "FirstPage": 2345,
                "LastPage": 2354,
                "PaperType": "J",
                "Abstract": "Computed Tomography Angiography (CTA) is commonly used in clinical routine for diagnosing vascular diseases. The procedure involves the injection of a contrast agent into the blood stream to increase the contrast between the blood vessels and the surrounding tissue in the image data. CTA is often visualized with Direct Volume Rendering (DVR) where the enhanced image contrast is important for the construction of Transfer Functions (TFs). For increased efficiency, clinical routine heavily relies on preset TFs to simplify the creation of such visualizations for a physician. In practice, however, TF presets often do not yield optimal images due to variations in mixture concentration of contrast agent in the blood stream. In this paper we propose an automatic, optimization-based method that shifts TF presets to account for general deviations and local variations of the intensity of contrast enhanced blood vessels. Some of the advantages of this method are the following. It computationally automates large parts of a process that is currently performed manually. It performs the TF shift locally and can thus optimize larger portions of the image than is possible with manual interaction. The method is based on a well known vesselness descriptor in the definition of the optimization criterion. The performance of the method is illustrated by clinically relevant CT angiography datasets displaying both improved structural overviews of vessel trees and improved adaption to local variations of contrast concentration.",
                "AuthorNamesDeduped": "Gunnar Läthén;Stefan Lindholm;Reiner Lenz;Anders Persson;Magnus Borga",
                "AuthorNames": "Gunnar Läthén;Stefan Lindholm;Reiner Lenz;Anders Persson;Magnus Borga",
                "AuthorAffiliation": "Center for Medical Image Science and Visualization (CMIV), Department of Science and Technology, Linköping University, Sweden;Center for Medical Image Science and Visualization (CMIV), Department of Science and Technology, Linköping University, Sweden;Center for Medical Image Science and Visualization (CMIV), Department of Science and Technology, Linköping University, Sweden;Center for Medical Image Science and Visualization (CMIV), Department of Medical and Health Sciences, Linköping University, Sweden;Center for Medical Image Science and Visualization (CMIV), Department of Biomedical Engineering, Linköping University, Sweden",
                "InternalReferences": "0.1109/visual.2003.1250414;10.1109/tvcg.2009.120;10.1109/visual.2001.964516;10.1109/visual.1996.568113;10.1109/tvcg.2008.162;10.1109/tvcg.2010.195;10.1109/tvcg.2008.123",
                "AuthorKeywords": "Direct volume rendering, transfer functions, vessel visualization",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 505,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1462,
                "i": [
                    1462
                ]
            }
        },
        {
            "name": "Nafees U. Ahmed",
            "value": 24,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1542,
            "x": 392.19845998818573,
            "y": -20.7453123113504,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "iView: A Feature Clustering Framework for Suggesting Informative Views in Volume Visualization",
                "DOI": "10.1109/tvcg.2011.218",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.218",
                "FirstPage": 1959,
                "LastPage": 1968,
                "PaperType": "J",
                "Abstract": "The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are being missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint suggestion pipeline that is based on feature-clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect important salient composite features. Next, we compute the maximum possible exposure of these composite feature for different viewpoints and calculate a 2D entropy map parameterized in longitude and latitude to point out promising view orientations. Superimposed onto an interactive track-ball interface, users can then directly use this entropy map to quickly navigate to potentially interesting viewpoints where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusions. To give full exploration freedom to the user, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so far unseen view directions. Alternatively, our system can also use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated could then be saved into a list for further inspection or into a gallery for a summary presentation.",
                "AuthorNamesDeduped": "Ziyi Zheng;Nafees U. Ahmed;Klaus Mueller 0001",
                "AuthorNames": "Ziyi Zheng;Nafees Ahmed;Klaus Mueller",
                "AuthorAffiliation": "Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA;Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA;Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA",
                "InternalReferences": "0.1109/tvcg.2009.156;10.1109/tvcg.2007.70576;10.1109/tvcg.2008.162;10.1109/tvcg.2008.159;10.1109/tvcg.2010.214;10.1109/tvcg.2009.172;10.1109/visual.2005.1532833;10.1109/visual.2005.1532818;10.1109/tvcg.2006.124;10.1109/tvcg.2009.185;10.1109/tvcg.2009.189;10.1109/visual.2003.1250414;10.1109/visual.2005.1532834",
                "AuthorKeywords": "Direct volume rendering, k-means, entropy, view suggestion, set-cover problem, ant colony optimization",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 807,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1670,
                "i": [
                    1670
                ]
            }
        },
        {
            "name": "Ziyi Zheng",
            "value": 26,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 1543,
            "x": -275.2708667521396,
            "y": 280.314020194363,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "iView: A Feature Clustering Framework for Suggesting Informative Views in Volume Visualization",
                "DOI": "10.1109/tvcg.2011.218",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.218",
                "FirstPage": 1959,
                "LastPage": 1968,
                "PaperType": "J",
                "Abstract": "The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are being missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint suggestion pipeline that is based on feature-clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect important salient composite features. Next, we compute the maximum possible exposure of these composite feature for different viewpoints and calculate a 2D entropy map parameterized in longitude and latitude to point out promising view orientations. Superimposed onto an interactive track-ball interface, users can then directly use this entropy map to quickly navigate to potentially interesting viewpoints where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusions. To give full exploration freedom to the user, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so far unseen view directions. Alternatively, our system can also use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated could then be saved into a list for further inspection or into a gallery for a summary presentation.",
                "AuthorNamesDeduped": "Ziyi Zheng;Nafees U. Ahmed;Klaus Mueller 0001",
                "AuthorNames": "Ziyi Zheng;Nafees Ahmed;Klaus Mueller",
                "AuthorAffiliation": "Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA;Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA;Visual Analytics and Imaging (VAI) Laboratory, Center for Visual Computing, Computer Science Department, Stony Brook University, NY, USA",
                "InternalReferences": "0.1109/tvcg.2009.156;10.1109/tvcg.2007.70576;10.1109/tvcg.2008.162;10.1109/tvcg.2008.159;10.1109/tvcg.2010.214;10.1109/tvcg.2009.172;10.1109/visual.2005.1532833;10.1109/visual.2005.1532818;10.1109/tvcg.2006.124;10.1109/tvcg.2009.185;10.1109/tvcg.2009.189;10.1109/visual.2003.1250414;10.1109/visual.2005.1532834",
                "AuthorKeywords": "Direct volume rendering, k-means, entropy, view suggestion, set-cover problem, ant colony optimization",
                "AminerCitationCount": 31,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 807,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1670,
                "i": [
                    1670
                ]
            }
        },
        {
            "name": "Lujin Wang",
            "value": 96,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1544,
            "x": 13.631183726876486,
            "y": -392.7648034513838,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Color Design for Illustrative Visualization",
                "DOI": "10.1109/tvcg.2008.118",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.118",
                "FirstPage": 1739,
                "LastPage": 1754,
                "PaperType": "J",
                "Abstract": "Professional designers and artists are quite cognizant of the rules that guide the design of effective color palettes, from both aesthetic and attention-guiding points of view. In the field of visualization, however, the use of systematic rules embracing these aspects has received less attention. The situation is further complicated by the fact that visualization often uses semi-transparencies to reveal occluded objects, in which case the resulting color mixing effects add additional constraints to the choice of the color palette. Color design forms a crucial part in visual aesthetics. Thus, the consideration of these issues can be of great value in the emerging field of illustrative visualization. We describe a knowledge-based system that captures established color design rules into a comprehensive interactive framework, aimed to aid users in the selection of colors for scene objects and incorporating individual preferences, importance functions, and overall scene composition. Our framework also offers new knowledge and solutions for the mixing, ordering and choice of colors in the rendering of semi-transparent layers and surfaces. All design rules are evaluated via user studies, for which we extend the method of conjoint analysis to task-based testing scenarios. Our framework's use of principles rooted in color design with application for the illustration of features in pre-classified data distinguishes it from existing systems which target the exploration of continuous-range density data via perceptual color maps.",
                "AuthorNamesDeduped": "Lujin Wang;Joachim Giesen;Kevin T. McDonnell;Peter Zolliker;Klaus Mueller 0001",
                "AuthorNames": "Lujin Wang;Joachim Giesen;Kevin T. McDonnell;Peter Zolliker;Klaus Mueller",
                "AuthorAffiliation": "Center for Visual Computing, Stony Brook University, USA;Friedrich-Schiller-Universität Jena;Dowling College;EMPA Dübendorf;Center for Visual Computing, Stony Brook University, USA",
                "InternalReferences": "0.1109/visual.1993.398874;10.1109/visual.1996.568118;10.1109/tvcg.2007.70542;10.1109/tvcg.2006.174;10.1109/visual.2001.964510;10.1109/visual.2000.885697;10.1109/visual.1995.480803",
                "AuthorKeywords": "Color design, volume rendering, transparency, user study evaluation, conjoint analysis, illustrative visualization",
                "AminerCitationCount": 172,
                "CitationCountCrossRef": 88,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 3090,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2041,
                "i": [
                    2041
                ]
            }
        },
        {
            "name": "Mark A. Livingston",
            "value": 33,
            "numPapers": 28,
            "cluster": "11",
            "visible": 1,
            "index": 1545,
            "x": 255.34022301682649,
            "y": 298.91699602016155,
            "vy": 0,
            "vx": 0,
            "r": 1.0379965457685665,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Evaluation of Trend Localization with Multi-Variate Visualizations",
                "DOI": "10.1109/tvcg.2011.194",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.194",
                "FirstPage": 2053,
                "LastPage": 2062,
                "PaperType": "J",
                "Abstract": "Multi-valued data sets are increasingly common, with the number of dimensions growing. A number of multi-variate visualization techniques have been presented to display such data. However, evaluating the utility of such techniques for general data sets remains difficult. Thus most techniques are studied on only one data set. Another criticism that could be levied against previous evaluations of multi-variate visualizations is that the task doesn't require the presence of multiple variables. At the same time, the taxonomy of tasks that users may perform visually is extensive. We designed a task, trend localization, that required comparison of multiple data values in a multi-variate visualization. We then conducted a user study with this task, evaluating five multivariate visualization techniques from the literature (Brush Strokes, Data-Driven Spots, Oriented Slivers, Color Blending, Dimensional Stacking) and juxtaposed grayscale maps. We report the results and discuss the implications for both the techniques and the task.",
                "AuthorNamesDeduped": "Mark A. Livingston;Jonathan W. Decker",
                "AuthorNames": "Mark Livingston;Jonathan Decker",
                "AuthorAffiliation": "Naval Research Laboratory, USA;Naval Research Laboratory, USA",
                "InternalReferences": "0.1109/tvcg.2009.126;10.1109/visual.1998.745292;10.1109/visual.1990.146387;10.1109/visual.1990.146386;10.1109/tvcg.2007.70623;10.1109/visual.1991.175795;10.1109/visual.1999.809905;10.1109/visual.2003.1250362;10.1109/visual.1998.745294;10.1109/visual.2003.1250362",
                "AuthorKeywords": "User study, multi-variate visualization, visual task design, visual analytics",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 617,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1676,
                "i": [
                    1676
                ]
            }
        },
        {
            "name": "Jonathan W. Decker",
            "value": 31,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1546,
            "x": -390.32167895820606,
            "y": -47.947752139668594,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Evaluation of Trend Localization with Multi-Variate Visualizations",
                "DOI": "10.1109/tvcg.2011.194",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.194",
                "FirstPage": 2053,
                "LastPage": 2062,
                "PaperType": "J",
                "Abstract": "Multi-valued data sets are increasingly common, with the number of dimensions growing. A number of multi-variate visualization techniques have been presented to display such data. However, evaluating the utility of such techniques for general data sets remains difficult. Thus most techniques are studied on only one data set. Another criticism that could be levied against previous evaluations of multi-variate visualizations is that the task doesn't require the presence of multiple variables. At the same time, the taxonomy of tasks that users may perform visually is extensive. We designed a task, trend localization, that required comparison of multiple data values in a multi-variate visualization. We then conducted a user study with this task, evaluating five multivariate visualization techniques from the literature (Brush Strokes, Data-Driven Spots, Oriented Slivers, Color Blending, Dimensional Stacking) and juxtaposed grayscale maps. We report the results and discuss the implications for both the techniques and the task.",
                "AuthorNamesDeduped": "Mark A. Livingston;Jonathan W. Decker",
                "AuthorNames": "Mark Livingston;Jonathan Decker",
                "AuthorAffiliation": "Naval Research Laboratory, USA;Naval Research Laboratory, USA",
                "InternalReferences": "0.1109/tvcg.2009.126;10.1109/visual.1998.745292;10.1109/visual.1990.146387;10.1109/visual.1990.146386;10.1109/tvcg.2007.70623;10.1109/visual.1991.175795;10.1109/visual.1999.809905;10.1109/visual.2003.1250362;10.1109/visual.1998.745294;10.1109/visual.2003.1250362",
                "AuthorKeywords": "User study, multi-variate visualization, visual task design, visual analytics",
                "AminerCitationCount": 28,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 617,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1676,
                "i": [
                    1676
                ]
            }
        },
        {
            "name": "Christopher G. Healey",
            "value": 214,
            "numPapers": 16,
            "cluster": "11",
            "visible": 1,
            "index": 1547,
            "x": 320.30280683551933,
            "y": -228.37712655449536,
            "vy": 0,
            "vx": 0,
            "r": 1.2464018422567644,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Choosing effective colours for data visualization",
                "DOI": "10.1109/visual.1996.568118",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568118",
                "FirstPage": 263,
                "LastPage": 270,
                "PaperType": "C",
                "Abstract": "We describe a technique for choosing multiple colours for use during data visualization. Our goal is a systematic method for maximizing the total number of colours available for use, while still allowing an observer to rapidly and accurately search a display for any one of the given colours. Previous research suggests that we need to consider three separate effects during colour selection: colour distance, linear separation, and colour category. We describe a simple method for measuring and controlling all of these effects. Our method was tested by performing a set of target identification studies; we analysed the ability of thirty eight observers to find a colour target in displays that contained differently coloured background elements. Results showed our method can be used to select a group of colours that will provide good differentiation between data elements during data visualization.",
                "AuthorNamesDeduped": "Christopher G. Healey",
                "AuthorNames": "C.G. Healey",
                "AuthorAffiliation": "Department of Computer Science, University of British Columbia, Vancouver, BC, Canada",
                "InternalReferences": "0.1109/visual.1995.480803;10.1109/visual.1993.398874",
                "AuthorKeywords": null,
                "AminerCitationCount": 417,
                "CitationCountCrossRef": 84,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 2067,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3329,
                "i": [
                    3329
                ]
            }
        },
        {
            "name": "Feng Dong 0005",
            "value": 4,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1548,
            "x": -81.94125144225276,
            "y": 384.8839192692746,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Structure-Aware Lighting Design for Volume Visualization",
                "DOI": "10.1109/tvcg.2012.267",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.267",
                "FirstPage": 2372,
                "LastPage": 2381,
                "PaperType": "J",
                "Abstract": "Lighting design is a complex, but fundamental, problem in many fields. In volume visualization, direct volume rendering generates an informative image without external lighting, as each voxel itself emits radiance. However, external lighting further improves the shape and detail perception of features, and it also determines the effectiveness of the communication of feature information. The human visual system is highly effective in extracting structural information from images, and to assist it further, this paper presents an approach to structure-aware automatic lighting design by measuring the structural changes between the images with and without external lighting. Given a transfer function and a viewpoint, the optimal lighting parameters are those that provide the greatest enhancement to structural information - the shape and detail information of features are conveyed most clearly by the optimal lighting parameters. Besides lighting goodness, the proposed metric can also be used to evaluate lighting similarity and stability between two sets of lighting parameters. Lighting similarity can be used to optimize the selection of multiple light sources so that different light sources can reveal distinct structural information. Our experiments with several volume data sets demonstrate the effectiveness of the structure-aware lighting design approach. It is well suited to use by novices as it requires little technical understanding of the rendering parameters associated with direct volume rendering.",
                "AuthorNamesDeduped": "Yubo Tao;Hai Lin 0003;Feng Dong 0005;Chao Wang 0063;Gordon Clapworthy;Hujun Bao",
                "AuthorNames": "Yubo Tao;Hai Lin;Feng Dong;Chao Wang;Gordon Clapworthy;Hujun Bao",
                "AuthorAffiliation": "Zhejiang University, State Key Lab of CAD&CG;Zhejiang University, State Key Lab of CAD&CG;Visualisation, University of Bedfordshire, UK;Visualisation, University of Bedfordshire, UK;Visualisation, University of Bedfordshire, UK;Zhejiang University, State Key Lab of CAD&CG",
                "InternalReferences": "0.1109/tvcg.2006.137;10.1109/tvcg.2011.218;10.1109/visual.2004.62;10.1109/visual.2005.1532834;10.1109/visual.2005.1532833;10.1109/visual.2003.1250395;10.1109/visual.2002.1183785",
                "AuthorKeywords": "Automatic lighting design, structural dissimilarity, lighting similarity, lighting stability, volume rendering",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1471,
                "i": [
                    1471
                ]
            }
        },
        {
            "name": "Gordon Clapworthy",
            "value": 4,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1549,
            "x": -199.62885081922533,
            "y": -339.2614359466686,
            "vy": 0,
            "vx": 0,
            "r": 1.0046056419113414,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Structure-Aware Lighting Design for Volume Visualization",
                "DOI": "10.1109/tvcg.2012.267",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.267",
                "FirstPage": 2372,
                "LastPage": 2381,
                "PaperType": "J",
                "Abstract": "Lighting design is a complex, but fundamental, problem in many fields. In volume visualization, direct volume rendering generates an informative image without external lighting, as each voxel itself emits radiance. However, external lighting further improves the shape and detail perception of features, and it also determines the effectiveness of the communication of feature information. The human visual system is highly effective in extracting structural information from images, and to assist it further, this paper presents an approach to structure-aware automatic lighting design by measuring the structural changes between the images with and without external lighting. Given a transfer function and a viewpoint, the optimal lighting parameters are those that provide the greatest enhancement to structural information - the shape and detail information of features are conveyed most clearly by the optimal lighting parameters. Besides lighting goodness, the proposed metric can also be used to evaluate lighting similarity and stability between two sets of lighting parameters. Lighting similarity can be used to optimize the selection of multiple light sources so that different light sources can reveal distinct structural information. Our experiments with several volume data sets demonstrate the effectiveness of the structure-aware lighting design approach. It is well suited to use by novices as it requires little technical understanding of the rendering parameters associated with direct volume rendering.",
                "AuthorNamesDeduped": "Yubo Tao;Hai Lin 0003;Feng Dong 0005;Chao Wang 0063;Gordon Clapworthy;Hujun Bao",
                "AuthorNames": "Yubo Tao;Hai Lin;Feng Dong;Chao Wang;Gordon Clapworthy;Hujun Bao",
                "AuthorAffiliation": "Zhejiang University, State Key Lab of CAD&CG;Zhejiang University, State Key Lab of CAD&CG;Visualisation, University of Bedfordshire, UK;Visualisation, University of Bedfordshire, UK;Visualisation, University of Bedfordshire, UK;Zhejiang University, State Key Lab of CAD&CG",
                "InternalReferences": "0.1109/tvcg.2006.137;10.1109/tvcg.2011.218;10.1109/visual.2004.62;10.1109/visual.2005.1532834;10.1109/visual.2005.1532833;10.1109/visual.2003.1250395;10.1109/visual.2002.1183785",
                "AuthorKeywords": "Automatic lighting design, structural dissimilarity, lighting similarity, lighting stability, volume rendering",
                "AminerCitationCount": 14,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 780,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1471,
                "i": [
                    1471
                ]
            }
        },
        {
            "name": "Han Krishnan",
            "value": 40,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 1550,
            "x": 376.4893375511443,
            "y": 115.35067711244935,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Generation of Accurate Integral Surfaces in Time-Dependent Vector fields",
                "DOI": "10.1109/tvcg.2008.133",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.133",
                "FirstPage": 1404,
                "LastPage": 1411,
                "PaperType": "J",
                "Abstract": "We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces.",
                "AuthorNamesDeduped": "Christoph Garth;Han Krishnan;Xavier Tricoche;Tom Tricoche;Kenneth I. Joy",
                "AuthorNames": "Christoph Garth;Han Krishnan;Xavier Tricoche;Tom Tricoche;Kenneth I. Joy",
                "AuthorAffiliation": "Institute of Data Analysis and Visualization, University of California, Davis, USA;Institute of Data Analysis and Visualization, University of California, Davis, USA;Computer Science Dept., Purdue University, USA;Geometric Algorithms Group, University of Kaiserslautern;Institute of Data Analysis and Visualization, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.1993.398875;10.1109/visual.2001.964506;10.1109/visual.2004.28;10.1109/visual.1992.235211;10.1109/visual.1992.235226",
                "AuthorKeywords": "3D vector field visualization, flow visualization, time-varying and time-series visualization, surface extraction",
                "AminerCitationCount": 103,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 360,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2049,
                "i": [
                    2049
                ]
            }
        },
        {
            "name": "Tom Tricoche",
            "value": 40,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 1551,
            "x": -355.64441482744627,
            "y": 169.31346727901868,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Generation of Accurate Integral Surfaces in Time-Dependent Vector fields",
                "DOI": "10.1109/tvcg.2008.133",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.133",
                "FirstPage": 1404,
                "LastPage": 1411,
                "PaperType": "J",
                "Abstract": "We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces.",
                "AuthorNamesDeduped": "Christoph Garth;Han Krishnan;Xavier Tricoche;Tom Tricoche;Kenneth I. Joy",
                "AuthorNames": "Christoph Garth;Han Krishnan;Xavier Tricoche;Tom Tricoche;Kenneth I. Joy",
                "AuthorAffiliation": "Institute of Data Analysis and Visualization, University of California, Davis, USA;Institute of Data Analysis and Visualization, University of California, Davis, USA;Computer Science Dept., Purdue University, USA;Geometric Algorithms Group, University of Kaiserslautern;Institute of Data Analysis and Visualization, University of California, Davis, USA",
                "InternalReferences": "0.1109/visual.1993.398875;10.1109/visual.2001.964506;10.1109/visual.2004.28;10.1109/visual.1992.235211;10.1109/visual.1992.235226",
                "AuthorKeywords": "3D vector field visualization, flow visualization, time-varying and time-series visualization, surface extraction",
                "AminerCitationCount": 103,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 360,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2049,
                "i": [
                    2049
                ]
            }
        },
        {
            "name": "Jeff P. Hultquist",
            "value": 114,
            "numPapers": 6,
            "cluster": "11",
            "visible": 1,
            "index": 1552,
            "x": 147.91916601807623,
            "y": -365.1984670347301,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "Vis",
                "Year": 1992,
                "Title": "Constructing stream surfaces in steady 3D vector fields",
                "DOI": "10.1109/visual.1992.235211",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1992.235211",
                "FirstPage": 171,
                "LastPage": 178,
                "PaperType": "C",
                "Abstract": "Maintenance of a front of particles, an efficient method of generating a set of sample points over a two-dimensional stream surface, is described. The particles are repeatedly advanced a short distance through the flow field. New polygons are appended to the downstream edge of the surface. The spacing of the particles is adjusted to maintain an adequate sampling across the width of the growing surface. Curve and ribbon methods of vector field visualization are reviewed.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Jeff P. Hultquist",
                "AuthorNames": "J.P.M. Hultquist",
                "AuthorAffiliation": "Numerical Aerodynamic Simulation Systems Division, NASA Ames Research Center, CA, USA",
                "InternalReferences": "0.1109/visual.1990.146359;10.1109/visual.1991.175837;10.1109/visual.1990.146373;10.1109/visual.1992.235202;10.1109/visual.1991.175789",
                "AuthorKeywords": null,
                "AminerCitationCount": 361,
                "CitationCountCrossRef": 96,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 277,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3584,
                "i": [
                    3584
                ]
            }
        },
        {
            "name": "Markus Rütten",
            "value": 35,
            "numPapers": 29,
            "cluster": "11",
            "visible": 1,
            "index": 1553,
            "x": 137.6613444381812,
            "y": 369.3228320148545,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Visualization of intricate flow structures for vortex breakdown analysis",
                "DOI": "10.1109/visual.2004.113",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.113",
                "FirstPage": 187,
                "LastPage": 194,
                "PaperType": "C",
                "Abstract": "Vortex breakdowns and flow recirculation are essential phenomena in aeronautics where they appear as a limiting factor in the design of modern aircrafts. Because of the inherent intricacy of these features, standard flow visualization techniques typically yield cluttered depictions. The paper addresses the challenges raised by the visual exploration and validation of two CFD simulations involving vortex breakdown. To permit accurate and insightful visualization we propose a new approach that unfolds the geometry of the breakdown region by letting a plane travel through the structure along a curve. We track the continuous evolution of the associated projected vector field using the theoretical framework of parametric topology. To improve the understanding of the spatial relationship between the resulting curves and lines we use direct volume rendering and multidimensional transfer functions for the display of flow-derived scalar quantities. This enriches the visualization and provides an intuitive context for the extracted topological information. Our results offer clear, synthetic depictions that permit new insight into the structural properties of vortex breakdowns.",
                "AuthorNamesDeduped": "Xavier Tricoche;Christoph Garth;Gordon L. Kindlmann;Eduard Deines;Gerik Scheuermann;Markus Rütten;Charles D. Hansen",
                "AuthorNames": "X. Tricoche;C. Garth;G. Kindlmann;E. Deines;G. Scheuermann;M. Ruetten;C. Hansen",
                "AuthorAffiliation": "University of Utah, USA;University of Kaiserslautern, Germany;University of Utah, USA;University of Kaiserslautern, Germany;University of Leipzig, Germany;;University of Utah, USA",
                "InternalReferences": "0.1109/visual.2001.964519;10.1109/visual.1998.745296;10.1109/visual.1991.175773;10.1109/visual.1997.663910;10.1109/visual.1999.809896;10.1109/visual.2003.1250414;10.1109/visual.2003.1250376;10.1109/visual.2001.964489;10.1109/visual.1993.398875;10.1109/visual.1991.175789;10.1109/visual.1994.346314",
                "AuthorKeywords": "flow visualization, vortex analysis, parametric topology, cutting planes, volume rendering",
                "AminerCitationCount": 70,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 324,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2528,
                "i": [
                    2528
                ]
            }
        },
        {
            "name": "Eser Kandogan",
            "value": 34,
            "numPapers": 10,
            "cluster": "3",
            "visible": 1,
            "index": 1554,
            "x": -351.0941260590118,
            "y": -179.3959716572774,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Just-in-time annotation of clusters, outliers, and trends in point-based data visualizations",
                "DOI": "10.1109/vast.2012.6400487",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400487",
                "FirstPage": 73,
                "LastPage": 82,
                "PaperType": "C",
                "Abstract": "We introduce the concept of just-in-time descriptive analytics as a novel application of computational and statistical techniques performed at interaction-time to help users easily understand the structure of data as seen in visualizations. Fundamental to just-intime descriptive analytics is (a) identifying visual features, such as clusters, outliers, and trends, user might observe in visualizations automatically, (b) determining the semantics of such features by performing statistical analysis as the user is interacting, and (c) enriching visualizations with annotations that not only describe semantics of visual features but also facilitate interaction to support high-level understanding of data. In this paper, we demonstrate just-in-time descriptive analytics applied to a point-based multi-dimensional visualization technique to identify and describe clusters, outliers, and trends. We argue that it provides a novel user experience of computational techniques working alongside of users allowing them to build faster qualitative mental models of data by demonstrating its application on a few use-cases. Techniques used to facilitate just-in-time descriptive analytics are described in detail along with their runtime performance characteristics. We believe this is just a starting point and much remains to be researched, as we discuss open issues and opportunities in improving accessibility and collaboration.",
                "AuthorNamesDeduped": "Eser Kandogan",
                "AuthorNames": "Eser Kandogan",
                "AuthorAffiliation": "IBM Center for Advanced Visualization, IBM Research",
                "InternalReferences": "0.1109/infvis.2003.1249015;10.1109/infvis.2005.1532142;10.1109/infvis.2004.3;10.1109/tvcg.2011.220;10.1109/infvis.2004.15;10.1109/infvis.1998.729559;10.1109/vast.2006.261423;10.1109/tvcg.2009.153;10.1109/vast.2010.5652885;10.1109/vast.2009.5332628;10.1109/tvcg.2011.229",
                "AuthorKeywords": "Just-in-time descriptive analytics, feature identification and characterization, point-based visualizations",
                "AminerCitationCount": 83,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 41,
                "DownloadsXplore": 1048,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1493,
                "i": [
                    1493
                ]
            }
        },
        {
            "name": "Matthew Chalmers",
            "value": 99,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 1555,
            "x": 380.1883469254679,
            "y": -104.91339696187552,
            "vy": 0,
            "vx": 0,
            "r": 1.1139896373056994,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "A linear iteration time layout algorithm for visualising high-dimensional data",
                "DOI": "10.1109/visual.1996.567787",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.567787",
                "FirstPage": 127,
                "LastPage": 131,
                "PaperType": "C",
                "Abstract": "A technique is presented for the layout of high dimensional data in a low dimensional space. This technique builds upon the force based methods that have been used previously to make visualisations of various types of data such as bibliographies and sets of software modules. The canonical force based model, related to solutions of the N body problem, has a computational complexity of O(N/sup 2/) per iteration. The paper presents a stochastically based algorithm of linear complexity per iteration which produces good layouts, has low overhead, and is easy to implement. Its performance and accuracy are discussed, in particular with regard to the data to which it is applied. Experience with application to bibliographic and time series data, which may have a dimensionality in the tens of thousands, is described.",
                "AuthorNamesDeduped": "Matthew Chalmers",
                "AuthorNames": "M. Chalmers",
                "AuthorAffiliation": "UBILAB, Union Bank of Switzerland, Switzerland",
                "InternalReferences": "0.1109/infvis.1995.528686;10.1109/visual.1995.480814",
                "AuthorKeywords": "layout algorithms, visualization, high-dimensional data, spring models, stochastic algorithms, force-directed placement",
                "AminerCitationCount": 304,
                "CitationCountCrossRef": 69,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 356,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3332,
                "i": [
                    3332
                ]
            }
        },
        {
            "name": "Hamed Bouzari",
            "value": 9,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1556,
            "x": -209.53839501956531,
            "y": 334.2808116129681,
            "vy": 0,
            "vx": 0,
            "r": 1.0103626943005182,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "Smart super views---A knowledge-assisted interface for medical visualization",
                "DOI": "10.1109/vast.2012.6400555",
                "Link": "http://dx.doi.org/10.1109/VAST.2012.6400555",
                "FirstPage": 163,
                "LastPage": 172,
                "PaperType": "C",
                "Abstract": "Due to the ever growing volume of acquired data and information, users have to be constantly aware of the methods for their exploration and for interaction. Of these, not each might be applicable to the data at hand or might reveal the desired result. Owing to this, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization, which reduces the necessary effort to use new visualization methods, by providing only the most relevant ones in a smart way. Consequently, we are able to expand such a system with innovations without the users to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.",
                "AuthorNamesDeduped": "Gabriel Mistelbauer;Hamed Bouzari;Rüdiger Schernthaner;Ivan Baclija;Arnold Köchl;Stefan Bruckner;Milos Srámek;M. Eduard Gröller",
                "AuthorNames": "Gabriel Mistelbauer;Arnold Köchl;Rudiger Schernthaner;Ivan Baclija;Rüdiger Schernthaner;Stefan Bruckner;Milos Sramek;Meister Eduard Gröller",
                "AuthorAffiliation": "Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Medical University of Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Kaiser-Franz-Josef Hospital Vienna, Austria;Vienna University of Technoloqy, Austria;Austrian Academy of Sciences;Vienna University of Technoloqy, Austria",
                "InternalReferences": "0.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/tvcg.2007.70576;10.1109/tvcg.2007.70591;10.1109/visual.2002.1183754;10.1109/visual.2005.1532856;10.1109/tvcg.2011.183;10.1109/visual.2005.1532818;10.1109/tvcg.2006.148;10.1109/tvcg.2010.199",
                "AuthorKeywords": "Visualization, Fuzzy Logic, Interaction",
                "AminerCitationCount": 12,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 401,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1515,
                "i": [
                    1515
                ]
            }
        },
        {
            "name": "Mike Sips",
            "value": 101,
            "numPapers": 19,
            "cluster": "3",
            "visible": 1,
            "index": 1557,
            "x": -71.31925165461614,
            "y": -388.1540471815611,
            "vy": 0,
            "vx": 0,
            "r": 1.1162924582613702,
            "node": {
                "Conference": "VAST",
                "Year": 2012,
                "Title": "A Visual Analytics Approach to Multiscale Exploration of Environmental Time Series",
                "DOI": "10.1109/tvcg.2012.191",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.191",
                "FirstPage": 2899,
                "LastPage": 2907,
                "PaperType": "J",
                "Abstract": "We present a Visual Analytics approach that addresses the detection of interesting patterns in numerical time series, specifically from environmental sciences. Crucial for the detection of interesting temporal patterns are the time scale and the starting points one is looking at. Our approach makes no assumption about time scale and starting position of temporal patterns and consists of three main steps: an algorithm to compute statistical values for all possible time scales and starting positions of intervals, visual identification of potentially interesting patterns in a matrix visualization, and interactive exploration of detected patterns. We demonstrate the utility of this approach in two scientific scenarios and explain how it allowed scientists to gain new insight into the dynamics of environmental systems.",
                "AuthorNamesDeduped": "Mike Sips;Patrick Köthur;Andrea Unger;Hans-Christian Hege;Doris Dransch",
                "AuthorNames": "Mike Sips;Patrick Köthur;Andrea Unger;Hans-Christian Hege;Doris Dransch",
                "AuthorAffiliation": "GFZ German Research Centre for Geosciences, Germany;GFZ German Research Centre for Geosciences, Germany;GFZ German Research Centre for Geosciences, Germany;Zuse Institute Berlin, Germany;GFZ German Research Centre for Geosciences, Germany",
                "InternalReferences": "0.1109/infvis.2001.963273;10.1109/infvis.1995.528685;10.1109/infvis.2004.11",
                "AuthorKeywords": "Time series analysis, multiscale visualization, visual analytics",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 1315,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1499,
                "i": [
                    1499
                ]
            }
        },
        {
            "name": "Yu-Shuen Wang",
            "value": 16,
            "numPapers": 10,
            "cluster": "10",
            "visible": 1,
            "index": 1558,
            "x": 314.8839258087693,
            "y": 238.1136561964842,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Focus+Context Metro Maps",
                "DOI": "10.1109/tvcg.2011.205",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.205",
                "FirstPage": 2528,
                "LastPage": 2535,
                "PaperType": "J",
                "Abstract": "We introduce a focus+context method to visualize a complicated metro map of a modern city on a small displaying area. The context of our work is with regard the popularity of mobile devices. The best route to the destination, which can be obtained from the arrival time of trains, is highlighted. The stations on the route enjoy larger spaces, whereas the other stations are rendered smaller and closer to fit the whole map into a screen. To simplify the navigation and route planning for visitors, we formulate various map characteristics such as octilinear transportation lines and regular station distances into energy terms. We then solve for the optimal layout in a least squares sense. In addition, we label the names of stations that are on the route of a passenger according to human preferences, occlusions, and consistencies of label positions using the graph cuts method. Our system achieves real-time performance by being able to report instant information because of the carefully designed energy terms. We apply our method to layout a number of metro maps and show the results and timing statistics to demonstrate the feasibility of our technique.",
                "AuthorNamesDeduped": "Yu-Shuen Wang;Ming-Te Chi",
                "AuthorNames": "Yu-Shuen Wang;Ming-Te Chi",
                "AuthorAffiliation": "National Chiao Tung University, Taiwan;National Chiao Tung University, Taiwan",
                "InternalReferences": "0.1109/infvis.1997.636786;10.1109/infvis.1996.559214;10.1109/tvcg.2008.132;10.1109/infvis.1998.729558;10.1109/visual.2005.1532818",
                "AuthorKeywords": "Focus+context visualization, metro map, octilinear layout, graph labeling, optimization",
                "AminerCitationCount": 67,
                "CitationCountCrossRef": 62,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 1152,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1556,
                "i": [
                    1556
                ]
            }
        },
        {
            "name": "Alan Keahey",
            "value": 47,
            "numPapers": 9,
            "cluster": "10",
            "visible": 1,
            "index": 1559,
            "x": -393.1551426687554,
            "y": 37.13534425733303,
            "vy": 0,
            "vx": 0,
            "r": 1.0541162924582614,
            "node": {
                "Conference": "InfoVis",
                "Year": 1996,
                "Title": "Techniques for non-linear magnification transformations",
                "DOI": "10.1109/infvis.1996.559214",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1996.559214",
                "FirstPage": 38,
                "LastPage": 45,
                "PaperType": "C",
                "Abstract": "This paper presents efficient methods for implementing general non-linear magnification transformations. Techniques are provided for: combining linear and non-linear magnifications, constraining the domain of magnifications, combining multiple transformations, and smoothly interpolating between magnified and normal views. In addition, piecewise linear methods are introduced which allow greater efficiency and expressiveness than their continuous counterparts.",
                "AuthorNamesDeduped": "Alan Keahey;Edward L. Robertson",
                "AuthorNames": "T.A. Keahey;E.L. Robertson",
                "AuthorAffiliation": "Department of Computer Science, Indiana University, Bloomington, IN, USA;Department of Computer Science, Indiana University, Bloomington, IN, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 217,
                "CitationCountCrossRef": 48,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 163,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3307,
                "i": [
                    3307
                ]
            }
        },
        {
            "name": "Rosane Minghim",
            "value": 71,
            "numPapers": 11,
            "cluster": "11",
            "visible": 1,
            "index": 1560,
            "x": 264.9006923064794,
            "y": -293.0488410070034,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "InfoVis",
                "Year": 2011,
                "Title": "Improved Similarity Trees and their Application to Visual Data Classification",
                "DOI": "10.1109/tvcg.2011.212",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.212",
                "FirstPage": 2459,
                "LastPage": 2468,
                "PaperType": "J",
                "Abstract": "An alternative form to multidimensional projections for the visual analysis of data represented in multidimensional spaces is the deployment of similarity trees, such as Neighbor Joining trees. They organize data objects on the visual plane emphasizing their levels of similarity with high capability of detecting and separating groups and subgroups of objects. Besides this similarity-based hierarchical data organization, some of their advantages include the ability to decrease point clutter; high precision; and a consistent view of the data set during focusing, offering a very intuitive way to view the general structure of the data set as well as to drill down to groups and subgroups of interest. Disadvantages of similarity trees based on neighbor joining strategies include their computational cost and the presence of virtual nodes that utilize too much of the visual space. This paper presents a highly improved version of the similarity tree technique. The improvements in the technique are given by two procedures. The first is a strategy that replaces virtual nodes by promoting real leaf nodes to their place, saving large portions of space in the display and maintaining the expressiveness and precision of the technique. The second improvement is an implementation that significantly accelerates the algorithm, impacting its use for larger data sets. We also illustrate the applicability of the technique in visual data mining, showing its advantages to support visual classification of data sets, with special attention to the case of image classification. We demonstrate the capabilities of the tree for analysis and iterative manipulation and employ those capabilities to support evolving to a satisfactory data organization and classification.",
                "AuthorNamesDeduped": "Jose Gustavo Paiva;Laura Florian;Hélio Pedrini;Guilherme P. Telles;Rosane Minghim",
                "AuthorNames": "Jose Gustavo Paiva;Laura Florian;Helio Pedrini;Guilherme Telles;Rosane Minghim",
                "AuthorAffiliation": "Federal University of Uberlândia and ICMC, Brazil;Universidade de Sao Paulo, Sao Paulo, São Paulo, BR;IC-University of Campinas, Brazil;IC-University of Campinas, Brazil;ICMC-University of São Paulo, Brazil",
                "InternalReferences": "0.1109/infvis.1999.801855;10.1109/tvcg.2009.140;10.1109/vast.2007.4389002;10.1109/tvcg.2008.138;10.1109/visual.1996.567787;10.1109/tvcg.2010.207;10.1109/tvcg.2010.170;10.1109/infvis.2002.1173148",
                "AuthorKeywords": "Similarity Trees, Multidimensional Projections, Image Classification",
                "AminerCitationCount": 65,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 42,
                "DownloadsXplore": 1471,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1571,
                "i": [
                    1571
                ]
            }
        },
        {
            "name": "S. Todd Barlow",
            "value": 34,
            "numPapers": 1,
            "cluster": "6",
            "visible": 1,
            "index": 1561,
            "x": 2.6229615763611904,
            "y": 395.1494907912308,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2001,
                "Title": "A comparison of 2-D visualizations of hierarchies",
                "DOI": "10.1109/infvis.2001.963290",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2001.963290",
                "FirstPage": 131,
                "LastPage": 138,
                "PaperType": "C",
                "Abstract": null,
                "AuthorNamesDeduped": "S. Todd Barlow;Padraic Neville",
                "AuthorNames": "T. Barlow;P. Neville",
                "AuthorAffiliation": "SAS Institute, Inc., USA;SAS Institute, Inc., USA",
                "InternalReferences": "0.1109/infvis.1998.729557;10.1109/visual.1992.235217",
                "AuthorKeywords": null,
                "AminerCitationCount": 197,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 8,
                "DownloadsXplore": 425,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2844,
                "i": [
                    2844
                ]
            }
        },
        {
            "name": "Padraic Neville",
            "value": 34,
            "numPapers": 1,
            "cluster": "6",
            "visible": 1,
            "index": 1562,
            "x": -268.9398105601687,
            "y": -289.6918678455793,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "InfoVis",
                "Year": 2001,
                "Title": "A comparison of 2-D visualizations of hierarchies",
                "DOI": "10.1109/infvis.2001.963290",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2001.963290",
                "FirstPage": 131,
                "LastPage": 138,
                "PaperType": "C",
                "Abstract": null,
                "AuthorNamesDeduped": "S. Todd Barlow;Padraic Neville",
                "AuthorNames": "T. Barlow;P. Neville",
                "AuthorAffiliation": "SAS Institute, Inc., USA;SAS Institute, Inc., USA",
                "InternalReferences": "0.1109/infvis.1998.729557;10.1109/visual.1992.235217",
                "AuthorKeywords": null,
                "AminerCitationCount": 197,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 8,
                "DownloadsXplore": 425,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2844,
                "i": [
                    2844
                ]
            }
        },
        {
            "name": "Christopher D. Shaw",
            "value": 79,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1563,
            "x": 394.11794867885664,
            "y": 31.95375610425429,
            "vy": 0,
            "vx": 0,
            "r": 1.0909614277489925,
            "node": {
                "Conference": "VAST",
                "Year": 2009,
                "Title": "Capturing and supporting the analysis process",
                "DOI": "10.1109/vast.2009.5333020",
                "Link": "http://dx.doi.org/10.1109/VAST.2009.5333020",
                "FirstPage": 131,
                "LastPage": 138,
                "PaperType": "C",
                "Abstract": "Visual analytics tools provide powerful visual representations in order to support the sense-making process. In this process, analysts typically iterate through sequences of steps many times, varying parameters each time. Few visual analytics tools support this process well, nor do they provide support for visualizing and understanding the analysis process itself. To help analysts understand, explore, reference, and reuse their analysis process, we present a visual analytics system named CzSaw (See-Saw) that provides an editable and re-playable history navigation channel in addition to multiple visual representations of document collections and the entities within them (in a manner inspired by Jigsaw). Conventional history navigation tools range from basic undo and redo to branching timelines of user actions. In CzSaw's approach to this, first, user interactions are translated into a script language that drives the underlying scripting-driven propagation system. The latter allows analysts to edit analysis steps, and ultimately to program them. Second, on this base, we build both a history view showing progress and alternative paths, and a dependency graph showing the underlying logic of the analysis and dependency relations among the results of each step. These tools result in a visual model of the sense-making process, providing a way for analysts to visualize their analysis process, to reinterpret the problem, explore alternative paths, extract analysis patterns from existing history, and reuse them with other related analyses.",
                "AuthorNamesDeduped": "Nazanin Kadivar;Victor Y. Chen;Dustin T. Dunsmuir;Eric Lee;Cheryl Z. Qian;John Dill;Christopher D. Shaw;Robert F. Woodbury",
                "AuthorNames": "Nazanin Kadivar;Victor Chen;Dustin Dunsmuir;Eric Lee;Cheryl Qian;John Dill;Christopher Shaw;Robert Woodbury",
                "AuthorAffiliation": "School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada;School of Interactive Arts and Technology, Simon Fraser University, Canada",
                "InternalReferences": "0.1109/infvis.2005.1532136;10.1109/vast.2008.4677362;10.1109/vast.2007.4388992;10.1109/tvcg.2008.137;10.1109/vast.2007.4389006;10.1109/vast.2008.4677365;10.1109/infvis.2004.2;10.1109/vast.2007.4389002;10.1109/tvcg.2007.70515;10.1109/vast.2007.4389001;10.1109/vast.2008.4677378",
                "AuthorKeywords": "Visual Analytics, Sense-making, Analysis Process, Visual History",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 688,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1859,
                "i": [
                    1859
                ]
            }
        },
        {
            "name": "Thomas Butkiewicz",
            "value": 15,
            "numPapers": 26,
            "cluster": "5",
            "visible": 1,
            "index": 1564,
            "x": -312.2945842298235,
            "y": 242.7387333342614,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "SciVis",
                "Year": 2016,
                "Title": "Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields",
                "DOI": "10.1109/tvcg.2016.2598448",
                "Link": "http://dx.doi.org/10.1109/TVCG.2016.2598448",
                "FirstPage": 990,
                "LastPage": 999,
                "PaperType": "J",
                "Abstract": "Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.",
                "AuthorNamesDeduped": "Andrew H. Stevens;Thomas Butkiewicz;Colin Ware",
                "AuthorNames": "Andrew H. Stevens;Thomas Butkiewicz;Colin Ware",
                "AuthorAffiliation": "The Center for Coastal and Ocean Mapping, The University of New Hampshire;The Center for Coastal and Ocean Mapping, The University of New Hampshire;The Center for Coastal and Ocean Mapping, The University of New Hampshire",
                "InternalReferences": "0.1109/visual.1996.568139;10.1109/tvcg.2009.126;10.1109/visual.2005.1532859;10.1109/visual.2004.59;10.1109/visual.1991.175792;10.1109/tvcg.2012.216;10.1109/visual.1999.809918;10.1109/visual.1998.745317;10.1109/visual.2005.1532772;10.1109/tvcg.2009.138;10.1109/visual.1990.146360;10.1109/visual.1996.567777",
                "AuthorKeywords": "Flow visualization;3D vector fields;Cutting planes;Glyphs;Perception;Evaluation;Human factors",
                "AminerCitationCount": 2,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 711,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 953,
                "i": [
                    953
                ]
            }
        },
        {
            "name": "Mario Hlawitschka",
            "value": 82,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1565,
            "x": 66.32983705191707,
            "y": -390.06454942312575,
            "vy": 0,
            "vx": 0,
            "r": 1.0944156591824985,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Interactive Comparison of Scalar fields Based on Largest Contours with Applications to Flow Visualization",
                "DOI": "10.1109/tvcg.2008.143",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.143",
                "FirstPage": 1475,
                "LastPage": 1482,
                "PaperType": "J",
                "Abstract": "Understanding fluid flow data, especially vortices, is still a challenging task. Sophisticated visualization tools help to gain insight. In this paper, we present a novel approach for the interactive comparison of scalar fields using isosurfaces, and its application to fluid flow datasets. Features in two scalar fields are defined by largest contour segmentation after topological simplification. These features are matched using a volumetric similarity measure based on spatial overlap of individual features. The relationships defined by this similarity measure are ranked and presented in a thumbnail gallery of feature pairs and a graph representation showing all relationships between individual contours. Additionally, linked views of the contour trees are provided to ease navigation. The main render view shows the selected features overlapping each other. Thus, by displaying individual features and their relationships in a structured fashion, we enable exploratory visualization of correlations between similar structures in two scalar fields. We demonstrate the utility of our approach by applying it to a number of complex fluid flow datasets, where the emphasis is put on the comparison of vortex related scalar quantities.",
                "AuthorNamesDeduped": "Dominic Schneider;Alexander Wiebel;Hamish A. Carr;Mario Hlawitschka;Gerik Scheuermann",
                "AuthorNames": "Dominic Schneider;Alexander Wiebel;Hamish Carr;Mario Hlawitschka;Gerik Scheuermann",
                "AuthorAffiliation": "University of Leipzig, Germany;University of Leipzig, Germany;University College Dublin, Ireland;University of Leipzig, Germany;University of Leipzig, Germany",
                "InternalReferences": "0.1109/tvcg.2006.164;10.1109/visual.2001.964519;10.1109/visual.2004.107;10.1109/tvcg.2007.70615;10.1109/visual.2005.1532830;10.1109/tvcg.2006.165;10.1109/visual.2004.96;10.1109/visual.2003.1250374;10.1109/tvcg.2007.70519;10.1109/visual.2005.1532848;10.1109/visual.1997.663875;10.1109/visual.2005.1532835",
                "AuthorKeywords": "Scalar topology, comparative visualization, contour tree, largest contours, flow visualization",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 36,
                "DownloadsXplore": 625,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2047,
                "i": [
                    2047
                ]
            }
        },
        {
            "name": "Marc Ruiz 0002",
            "value": 10,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1566,
            "x": 214.64378130074778,
            "y": 332.532776052101,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Automatic Transfer Functions Based on Informational Divergence",
                "DOI": "10.1109/tvcg.2011.173",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.173",
                "FirstPage": 1932,
                "LastPage": 1941,
                "PaperType": "J",
                "Abstract": "In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the data importance, or highly relevant data value interval, or spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions including the gradient information. The transfer functions are obtained by minimizing the informational divergence or Kullback-Leibler distance between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques.",
                "AuthorNamesDeduped": "Marc Ruiz 0002;Anton Bardera;Imma Boada;Ivan Viola;Miquel Feixas;Mateu Sbert",
                "AuthorNames": "Marc Ruiz;Anton Bardera;Imma Boada;Ivan Viola;Miquel Feixas;Mateu Sbert",
                "AuthorAffiliation": "University of Girona, Spain;University of Girona, Spain;University of Girona, Spain;University of Bergen, Norway;University of Girona, Spain;University of Girona, Spain",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/tvcg.2006.137;10.1109/tvcg.2006.159;10.1109/tvcg.2010.131;10.1109/tvcg.2006.152;10.1109/visual.2003.1250414;10.1109/tvcg.2007.70576;10.1109/tvcg.2009.120;10.1109/visual.1996.568113;10.1109/tvcg.2008.140;10.1109/visual.2005.1532834;10.1109/visual.2002.1183785;10.1109/tvcg.2006.148;10.1109/visual.2005.1532833",
                "AuthorKeywords": "Transfer function, Information theory, Informational divergence, Kullback-Leibler distance",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 840,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1646,
                "i": [
                    1646
                ]
            }
        },
        {
            "name": "Anton Bardera",
            "value": 10,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1567,
            "x": -383.01650076542455,
            "y": -100.24150907388386,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Automatic Transfer Functions Based on Informational Divergence",
                "DOI": "10.1109/tvcg.2011.173",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.173",
                "FirstPage": 1932,
                "LastPage": 1941,
                "PaperType": "J",
                "Abstract": "In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the data importance, or highly relevant data value interval, or spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions including the gradient information. The transfer functions are obtained by minimizing the informational divergence or Kullback-Leibler distance between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques.",
                "AuthorNamesDeduped": "Marc Ruiz 0002;Anton Bardera;Imma Boada;Ivan Viola;Miquel Feixas;Mateu Sbert",
                "AuthorNames": "Marc Ruiz;Anton Bardera;Imma Boada;Ivan Viola;Miquel Feixas;Mateu Sbert",
                "AuthorAffiliation": "University of Girona, Spain;University of Girona, Spain;University of Girona, Spain;University of Bergen, Norway;University of Girona, Spain;University of Girona, Spain",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/tvcg.2006.137;10.1109/tvcg.2006.159;10.1109/tvcg.2010.131;10.1109/tvcg.2006.152;10.1109/visual.2003.1250414;10.1109/tvcg.2007.70576;10.1109/tvcg.2009.120;10.1109/visual.1996.568113;10.1109/tvcg.2008.140;10.1109/visual.2005.1532834;10.1109/visual.2002.1183785;10.1109/tvcg.2006.148;10.1109/visual.2005.1532833",
                "AuthorKeywords": "Transfer function, Information theory, Informational divergence, Kullback-Leibler distance",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 840,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1646,
                "i": [
                    1646
                ]
            }
        },
        {
            "name": "Imma Boada",
            "value": 10,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1568,
            "x": 350.2482824295204,
            "y": -184.8679005538034,
            "vy": 0,
            "vx": 0,
            "r": 1.0115141047783534,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Automatic Transfer Functions Based on Informational Divergence",
                "DOI": "10.1109/tvcg.2011.173",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.173",
                "FirstPage": 1932,
                "LastPage": 1941,
                "PaperType": "J",
                "Abstract": "In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the data importance, or highly relevant data value interval, or spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions including the gradient information. The transfer functions are obtained by minimizing the informational divergence or Kullback-Leibler distance between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques.",
                "AuthorNamesDeduped": "Marc Ruiz 0002;Anton Bardera;Imma Boada;Ivan Viola;Miquel Feixas;Mateu Sbert",
                "AuthorNames": "Marc Ruiz;Anton Bardera;Imma Boada;Ivan Viola;Miquel Feixas;Mateu Sbert",
                "AuthorAffiliation": "University of Girona, Spain;University of Girona, Spain;University of Girona, Spain;University of Bergen, Norway;University of Girona, Spain;University of Girona, Spain",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/tvcg.2006.137;10.1109/tvcg.2006.159;10.1109/tvcg.2010.131;10.1109/tvcg.2006.152;10.1109/visual.2003.1250414;10.1109/tvcg.2007.70576;10.1109/tvcg.2009.120;10.1109/visual.1996.568113;10.1109/tvcg.2008.140;10.1109/visual.2005.1532834;10.1109/visual.2002.1183785;10.1109/tvcg.2006.148;10.1109/visual.2005.1532833",
                "AuthorKeywords": "Transfer function, Information theory, Informational divergence, Kullback-Leibler distance",
                "AminerCitationCount": 82,
                "CitationCountCrossRef": 43,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 840,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1646,
                "i": [
                    1646
                ]
            }
        },
        {
            "name": "Christopher Koehler",
            "value": 0,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1569,
            "x": -133.42822389426615,
            "y": 373.0240060216254,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Vortex Visualization in Ultra Low Reynolds Number Insect Flight",
                "DOI": "10.1109/tvcg.2011.260",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.260",
                "FirstPage": 2071,
                "LastPage": 2079,
                "PaperType": "J",
                "Abstract": "We present the visual analysis of a biologically inspired CFD simulation of the deformable flapping wings of a dragonfly as it takes off and begins to maneuver, using vortex detection and integration-based flow lines. The additional seed placement and perceptual challenges introduced by having multiple dynamically deforming objects in the highly unsteady 3D flow domain are addressed. A brief overview of the high speed photogrammetry setup used to capture the dragonfly takeoff, parametric surfaces used for wing reconstruction, CFD solver and underlying flapping flight theory is presented to clarify the importance of several unsteady flight mechanisms, such as the leading edge vortex, that are captured visually. A novel interactive seed placement method is used to simplify the generation of seed curves that stay in the vicinity of relevant flow phenomena as they move with the flapping wings. This method allows a user to define and evaluate the quality of a seed's trajectory over time while working with a single time step. The seed curves are then used to place particles, streamlines and generalized streak lines. The novel concept of flowing seeds is also introduced in order to add visual context about the instantaneous vector fields surrounding smoothly animate streak lines. Tests show this method to be particularly effective at visually capturing vortices that move quickly or that exist for a very brief period of time. In addition, an automatic camera animation method is used to address occlusion issues caused when animating the immersed wing boundaries alongside many geometric flow lines. Each visualization method is presented at multiple time steps during the up-stroke and down-stroke to highlight the formation, attachment and shedding of the leading edge vortices in pairs of wings. Also, the visualizations show evidence of wake capture at stroke reversal which suggests the existence of previously unknown unsteady lift generation mechanisms that are unique to quad wing insects.",
                "AuthorNamesDeduped": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorNames": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorAffiliation": "College of Engineering and Computer Science, Wright State University, USA;College of Engineering and Computer Science, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA",
                "InternalReferences": "0.1109/visual.2002.1183789;10.1109/visual.2005.1532830;10.1109/tvcg.2007.70557;10.1109/visual.2005.1532831;10.1109/tvcg.2008.163;10.1109/tvcg.2010.169;10.1109/visual.2005.1532848;10.1109/visual.2005.1532850;10.1109/tvcg.2010.212;10.1109/visual.2000.885690;10.1109/tvcg.2010.198;10.1109/visual.2004.113;10.1109/visual.1998.745296;10.1109/tvcg.2007.70595;10.1109/tvcg.2009.190;10.1109/tvcg.2008.133;10.1109/tvcg.2006.199;10.1109/visual.2002.1183821;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.166;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Flow visualization, flowing seed points, streak lines, streamlines, insect flight, vortex visualization, unsteady flow",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1261,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1657,
                "i": [
                    1657
                ]
            }
        },
        {
            "name": "Thomas Wischgoll",
            "value": 0,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1570,
            "x": -153.6371970921531,
            "y": -365.3020827611949,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Vortex Visualization in Ultra Low Reynolds Number Insect Flight",
                "DOI": "10.1109/tvcg.2011.260",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.260",
                "FirstPage": 2071,
                "LastPage": 2079,
                "PaperType": "J",
                "Abstract": "We present the visual analysis of a biologically inspired CFD simulation of the deformable flapping wings of a dragonfly as it takes off and begins to maneuver, using vortex detection and integration-based flow lines. The additional seed placement and perceptual challenges introduced by having multiple dynamically deforming objects in the highly unsteady 3D flow domain are addressed. A brief overview of the high speed photogrammetry setup used to capture the dragonfly takeoff, parametric surfaces used for wing reconstruction, CFD solver and underlying flapping flight theory is presented to clarify the importance of several unsteady flight mechanisms, such as the leading edge vortex, that are captured visually. A novel interactive seed placement method is used to simplify the generation of seed curves that stay in the vicinity of relevant flow phenomena as they move with the flapping wings. This method allows a user to define and evaluate the quality of a seed's trajectory over time while working with a single time step. The seed curves are then used to place particles, streamlines and generalized streak lines. The novel concept of flowing seeds is also introduced in order to add visual context about the instantaneous vector fields surrounding smoothly animate streak lines. Tests show this method to be particularly effective at visually capturing vortices that move quickly or that exist for a very brief period of time. In addition, an automatic camera animation method is used to address occlusion issues caused when animating the immersed wing boundaries alongside many geometric flow lines. Each visualization method is presented at multiple time steps during the up-stroke and down-stroke to highlight the formation, attachment and shedding of the leading edge vortices in pairs of wings. Also, the visualizations show evidence of wake capture at stroke reversal which suggests the existence of previously unknown unsteady lift generation mechanisms that are unique to quad wing insects.",
                "AuthorNamesDeduped": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorNames": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorAffiliation": "College of Engineering and Computer Science, Wright State University, USA;College of Engineering and Computer Science, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA",
                "InternalReferences": "0.1109/visual.2002.1183789;10.1109/visual.2005.1532830;10.1109/tvcg.2007.70557;10.1109/visual.2005.1532831;10.1109/tvcg.2008.163;10.1109/tvcg.2010.169;10.1109/visual.2005.1532848;10.1109/visual.2005.1532850;10.1109/tvcg.2010.212;10.1109/visual.2000.885690;10.1109/tvcg.2010.198;10.1109/visual.2004.113;10.1109/visual.1998.745296;10.1109/tvcg.2007.70595;10.1109/tvcg.2009.190;10.1109/tvcg.2008.133;10.1109/tvcg.2006.199;10.1109/visual.2002.1183821;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.166;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Flow visualization, flowing seed points, streak lines, streamlines, insect flight, vortex visualization, unsteady flow",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1261,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1657,
                "i": [
                    1657
                ]
            }
        },
        {
            "name": "Haibo Dong",
            "value": 0,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1571,
            "x": 360.1599083908259,
            "y": 165.63465937994945,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Vortex Visualization in Ultra Low Reynolds Number Insect Flight",
                "DOI": "10.1109/tvcg.2011.260",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.260",
                "FirstPage": 2071,
                "LastPage": 2079,
                "PaperType": "J",
                "Abstract": "We present the visual analysis of a biologically inspired CFD simulation of the deformable flapping wings of a dragonfly as it takes off and begins to maneuver, using vortex detection and integration-based flow lines. The additional seed placement and perceptual challenges introduced by having multiple dynamically deforming objects in the highly unsteady 3D flow domain are addressed. A brief overview of the high speed photogrammetry setup used to capture the dragonfly takeoff, parametric surfaces used for wing reconstruction, CFD solver and underlying flapping flight theory is presented to clarify the importance of several unsteady flight mechanisms, such as the leading edge vortex, that are captured visually. A novel interactive seed placement method is used to simplify the generation of seed curves that stay in the vicinity of relevant flow phenomena as they move with the flapping wings. This method allows a user to define and evaluate the quality of a seed's trajectory over time while working with a single time step. The seed curves are then used to place particles, streamlines and generalized streak lines. The novel concept of flowing seeds is also introduced in order to add visual context about the instantaneous vector fields surrounding smoothly animate streak lines. Tests show this method to be particularly effective at visually capturing vortices that move quickly or that exist for a very brief period of time. In addition, an automatic camera animation method is used to address occlusion issues caused when animating the immersed wing boundaries alongside many geometric flow lines. Each visualization method is presented at multiple time steps during the up-stroke and down-stroke to highlight the formation, attachment and shedding of the leading edge vortices in pairs of wings. Also, the visualizations show evidence of wake capture at stroke reversal which suggests the existence of previously unknown unsteady lift generation mechanisms that are unique to quad wing insects.",
                "AuthorNamesDeduped": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorNames": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorAffiliation": "College of Engineering and Computer Science, Wright State University, USA;College of Engineering and Computer Science, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA",
                "InternalReferences": "0.1109/visual.2002.1183789;10.1109/visual.2005.1532830;10.1109/tvcg.2007.70557;10.1109/visual.2005.1532831;10.1109/tvcg.2008.163;10.1109/tvcg.2010.169;10.1109/visual.2005.1532848;10.1109/visual.2005.1532850;10.1109/tvcg.2010.212;10.1109/visual.2000.885690;10.1109/tvcg.2010.198;10.1109/visual.2004.113;10.1109/visual.1998.745296;10.1109/tvcg.2007.70595;10.1109/tvcg.2009.190;10.1109/tvcg.2008.133;10.1109/tvcg.2006.199;10.1109/visual.2002.1183821;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.166;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Flow visualization, flowing seed points, streak lines, streamlines, insect flight, vortex visualization, unsteady flow",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1261,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1657,
                "i": [
                    1657
                ]
            }
        },
        {
            "name": "Zachary Gaston",
            "value": 0,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1572,
            "x": -377.57538724325104,
            "y": 121.1892196035151,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Vortex Visualization in Ultra Low Reynolds Number Insect Flight",
                "DOI": "10.1109/tvcg.2011.260",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.260",
                "FirstPage": 2071,
                "LastPage": 2079,
                "PaperType": "J",
                "Abstract": "We present the visual analysis of a biologically inspired CFD simulation of the deformable flapping wings of a dragonfly as it takes off and begins to maneuver, using vortex detection and integration-based flow lines. The additional seed placement and perceptual challenges introduced by having multiple dynamically deforming objects in the highly unsteady 3D flow domain are addressed. A brief overview of the high speed photogrammetry setup used to capture the dragonfly takeoff, parametric surfaces used for wing reconstruction, CFD solver and underlying flapping flight theory is presented to clarify the importance of several unsteady flight mechanisms, such as the leading edge vortex, that are captured visually. A novel interactive seed placement method is used to simplify the generation of seed curves that stay in the vicinity of relevant flow phenomena as they move with the flapping wings. This method allows a user to define and evaluate the quality of a seed's trajectory over time while working with a single time step. The seed curves are then used to place particles, streamlines and generalized streak lines. The novel concept of flowing seeds is also introduced in order to add visual context about the instantaneous vector fields surrounding smoothly animate streak lines. Tests show this method to be particularly effective at visually capturing vortices that move quickly or that exist for a very brief period of time. In addition, an automatic camera animation method is used to address occlusion issues caused when animating the immersed wing boundaries alongside many geometric flow lines. Each visualization method is presented at multiple time steps during the up-stroke and down-stroke to highlight the formation, attachment and shedding of the leading edge vortices in pairs of wings. Also, the visualizations show evidence of wake capture at stroke reversal which suggests the existence of previously unknown unsteady lift generation mechanisms that are unique to quad wing insects.",
                "AuthorNamesDeduped": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorNames": "Christopher Koehler;Thomas Wischgoll;Haibo Dong;Zachary Gaston",
                "AuthorAffiliation": "College of Engineering and Computer Science, Wright State University, USA;College of Engineering and Computer Science, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA;College of Mechanical and Materials Engineering, Wright State University, USA",
                "InternalReferences": "0.1109/visual.2002.1183789;10.1109/visual.2005.1532830;10.1109/tvcg.2007.70557;10.1109/visual.2005.1532831;10.1109/tvcg.2008.163;10.1109/tvcg.2010.169;10.1109/visual.2005.1532848;10.1109/visual.2005.1532850;10.1109/tvcg.2010.212;10.1109/visual.2000.885690;10.1109/tvcg.2010.198;10.1109/visual.2004.113;10.1109/visual.1998.745296;10.1109/tvcg.2007.70595;10.1109/tvcg.2009.190;10.1109/tvcg.2008.133;10.1109/tvcg.2006.199;10.1109/visual.2002.1183821;10.1109/tvcg.2007.70545;10.1109/tvcg.2010.166;10.1109/tvcg.2006.201",
                "AuthorKeywords": "Flow visualization, flowing seed points, streak lines, streamlines, insect flight, vortex visualization, unsteady flow",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 1261,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1657,
                "i": [
                    1657
                ]
            }
        },
        {
            "name": "Manfred Weiler",
            "value": 71,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1573,
            "x": 196.6126842260305,
            "y": -344.51916115280903,
            "vy": 0,
            "vx": 0,
            "r": 1.0817501439263097,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Hardware-based ray casting for tetrahedral meshes",
                "DOI": "10.1109/visual.2003.1250390",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250390",
                "FirstPage": 333,
                "LastPage": 340,
                "PaperType": "C",
                "Abstract": "We present the first implementation of a volume ray casting algorithm for tetrahedral meshes running on off-the-shelf programmable graphics hardware. Our implementation avoids the memory transfer bottleneck of the graphics bus since the complete mesh data is stored in the local memory of the graphics adapter and all computations, in particular ray traversal and ray integration, are performed by the graphics processing unit. Analogously to other ray casting algorithms, our algorithm does not require an expensive cell sorting. Provided that the graphics adapter offers enough texture memory, our implementation performs comparable to the fastest published volume rendering algorithms for unstructured meshes. Our approach works with cyclic and/or non-convex meshes and supports early ray termination. Accurate ray integration is guaranteed by applying pre-integrated volume rendering. In order to achieve almost interactive modifications of transfer functions, we propose a new method for computing three-dimensional pre-integration tables.",
                "AuthorNamesDeduped": "Manfred Weiler;Martin Kraus 0001;Markus Merz;Thomas Ertl",
                "AuthorNames": "M. Weiler;M. Kraus;M. Merz;T. Ertl",
                "AuthorAffiliation": "Visualization and Interactive Systems Group, University of Stuttgart, Stuttgart, Germany;Visualization and Interactive Systems Group, University of Stuttgart, Stuttgart, Germany;Visualization and Interactive Systems Group, University of Stuttgart, Stuttgart, Germany;Visualization and Interactive Systems Group, University of Stuttgart, Stuttgart, Germany",
                "InternalReferences": "0.1109/visual.2000.885683",
                "AuthorKeywords": "ray casting, pixel shading, programmable graphics hardware, cell projection, tetrahedral meshes, unstructured meshes, volume visualization, pre-integrated volume rendering",
                "AminerCitationCount": 263,
                "CitationCountCrossRef": 53,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 308,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2663,
                "i": [
                    2663
                ]
            }
        },
        {
            "name": "Darrel Palke",
            "value": 22,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1574,
            "x": 87.77115229552824,
            "y": 386.97057359018294,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Asymmetric Tensor field Visualization for Surfaces",
                "DOI": "10.1109/tvcg.2011.170",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.170",
                "FirstPage": 1979,
                "LastPage": 1988,
                "PaperType": "J",
                "Abstract": "Asymmetric tensor field visualization can provide important insight into fluid flows and solid deformations. Existing techniques for asymmetric tensor fields focus on the analysis, and simply use evenly-spaced hyperstreamlines on surfaces following eigenvectors and dual-eigenvectors in the tensor field. In this paper, we describe a hybrid visualization technique in which hyperstreamlines and elliptical glyphs are used in real and complex domains, respectively. This enables a more faithful representation of flow behaviors inside complex domains. In addition, we encode tensor magnitude, an important quantity in tensor field analysis, using the density of hyperstreamlines and sizes of glyphs. This allows colors to be used to encode other important tensor quantities. To facilitate quick visual exploration of the data from different viewpoints and at different resolutions, we employ an efficient image-space approach in which hyperstreamlines and glyphs are generated quickly in the image plane. The combination of these techniques leads to an efficient tensor field visualization system for domain scientists. We demonstrate the effectiveness of our visualization technique through applications to complex simulated engine fluid flow and earthquake deformation data. Feedback from domain expert scientists, who are also co-authors, is provided.",
                "AuthorNamesDeduped": "Darrel Palke;Zhongzang Lin;Guoning Chen;Harry Yeh;Paul Vincent;Robert S. Laramee;Eugene Zhang",
                "AuthorNames": "Darrel Palke;Zhongzang Lin;Guoning Chen;Harry Yeh;Paul Vincent;Robert Laramee;Eugene Zhang",
                "AuthorAffiliation": "SCI, University of Utah, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Swansea University, UK;Oregon State University, USA",
                "InternalReferences": "0.1109/visual.2003.1250379;10.1109/tvcg.2010.199;10.1109/visual.2004.105;10.1109/visual.1993.398849;10.1109/visual.2005.1532773;10.1109/visual.2005.1532770;10.1109/tvcg.2006.134;10.1109/visual.2005.1532850;10.1109/tvcg.2006.116;10.1109/visual.2005.1532841;10.1109/visual.2003.1250363;10.1109/visual.1998.745295;10.1109/visual.2004.80;10.1109/visual.1998.745294;10.1109/visual.2005.1532832;10.1109/visual.1999.809905;10.1109/visual.2000.885690;10.1109/visual.1994.346326",
                "AuthorKeywords": "Aasymmetric tensor fields, vector fields, glyph packing, hyperstreamline placement, view-dependent",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 728,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1665,
                "i": [
                    1665
                ]
            }
        },
        {
            "name": "Zhongzang Lin",
            "value": 22,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1575,
            "x": -326.21812759804044,
            "y": -226.12326997995717,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Asymmetric Tensor field Visualization for Surfaces",
                "DOI": "10.1109/tvcg.2011.170",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.170",
                "FirstPage": 1979,
                "LastPage": 1988,
                "PaperType": "J",
                "Abstract": "Asymmetric tensor field visualization can provide important insight into fluid flows and solid deformations. Existing techniques for asymmetric tensor fields focus on the analysis, and simply use evenly-spaced hyperstreamlines on surfaces following eigenvectors and dual-eigenvectors in the tensor field. In this paper, we describe a hybrid visualization technique in which hyperstreamlines and elliptical glyphs are used in real and complex domains, respectively. This enables a more faithful representation of flow behaviors inside complex domains. In addition, we encode tensor magnitude, an important quantity in tensor field analysis, using the density of hyperstreamlines and sizes of glyphs. This allows colors to be used to encode other important tensor quantities. To facilitate quick visual exploration of the data from different viewpoints and at different resolutions, we employ an efficient image-space approach in which hyperstreamlines and glyphs are generated quickly in the image plane. The combination of these techniques leads to an efficient tensor field visualization system for domain scientists. We demonstrate the effectiveness of our visualization technique through applications to complex simulated engine fluid flow and earthquake deformation data. Feedback from domain expert scientists, who are also co-authors, is provided.",
                "AuthorNamesDeduped": "Darrel Palke;Zhongzang Lin;Guoning Chen;Harry Yeh;Paul Vincent;Robert S. Laramee;Eugene Zhang",
                "AuthorNames": "Darrel Palke;Zhongzang Lin;Guoning Chen;Harry Yeh;Paul Vincent;Robert Laramee;Eugene Zhang",
                "AuthorAffiliation": "SCI, University of Utah, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Swansea University, UK;Oregon State University, USA",
                "InternalReferences": "0.1109/visual.2003.1250379;10.1109/tvcg.2010.199;10.1109/visual.2004.105;10.1109/visual.1993.398849;10.1109/visual.2005.1532773;10.1109/visual.2005.1532770;10.1109/tvcg.2006.134;10.1109/visual.2005.1532850;10.1109/tvcg.2006.116;10.1109/visual.2005.1532841;10.1109/visual.2003.1250363;10.1109/visual.1998.745295;10.1109/visual.2004.80;10.1109/visual.1998.745294;10.1109/visual.2005.1532832;10.1109/visual.1999.809905;10.1109/visual.2000.885690;10.1109/visual.1994.346326",
                "AuthorKeywords": "Aasymmetric tensor fields, vector fields, glyph packing, hyperstreamline placement, view-dependent",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 728,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1665,
                "i": [
                    1665
                ]
            }
        },
        {
            "name": "Paul Vincent",
            "value": 22,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1576,
            "x": 393.4119625810154,
            "y": -53.63793152381731,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Asymmetric Tensor field Visualization for Surfaces",
                "DOI": "10.1109/tvcg.2011.170",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.170",
                "FirstPage": 1979,
                "LastPage": 1988,
                "PaperType": "J",
                "Abstract": "Asymmetric tensor field visualization can provide important insight into fluid flows and solid deformations. Existing techniques for asymmetric tensor fields focus on the analysis, and simply use evenly-spaced hyperstreamlines on surfaces following eigenvectors and dual-eigenvectors in the tensor field. In this paper, we describe a hybrid visualization technique in which hyperstreamlines and elliptical glyphs are used in real and complex domains, respectively. This enables a more faithful representation of flow behaviors inside complex domains. In addition, we encode tensor magnitude, an important quantity in tensor field analysis, using the density of hyperstreamlines and sizes of glyphs. This allows colors to be used to encode other important tensor quantities. To facilitate quick visual exploration of the data from different viewpoints and at different resolutions, we employ an efficient image-space approach in which hyperstreamlines and glyphs are generated quickly in the image plane. The combination of these techniques leads to an efficient tensor field visualization system for domain scientists. We demonstrate the effectiveness of our visualization technique through applications to complex simulated engine fluid flow and earthquake deformation data. Feedback from domain expert scientists, who are also co-authors, is provided.",
                "AuthorNamesDeduped": "Darrel Palke;Zhongzang Lin;Guoning Chen;Harry Yeh;Paul Vincent;Robert S. Laramee;Eugene Zhang",
                "AuthorNames": "Darrel Palke;Zhongzang Lin;Guoning Chen;Harry Yeh;Paul Vincent;Robert Laramee;Eugene Zhang",
                "AuthorAffiliation": "SCI, University of Utah, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Oregon State University, USA;Swansea University, UK;Oregon State University, USA",
                "InternalReferences": "0.1109/visual.2003.1250379;10.1109/tvcg.2010.199;10.1109/visual.2004.105;10.1109/visual.1993.398849;10.1109/visual.2005.1532773;10.1109/visual.2005.1532770;10.1109/tvcg.2006.134;10.1109/visual.2005.1532850;10.1109/tvcg.2006.116;10.1109/visual.2005.1532841;10.1109/visual.2003.1250363;10.1109/visual.1998.745295;10.1109/visual.2004.80;10.1109/visual.1998.745294;10.1109/visual.2005.1532832;10.1109/visual.1999.809905;10.1109/visual.2000.885690;10.1109/visual.1994.346326",
                "AuthorKeywords": "Aasymmetric tensor fields, vector fields, glyph packing, hyperstreamline placement, view-dependent",
                "AminerCitationCount": 34,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 47,
                "DownloadsXplore": 728,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1665,
                "i": [
                    1665
                ]
            }
        },
        {
            "name": "Allen R. Sanderson",
            "value": 16,
            "numPapers": 26,
            "cluster": "11",
            "visible": 1,
            "index": 1577,
            "x": -253.93833566853888,
            "y": 305.39371584545165,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Analysis of Recurrent Patterns in Toroidal Magnetic fields",
                "DOI": "10.1109/tvcg.2010.133",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.133",
                "FirstPage": 1431,
                "LastPage": 1440,
                "PaperType": "J",
                "Abstract": "In the development of magnetic confinement fusion which will potentially be a future source for low cost power, physicists must be able to analyze the magnetic field that confines the burning plasma. While the magnetic field can be described as a vector field, traditional techniques for analyzing the field's topology cannot be used because of its Hamiltonian nature. In this paper we describe a technique developed as a collaboration between physicists and computer scientists that determines the topology of a toroidal magnetic field using fieldlines with near minimal lengths. More specifically, we analyze the Poincaré map of the sampled fieldlines in a Poincaré section including identifying critical points and other topological features of interest to physicists. The technique has been deployed into an interactiveparallel visualization tool which physicists are using to gain new insight into simulations of magnetically confined burning plasmas.",
                "AuthorNamesDeduped": "Allen R. Sanderson;Guoning Chen;Xavier Tricoche;David Pugmire;Scott Kruger;Joshua A. Breslau",
                "AuthorNames": "Allen Sanderson;Guoning Chen;Xavier Tricoche;David Pugmire;Scott Kruger;Joshua Breslau",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Purdue University, USA;Oak Ridge National Laboratory, USA;Tech-X Corporation, USA;Princeton Plasma Physics Laboratory, USA",
                "InternalReferences": "0.1109/visual.2005.1532842;10.1109/visual.2001.964507;10.1109/visual.2003.1250376;10.1109/visual.1997.663858;10.1109/visual.1998.745296",
                "AuthorKeywords": "Confined magnetic fusion, magnetic field visualization, Poincare map, periodic magnetic fieldlines, recurrent patterns",
                "AminerCitationCount": 49,
                "CitationCountCrossRef": 27,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 334,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1793,
                "i": [
                    1793
                ]
            }
        },
        {
            "name": "Alyn P. Rockwood",
            "value": 35,
            "numPapers": 1,
            "cluster": "11",
            "visible": 1,
            "index": 1578,
            "x": -19.050300569982628,
            "y": -396.84642627620235,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "Visualization of higher order singularities in vector fields",
                "DOI": "10.1109/visual.1997.663858",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663858",
                "FirstPage": 67,
                "LastPage": 74,
                "PaperType": "C",
                "Abstract": "Presents an algorithm for the visualization of vector field topology based on Clifford algebra. It allows the detection of higher-order singularities. This is accomplished by first analysing the possible critical points and then choosing a suitable polynomial approximation, because conventional methods based on piecewise linear or bilinear approximation do not allow higher-order critical points and destroy the topology in such cases. The algorithm is still very fast, because of using linear approximation outside the areas with several critical points.",
                "AuthorNamesDeduped": "Gerik Scheuermann;Hans Hagen;Heinz Krüger;Martin Menzel;Alyn P. Rockwood",
                "AuthorNames": "G. Scheuermann;H. Hagen;H. Kruger;M. Menzel;A. Rockwood",
                "AuthorAffiliation": "Department of Computer Science, University of Kaiserslautern, Germany;Department of Computer Science, University of Kaiserslautern, Germany;Department of Physics, University of Kaiserslautern, Germany;Department of Physics, University of Kaiserslautern, Germany;Department of Computer Science, Arizona State University, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 110,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 7,
                "DownloadsXplore": 150,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3254,
                "i": [
                    3254
                ]
            }
        },
        {
            "name": "Lambertus Hesselink",
            "value": 151,
            "numPapers": 5,
            "cluster": "11",
            "visible": 1,
            "index": 1579,
            "x": 282.2023549767708,
            "y": 279.84251079055997,
            "vy": 0,
            "vx": 0,
            "r": 1.1738629821531377,
            "node": {
                "Conference": "Vis",
                "Year": 1990,
                "Title": "Surface representations of two- and three-dimensional fluid flow topology",
                "DOI": "10.1109/visual.1990.146359",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1990.146359",
                "FirstPage": 6,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "The use of critical point analysis to generate representations of the vector field topology of numerical flow data sets is discussed. Critical points are located and characterized in a two-dimensional domain, which may be either a two-dimensional flow field or the tangential velocity field near a three-dimensional body. Tangent curves are then integrated out along the principal directions of certain classes of critical points. The points and curves are linked to form a skeleton representing the two-dimensional vector field topology. When generated from the tangential velocity field near a body in a three-dimensional flow, the skeleton includes the critical points and curves which provide a basis for analyzing the three-dimensional structure of the flow separation.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "James Helman;Lambertus Hesselink",
                "AuthorNames": "J.L. Helman;L. Hesselink",
                "AuthorAffiliation": "Department of Applied Physics, University of Stanford, Stanford, CA, USA;Departments of Aeronautics/Astronautics and Electrical Engineering, University of Stanford, Stanford, CA, USA",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 115,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 488,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3705,
                "i": [
                    3705
                ]
            }
        },
        {
            "name": "Ramsay Dyer",
            "value": 16,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1580,
            "x": -397.243824027434,
            "y": -15.727182584972534,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Linear and cubic box splines for the body centered cubic lattice",
                "DOI": "10.1109/visual.2004.65",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.65",
                "FirstPage": 11,
                "LastPage": 18,
                "PaperType": "C",
                "Abstract": "We derive piecewise linear and piecewise cubic box spline reconstruction filters for data sampled on the body centered cubic (BCC) lattice. We analytically derive a time domain representation of these reconstruction filters and using the Fourier slice-projection theorem we derive their frequency responses. The quality of these filters, when used in reconstructing BCC sampled volumetric data, is discussed and is demonstrated with a raycaster. Moreover, to demonstrate the superiority of the BCC sampling, the resulting reconstructions are compared with those produced from similar filters applied to data sampled on the Cartesian lattice.",
                "AuthorNamesDeduped": "Alireza Entezari;Ramsay Dyer;Torsten Möller",
                "AuthorNames": "A. Entezari;R. Dyer;T. Moller",
                "AuthorAffiliation": "Graphics, Usability, and Visualization (GrUVi) Laboratory, Simon Fraser University, Canada;Graphics, Usability, and Visualization (GrUVi) Laboratory, Simon Fraser University, Canada;Graphics, Usability, and Visualization (GrUVi) Laboratory, Simon Fraser University, Canada",
                "InternalReferences": "0.1109/visual.1993.398851;10.1109/visual.2001.964498;10.1109/visual.1997.663848;10.1109/visual.1994.346331;10.1109/visual.2001.964499",
                "AuthorKeywords": "Body Centered Cubic Lattice, Reconstruction, Optimal Regular Sampling",
                "AminerCitationCount": 91,
                "CitationCountCrossRef": 28,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 172,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2530,
                "i": [
                    2530
                ]
            }
        },
        {
            "name": "Alfred Kobsa",
            "value": 51,
            "numPapers": 7,
            "cluster": "5",
            "visible": 1,
            "index": 1581,
            "x": 303.6348030461949,
            "y": -256.81882014194053,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "User Experiments with Tree Visualization Systems",
                "DOI": "10.1109/infvis.2004.70",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.70",
                "FirstPage": 9,
                "LastPage": 16,
                "PaperType": "C",
                "Abstract": "This paper describes a comparative experiment with five well-known tree visualization systems, and Windows Explorer as a baseline system. Subjects performed tasks relating to the structure of a directory hierarchy, and to attributes of files and directories. Task completion times, correctness and user satisfaction were measured, and video recordings of subjects' interaction with the systems were made. Significant system and task type effects and an interaction between system and task type were found. Qualitative analyses of the video recordings were thereupon conducted to determine reasons for the observed differences, resulting in several findings and design recommendations as well as implications for future experiments with tree visualization systems",
                "AuthorNamesDeduped": "Alfred Kobsa",
                "AuthorNames": "A. Kobsa",
                "AuthorAffiliation": "University of California, Irvine, USA",
                "InternalReferences": "0.1109/visual.1991.175815;10.1109/infvis.2002.1173148;10.1109/infvis.2001.963285;10.1109/infvis.1999.801860;10.1109/infvis.2001.963289;10.1109/infvis.2001.963290;10.1109/infvis.2002.1173153",
                "AuthorKeywords": "information visualization, experimental comparison, task performance, accuracy, user satisfaction, user interaction, design recommendations",
                "AminerCitationCount": 168,
                "CitationCountCrossRef": 46,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 694,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2461,
                "i": [
                    2461
                ]
            }
        },
        {
            "name": "Guy Melançon",
            "value": 42,
            "numPapers": 9,
            "cluster": "4",
            "visible": 1,
            "index": 1582,
            "x": -50.42816926685526,
            "y": 394.5972627178164,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "InfoVis",
                "Year": 2003,
                "Title": "Multiscale Visualization of Small World Networks",
                "DOI": "10.1109/infvis.2003.1249011",
                "Link": "http://doi.ieeecomputersociety.org/10.1109/INFVIS.2003.1249011",
                "FirstPage": 75,
                "LastPage": 84,
                "PaperType": "C",
                "Abstract": "Many networks under study in information visualization are \"small world\" networks. These networks first appeared in the study of social networks and were shown to be relevant models in other application domains such as software reverse engineering and biology. Furthermore, many of these networks actually have a multiscale nature: they can be viewed as a network of groups that are themselves small world networks. We describe a metric that has been designed in order to identify the weakest edges in a small world network leading to an easy and low cost filtering procedure that breaks up a graph into smaller and highly connected components. We show how this metric can be exploited through an interactive navigation of the network based on semantic zooming. Once the network is decomposed into a hierarchy of sub-networks, a user can easily find groups and subgroups of actors and understand their dynamics.",
                "AuthorNamesDeduped": "David Auber;Yves Chiricota;Fabien Jourdan;Guy Melançon",
                "AuthorNames": "D. Auber;Y. Chiricota;F. Jourdan;G. Melancon",
                "AuthorAffiliation": "LaBRI, Bordeaux, France;Univ. Québec à Chicoutimi, Canada;LIRMM, Montpellier, France;LIRMM, Montpellier, France",
                "InternalReferences": null,
                "AuthorKeywords": "Small world networks, multiscale graphs,clustering metric, semantic zooming",
                "AminerCitationCount": 310,
                "CitationCountCrossRef": 91,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 434,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2624,
                "i": [
                    2624
                ]
            }
        },
        {
            "name": "Eric T. Ahrens",
            "value": 96,
            "numPapers": 0,
            "cluster": "11",
            "visible": 1,
            "index": 1583,
            "x": -229.43491545873204,
            "y": -325.1301578882595,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Visualizing diffusion tensor images of the mouse spinal cord",
                "DOI": "10.1109/visual.1998.745294",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745294",
                "FirstPage": 127,
                "LastPage": 134,
                "PaperType": "C",
                "Abstract": "Within biological systems, water molecules undergo continuous stochastic Brownian motion. The diffusion rate can give clues to the structure of the underlying tissues. In some tissues, the rate is anisotropic. Diffusion-rate images can be calculated from diffusion-weighted MRI. A 2D diffusion tensor image (DTI) and an associated anatomical scalar field define seven values at each spatial location. We present two new methods for visually representing DTIs. The first method displays an array of ellipsoids, where the shape of each ellipsoid represents one tensor value. The ellipsoids are all normalized to approximately the same size so that they can be displayed simultaneously in context. The second method uses concepts from oil painting to represent the seven-valued data with multiple layers of varying brush strokes. Both methods successfully display most or all of the information in DTIs and provide exploratory methods for understanding them. The ellipsoid method has a simpler interpretation and explanation than the painting-motivated method; the painting-motivated method displays more of the information and is easier to read quantatively. We demonstrate the methods on images of the mouse spinal cord. The visualizations show significant differences between spinal cords from mice suffering from experimental allergic encephalomyelitis and spinal cords from wild-type mice. The differences are consistent with differences shown histologically and suggest that our new non-invasive imaging methodology and visualization of the results could have early diagnostic value for neurodegenerative diseases.",
                "AuthorNamesDeduped": "David H. Laidlaw;Eric T. Ahrens;David Kremers;Matthew J. Avalos;Russell E. Jacobs;Carol Readhead",
                "AuthorNames": "D.H. Laidlaw;E.T. Ahrens;D. Kremers;M.J. Avalos;R.E. Jacobs;C. Readhead",
                "AuthorAffiliation": "California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;Cedars Sinai Medical Center, Los Angeles, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235201",
                "AuthorKeywords": "multi-valued visualization, tensor field visualization,oil painting",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3146,
                "i": [
                    3146
                ]
            }
        },
        {
            "name": "David Kremers",
            "value": 96,
            "numPapers": 0,
            "cluster": "11",
            "visible": 1,
            "index": 1584,
            "x": 388.92317901417033,
            "y": 84.7865604061847,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Visualizing diffusion tensor images of the mouse spinal cord",
                "DOI": "10.1109/visual.1998.745294",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745294",
                "FirstPage": 127,
                "LastPage": 134,
                "PaperType": "C",
                "Abstract": "Within biological systems, water molecules undergo continuous stochastic Brownian motion. The diffusion rate can give clues to the structure of the underlying tissues. In some tissues, the rate is anisotropic. Diffusion-rate images can be calculated from diffusion-weighted MRI. A 2D diffusion tensor image (DTI) and an associated anatomical scalar field define seven values at each spatial location. We present two new methods for visually representing DTIs. The first method displays an array of ellipsoids, where the shape of each ellipsoid represents one tensor value. The ellipsoids are all normalized to approximately the same size so that they can be displayed simultaneously in context. The second method uses concepts from oil painting to represent the seven-valued data with multiple layers of varying brush strokes. Both methods successfully display most or all of the information in DTIs and provide exploratory methods for understanding them. The ellipsoid method has a simpler interpretation and explanation than the painting-motivated method; the painting-motivated method displays more of the information and is easier to read quantatively. We demonstrate the methods on images of the mouse spinal cord. The visualizations show significant differences between spinal cords from mice suffering from experimental allergic encephalomyelitis and spinal cords from wild-type mice. The differences are consistent with differences shown histologically and suggest that our new non-invasive imaging methodology and visualization of the results could have early diagnostic value for neurodegenerative diseases.",
                "AuthorNamesDeduped": "David H. Laidlaw;Eric T. Ahrens;David Kremers;Matthew J. Avalos;Russell E. Jacobs;Carol Readhead",
                "AuthorNames": "D.H. Laidlaw;E.T. Ahrens;D. Kremers;M.J. Avalos;R.E. Jacobs;C. Readhead",
                "AuthorAffiliation": "California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;Cedars Sinai Medical Center, Los Angeles, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235201",
                "AuthorKeywords": "multi-valued visualization, tensor field visualization,oil painting",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3146,
                "i": [
                    3146
                ]
            }
        },
        {
            "name": "Matthew J. Avalos",
            "value": 96,
            "numPapers": 0,
            "cluster": "11",
            "visible": 1,
            "index": 1585,
            "x": -344.1608977941823,
            "y": 200.25802463197934,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Visualizing diffusion tensor images of the mouse spinal cord",
                "DOI": "10.1109/visual.1998.745294",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745294",
                "FirstPage": 127,
                "LastPage": 134,
                "PaperType": "C",
                "Abstract": "Within biological systems, water molecules undergo continuous stochastic Brownian motion. The diffusion rate can give clues to the structure of the underlying tissues. In some tissues, the rate is anisotropic. Diffusion-rate images can be calculated from diffusion-weighted MRI. A 2D diffusion tensor image (DTI) and an associated anatomical scalar field define seven values at each spatial location. We present two new methods for visually representing DTIs. The first method displays an array of ellipsoids, where the shape of each ellipsoid represents one tensor value. The ellipsoids are all normalized to approximately the same size so that they can be displayed simultaneously in context. The second method uses concepts from oil painting to represent the seven-valued data with multiple layers of varying brush strokes. Both methods successfully display most or all of the information in DTIs and provide exploratory methods for understanding them. The ellipsoid method has a simpler interpretation and explanation than the painting-motivated method; the painting-motivated method displays more of the information and is easier to read quantatively. We demonstrate the methods on images of the mouse spinal cord. The visualizations show significant differences between spinal cords from mice suffering from experimental allergic encephalomyelitis and spinal cords from wild-type mice. The differences are consistent with differences shown histologically and suggest that our new non-invasive imaging methodology and visualization of the results could have early diagnostic value for neurodegenerative diseases.",
                "AuthorNamesDeduped": "David H. Laidlaw;Eric T. Ahrens;David Kremers;Matthew J. Avalos;Russell E. Jacobs;Carol Readhead",
                "AuthorNames": "D.H. Laidlaw;E.T. Ahrens;D. Kremers;M.J. Avalos;R.E. Jacobs;C. Readhead",
                "AuthorAffiliation": "California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;Cedars Sinai Medical Center, Los Angeles, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235201",
                "AuthorKeywords": "multi-valued visualization, tensor field visualization,oil painting",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3146,
                "i": [
                    3146
                ]
            }
        },
        {
            "name": "Russell E. Jacobs",
            "value": 96,
            "numPapers": 0,
            "cluster": "11",
            "visible": 1,
            "index": 1586,
            "x": 118.53854749533899,
            "y": -380.2612427761937,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Visualizing diffusion tensor images of the mouse spinal cord",
                "DOI": "10.1109/visual.1998.745294",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745294",
                "FirstPage": 127,
                "LastPage": 134,
                "PaperType": "C",
                "Abstract": "Within biological systems, water molecules undergo continuous stochastic Brownian motion. The diffusion rate can give clues to the structure of the underlying tissues. In some tissues, the rate is anisotropic. Diffusion-rate images can be calculated from diffusion-weighted MRI. A 2D diffusion tensor image (DTI) and an associated anatomical scalar field define seven values at each spatial location. We present two new methods for visually representing DTIs. The first method displays an array of ellipsoids, where the shape of each ellipsoid represents one tensor value. The ellipsoids are all normalized to approximately the same size so that they can be displayed simultaneously in context. The second method uses concepts from oil painting to represent the seven-valued data with multiple layers of varying brush strokes. Both methods successfully display most or all of the information in DTIs and provide exploratory methods for understanding them. The ellipsoid method has a simpler interpretation and explanation than the painting-motivated method; the painting-motivated method displays more of the information and is easier to read quantatively. We demonstrate the methods on images of the mouse spinal cord. The visualizations show significant differences between spinal cords from mice suffering from experimental allergic encephalomyelitis and spinal cords from wild-type mice. The differences are consistent with differences shown histologically and suggest that our new non-invasive imaging methodology and visualization of the results could have early diagnostic value for neurodegenerative diseases.",
                "AuthorNamesDeduped": "David H. Laidlaw;Eric T. Ahrens;David Kremers;Matthew J. Avalos;Russell E. Jacobs;Carol Readhead",
                "AuthorNames": "D.H. Laidlaw;E.T. Ahrens;D. Kremers;M.J. Avalos;R.E. Jacobs;C. Readhead",
                "AuthorAffiliation": "California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;Cedars Sinai Medical Center, Los Angeles, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235201",
                "AuthorKeywords": "multi-valued visualization, tensor field visualization,oil painting",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3146,
                "i": [
                    3146
                ]
            }
        },
        {
            "name": "Carol Readhead",
            "value": 96,
            "numPapers": 0,
            "cluster": "11",
            "visible": 1,
            "index": 1587,
            "x": 169.5095402453838,
            "y": 360.57803006533646,
            "vy": 0,
            "vx": 0,
            "r": 1.1105354058721935,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Visualizing diffusion tensor images of the mouse spinal cord",
                "DOI": "10.1109/visual.1998.745294",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745294",
                "FirstPage": 127,
                "LastPage": 134,
                "PaperType": "C",
                "Abstract": "Within biological systems, water molecules undergo continuous stochastic Brownian motion. The diffusion rate can give clues to the structure of the underlying tissues. In some tissues, the rate is anisotropic. Diffusion-rate images can be calculated from diffusion-weighted MRI. A 2D diffusion tensor image (DTI) and an associated anatomical scalar field define seven values at each spatial location. We present two new methods for visually representing DTIs. The first method displays an array of ellipsoids, where the shape of each ellipsoid represents one tensor value. The ellipsoids are all normalized to approximately the same size so that they can be displayed simultaneously in context. The second method uses concepts from oil painting to represent the seven-valued data with multiple layers of varying brush strokes. Both methods successfully display most or all of the information in DTIs and provide exploratory methods for understanding them. The ellipsoid method has a simpler interpretation and explanation than the painting-motivated method; the painting-motivated method displays more of the information and is easier to read quantatively. We demonstrate the methods on images of the mouse spinal cord. The visualizations show significant differences between spinal cords from mice suffering from experimental allergic encephalomyelitis and spinal cords from wild-type mice. The differences are consistent with differences shown histologically and suggest that our new non-invasive imaging methodology and visualization of the results could have early diagnostic value for neurodegenerative diseases.",
                "AuthorNamesDeduped": "David H. Laidlaw;Eric T. Ahrens;David Kremers;Matthew J. Avalos;Russell E. Jacobs;Carol Readhead",
                "AuthorNames": "D.H. Laidlaw;E.T. Ahrens;D. Kremers;M.J. Avalos;R.E. Jacobs;C. Readhead",
                "AuthorAffiliation": "California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;California Institute of Technology, Pasadena, CA, USA;Cedars Sinai Medical Center, Los Angeles, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235201",
                "AuthorKeywords": "multi-valued visualization, tensor field visualization,oil painting",
                "AminerCitationCount": 223,
                "CitationCountCrossRef": 61,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 281,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3146,
                "i": [
                    3146
                ]
            }
        },
        {
            "name": "Thierry Delmarcelle",
            "value": 90,
            "numPapers": 3,
            "cluster": "11",
            "visible": 1,
            "index": 1588,
            "x": -368.67408213849905,
            "y": -151.42463854781133,
            "vy": 0,
            "vx": 0,
            "r": 1.1036269430051813,
            "node": {
                "Conference": "Vis",
                "Year": 1994,
                "Title": "The topology of symmetric, second-order tensor fields",
                "DOI": "10.1109/visual.1994.346326",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1994.346326",
                "FirstPage": 140,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "We study the topology of symmetric, second-order tensor fields. The goal is to represent their complex structure by a simple set of carefully chosen points and lines analogous to vector field topology. We extract topological skeletons of the eigenvector fields, and we track their evolution over time. We study tensor topological transitions and correlate tensor and vector data. The basic constituents of tensor topology are the degenerate points, or points where eigenvalues are equal to each other. Degenerate points play a similar role as critical points in vector fields. We identify two kinds of elementary degenerate points, which we call wedges and trisectors. They can combine to form more familiar singularities-such as saddles, nodes, centers, or foci. However, these are generally unstable structures in tensor fields. Finally, we show a topological rule that puts a constraint on the topology of tensor fields defined across surfaces, extending to tensor fields the Poincare-Hopf theorem for vector fields.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Thierry Delmarcelle;Lambertus Hesselink",
                "AuthorNames": "T. Delmarcelle;L. Hesselink",
                "AuthorAffiliation": "Department of Applied Physics, University of Stanford, Stanford, CA, USA;Department of Electrical Engineering, University of Stanford, Stanford, CA, USA",
                "InternalReferences": "0.1109/visual.1991.175773",
                "AuthorKeywords": null,
                "AminerCitationCount": 229,
                "CitationCountCrossRef": 58,
                "PubsCitedCrossRef": 11,
                "DownloadsXplore": 310,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3472,
                "i": [
                    3472
                ]
            }
        },
        {
            "name": "Samuel Gerber",
            "value": 79,
            "numPapers": 5,
            "cluster": "11",
            "visible": 1,
            "index": 1589,
            "x": 374.25241295515985,
            "y": -137.42318362358102,
            "vy": 0,
            "vx": 0,
            "r": 1.0909614277489925,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Visual Exploration of High Dimensional Scalar Functions",
                "DOI": "10.1109/tvcg.2010.213",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.213",
                "FirstPage": 1271,
                "LastPage": 1280,
                "PaperType": "J",
                "Abstract": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.",
                "AuthorNamesDeduped": "Samuel Gerber;Peer-Timo Bremer;Valerio Pascucci;Ross T. Whitaker",
                "AuthorNames": "Samuel Gerber;Peer-Timo Bremer;Valerio Pascucci;Ross Whitaker",
                "AuthorAffiliation": "Scientific Computing and Imaging Institute, University of Utah, USA;Center of Applied Scientific Computing CASC, Lawrence Livemore National Laboratory, USA;Scientific Computing and Imaging Institute, University of Utah, USA;Scientific Computing and Imaging Institute, University of Utah, USA",
                "InternalReferences": "0.1109/visual.2004.96;10.1109/tvcg.2007.70603;10.1109/tvcg.2006.186;10.1109/tvcg.2007.70552;10.1109/tvcg.2007.70601;10.1109/visual.2005.1532839",
                "AuthorKeywords": "Morse theory, High-dimensional visualization, Morse-Smale complex",
                "AminerCitationCount": 127,
                "CitationCountCrossRef": 63,
                "PubsCitedCrossRef": 60,
                "DownloadsXplore": 1445,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1773,
                "i": [
                    1773
                ]
            }
        },
        {
            "name": "Maxim Lazarov",
            "value": 5,
            "numPapers": 8,
            "cluster": "7",
            "visible": 1,
            "index": 1590,
            "x": -183.19165349788412,
            "y": 354.2468321505658,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "A Scalable Distributed Paradigm for Multi-User Interaction with Tiled Rear Projection Display Walls",
                "DOI": "10.1109/tvcg.2010.128",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.128",
                "FirstPage": 1623,
                "LastPage": 1632,
                "PaperType": "J",
                "Abstract": "We present the first distributed paradigm for multiple users to interact simultaneously with large tiled rear projection display walls. Unlike earlier works, our paradigm allows easy scalability across different applications, interaction modalities, displays and users. The novelty of the design lies in its distributed nature allowing well-compartmented, application independent, and application specific modules. This enables adapting to different 2D applications and interaction modalities easily by changing a few application specific modules. We demonstrate four challenging 2D applications on a nine projector display to demonstrate the application scalability of our method: map visualization, virtual graffiti, virtual bulletin board and an emergency management system. We demonstrate the scalability of our method to multiple interaction modalities by showing both gesture-based and laser-based user interfaces. Finally, we improve earlier distributed methods to register multiple projectors. Previous works need multiple patterns to identify the neighbors, the configuration of the display and the registration across multiple projectors in logarithmic time with respect to the number of projectors in the display. We propose a new approach that achieves this using a single pattern based on specially augmented QR codes in constant time. Further, previous distributed registration algorithms are prone to large misregistrations. We propose a novel radially cascading geometric registration technique that yields significantly better accuracy. Thus, our improvements allow a significantly more efficient and accurate technique for distributed self-registration of multi-projector display walls.",
                "AuthorNamesDeduped": "Pablo Roman;Maxim Lazarov;Aditi Majumder",
                "AuthorNames": "Pablo Roman;Maxim Lazarov;Aditi Majumder",
                "AuthorAffiliation": "Computer Science Department, University of California, Irvine, USA;Computer Science Department, University of California, Irvine, USA;Computer Science Department, University of California, Irvine, USA",
                "InternalReferences": "0.1109/tvcg.2006.121;10.1109/tvcg.2007.70586;10.1109/visual.2002.1183793;10.1109/tvcg.2009.124",
                "AuthorKeywords": "Tiled Displays, Human-Computer Interaction, Gesture-Based Interaction, Multi-user interaction, Distributed algorithms",
                "AminerCitationCount": 45,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 38,
                "DownloadsXplore": 729,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1789,
                "i": [
                    1789
                ]
            }
        },
        {
            "name": "Aditi Majumder",
            "value": 48,
            "numPapers": 27,
            "cluster": "7",
            "visible": 1,
            "index": 1591,
            "x": -104.24322801677751,
            "y": -385.07577100181487,
            "vy": 0,
            "vx": 0,
            "r": 1.0552677029360968,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Registration Techniques for Using Imperfect and Par tially Calibrated Devices in Planar Multi-Projector Displays",
                "DOI": "10.1109/tvcg.2007.70586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70586",
                "FirstPage": 1368,
                "LastPage": 1375,
                "PaperType": "J",
                "Abstract": "Multi-projector displays today are automatically registered, both geometrically and photometrically, using cameras. Existing registration techniques assume pre-calibrated projectors and cameras that are devoid of imperfections such as lens distortion. In practice, however, these devices are usually imperfect and uncalibrated. Registration of each of these devices is often more challenging than the multi-projector display registration itself. To make tiled projection-based displays accessible to a layman user we should allow the use of uncalibrated inexpensive devices that are prone to imperfections. In this paper, we make two important advances in this direction. First, we present a new geometric registration technique that can achieve geometric alignment in the presence of severe projector lens distortion using a relatively inexpensive low-resolution camera. This is achieved via a closed-form model that relates the projectors to cameras, in planar multi-projector displays, using rational Bezier patches. This enables us to geometrically calibrate a 3000 times 2500 resolution planar multi-projector display made of 3 times 3 array of nine severely distorted projectors using a low resolution (640 times 480) VGA camera. Second, we present a photometric self-calibration technique for a projector-camera pair. This allows us to photometrically calibrate the same display made of nine projectors using a photometrically uncalibrated camera. To the best of our knowledge, this is the first work that allows geometrically imperfect projectors and photometrically uncalibrated cameras in calibrating multi-projector displays.",
                "AuthorNamesDeduped": "Ezekiel S. Bhasker;Ray Juang;Aditi Majumder",
                "AuthorNames": "Ezekiel Bhasker;Ray Juang;Aditi Majumder",
                "AuthorAffiliation": "University of California, Irvine, USA;University of California, Irvine, USA;University of California, Irvine, USA",
                "InternalReferences": "0.1109/visual.2001.964508;10.1109/tvcg.2006.121;10.1109/visual.2000.885685;10.1109/visual.2002.1183793;10.1109/visual.2000.885684;10.1109/visual.1999.809883",
                "AuthorKeywords": "Geometric calibration, photometric calibration, tiled displays",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 351,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2197,
                "i": [
                    2197
                ]
            }
        },
        {
            "name": "Ezekiel S. Bhasker",
            "value": 22,
            "numPapers": 9,
            "cluster": "7",
            "visible": 1,
            "index": 1592,
            "x": 337.0865102159112,
            "y": 213.59467369402833,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Registration Techniques for Using Imperfect and Par tially Calibrated Devices in Planar Multi-Projector Displays",
                "DOI": "10.1109/tvcg.2007.70586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70586",
                "FirstPage": 1368,
                "LastPage": 1375,
                "PaperType": "J",
                "Abstract": "Multi-projector displays today are automatically registered, both geometrically and photometrically, using cameras. Existing registration techniques assume pre-calibrated projectors and cameras that are devoid of imperfections such as lens distortion. In practice, however, these devices are usually imperfect and uncalibrated. Registration of each of these devices is often more challenging than the multi-projector display registration itself. To make tiled projection-based displays accessible to a layman user we should allow the use of uncalibrated inexpensive devices that are prone to imperfections. In this paper, we make two important advances in this direction. First, we present a new geometric registration technique that can achieve geometric alignment in the presence of severe projector lens distortion using a relatively inexpensive low-resolution camera. This is achieved via a closed-form model that relates the projectors to cameras, in planar multi-projector displays, using rational Bezier patches. This enables us to geometrically calibrate a 3000 times 2500 resolution planar multi-projector display made of 3 times 3 array of nine severely distorted projectors using a low resolution (640 times 480) VGA camera. Second, we present a photometric self-calibration technique for a projector-camera pair. This allows us to photometrically calibrate the same display made of nine projectors using a photometrically uncalibrated camera. To the best of our knowledge, this is the first work that allows geometrically imperfect projectors and photometrically uncalibrated cameras in calibrating multi-projector displays.",
                "AuthorNamesDeduped": "Ezekiel S. Bhasker;Ray Juang;Aditi Majumder",
                "AuthorNames": "Ezekiel Bhasker;Ray Juang;Aditi Majumder",
                "AuthorAffiliation": "University of California, Irvine, USA;University of California, Irvine, USA;University of California, Irvine, USA",
                "InternalReferences": "0.1109/visual.2001.964508;10.1109/tvcg.2006.121;10.1109/visual.2000.885685;10.1109/visual.2002.1183793;10.1109/visual.2000.885684;10.1109/visual.1999.809883",
                "AuthorKeywords": "Geometric calibration, photometric calibration, tiled displays",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 351,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2197,
                "i": [
                    2197
                ]
            }
        },
        {
            "name": "Ray Juang",
            "value": 12,
            "numPapers": 5,
            "cluster": "7",
            "visible": 1,
            "index": 1593,
            "x": -392.96155158681347,
            "y": 70.22263861807043,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Registration Techniques for Using Imperfect and Par tially Calibrated Devices in Planar Multi-Projector Displays",
                "DOI": "10.1109/tvcg.2007.70586",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70586",
                "FirstPage": 1368,
                "LastPage": 1375,
                "PaperType": "J",
                "Abstract": "Multi-projector displays today are automatically registered, both geometrically and photometrically, using cameras. Existing registration techniques assume pre-calibrated projectors and cameras that are devoid of imperfections such as lens distortion. In practice, however, these devices are usually imperfect and uncalibrated. Registration of each of these devices is often more challenging than the multi-projector display registration itself. To make tiled projection-based displays accessible to a layman user we should allow the use of uncalibrated inexpensive devices that are prone to imperfections. In this paper, we make two important advances in this direction. First, we present a new geometric registration technique that can achieve geometric alignment in the presence of severe projector lens distortion using a relatively inexpensive low-resolution camera. This is achieved via a closed-form model that relates the projectors to cameras, in planar multi-projector displays, using rational Bezier patches. This enables us to geometrically calibrate a 3000 times 2500 resolution planar multi-projector display made of 3 times 3 array of nine severely distorted projectors using a low resolution (640 times 480) VGA camera. Second, we present a photometric self-calibration technique for a projector-camera pair. This allows us to photometrically calibrate the same display made of nine projectors using a photometrically uncalibrated camera. To the best of our knowledge, this is the first work that allows geometrically imperfect projectors and photometrically uncalibrated cameras in calibrating multi-projector displays.",
                "AuthorNamesDeduped": "Ezekiel S. Bhasker;Ray Juang;Aditi Majumder",
                "AuthorNames": "Ezekiel Bhasker;Ray Juang;Aditi Majumder",
                "AuthorAffiliation": "University of California, Irvine, USA;University of California, Irvine, USA;University of California, Irvine, USA",
                "InternalReferences": "0.1109/visual.2001.964508;10.1109/tvcg.2006.121;10.1109/visual.2000.885685;10.1109/visual.2002.1183793;10.1109/visual.2000.885684;10.1109/visual.1999.809883",
                "AuthorKeywords": "Geometric calibration, photometric calibration, tiled displays",
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 29,
                "PubsCitedCrossRef": 39,
                "DownloadsXplore": 351,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2197,
                "i": [
                    2197
                ]
            }
        },
        {
            "name": "Han Chen",
            "value": 15,
            "numPapers": 2,
            "cluster": "7",
            "visible": 1,
            "index": 1594,
            "x": 242.39893047238618,
            "y": -317.3212229048718,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Scalable alignment of large-format multi-projector displays using camera homography trees",
                "DOI": "10.1109/visual.2002.1183793",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183793",
                "FirstPage": 339,
                "LastPage": 346,
                "PaperType": "C",
                "Abstract": "This paper presents a vision-based geometric alignment system for aligning the projectors in an arbitrarily large display wall. Existing algorithms typically rely on a single camera view and degrade in accuracy as the display resolution exceeds the camera resolution by several orders of magnitude. Naive approaches to integrating multiple zoomed camera views fail since small errors in aligning adjacent views propagate quickly over the display surface to create glaring discontinuities. Our algorithm builds and refines a camera homography tree to automatically register any number of uncalibrated camera images; the resulting system is both faster and significantly more accurate than competing approaches, reliably achieving alignment errors of 0.55 pixels on a 24-projector display in under 9 minutes. Detailed experiments compare our system to two recent display wall alignment algorithms, both on our 18 Megapixel display wall and in simulation. These results indicate that our approach achieves sub-pixel accuracy even on displays with hundreds of projectors.",
                "AuthorNamesDeduped": "Han Chen;Rahul Sukthankar;Grant Wallace;Kai Li 0001",
                "AuthorNames": "Han Chen;R. Sukthankar;G. Wallace;Kai Li",
                "AuthorAffiliation": "Computer Science, Princeton University, USA;HP Laboratories CRL, The Robotics Institute, CMU, USA;Computer Science, Princeton University, USA;Computer Science, Princeton University, USA",
                "InternalReferences": "0.1109/visual.1999.809883;10.1109/visual.2001.964508;10.1109/visual.2000.885685",
                "AuthorKeywords": "large-format tiled projection display, display wall, camera-projector systems, camera-based registration and calibration, automatic alignment, scalability, simulation, evaluation",
                "AminerCitationCount": 238,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 13,
                "DownloadsXplore": 367,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2767,
                "i": [
                    2767
                ]
            }
        },
        {
            "name": "Rahul Sukthankar",
            "value": 15,
            "numPapers": 2,
            "cluster": "7",
            "visible": 1,
            "index": 1595,
            "x": 35.621143578631724,
            "y": 397.84561594939123,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Scalable alignment of large-format multi-projector displays using camera homography trees",
                "DOI": "10.1109/visual.2002.1183793",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183793",
                "FirstPage": 339,
                "LastPage": 346,
                "PaperType": "C",
                "Abstract": "This paper presents a vision-based geometric alignment system for aligning the projectors in an arbitrarily large display wall. Existing algorithms typically rely on a single camera view and degrade in accuracy as the display resolution exceeds the camera resolution by several orders of magnitude. Naive approaches to integrating multiple zoomed camera views fail since small errors in aligning adjacent views propagate quickly over the display surface to create glaring discontinuities. Our algorithm builds and refines a camera homography tree to automatically register any number of uncalibrated camera images; the resulting system is both faster and significantly more accurate than competing approaches, reliably achieving alignment errors of 0.55 pixels on a 24-projector display in under 9 minutes. Detailed experiments compare our system to two recent display wall alignment algorithms, both on our 18 Megapixel display wall and in simulation. These results indicate that our approach achieves sub-pixel accuracy even on displays with hundreds of projectors.",
                "AuthorNamesDeduped": "Han Chen;Rahul Sukthankar;Grant Wallace;Kai Li 0001",
                "AuthorNames": "Han Chen;R. Sukthankar;G. Wallace;Kai Li",
                "AuthorAffiliation": "Computer Science, Princeton University, USA;HP Laboratories CRL, The Robotics Institute, CMU, USA;Computer Science, Princeton University, USA;Computer Science, Princeton University, USA",
                "InternalReferences": "0.1109/visual.1999.809883;10.1109/visual.2001.964508;10.1109/visual.2000.885685",
                "AuthorKeywords": "large-format tiled projection display, display wall, camera-projector systems, camera-based registration and calibration, automatic alignment, scalability, simulation, evaluation",
                "AminerCitationCount": 238,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 13,
                "DownloadsXplore": 367,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2767,
                "i": [
                    2767
                ]
            }
        },
        {
            "name": "Grant Wallace",
            "value": 15,
            "numPapers": 2,
            "cluster": "7",
            "visible": 1,
            "index": 1596,
            "x": -295.09921001502886,
            "y": -269.3816182453916,
            "vy": 0,
            "vx": 0,
            "r": 1.0172711571675301,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Scalable alignment of large-format multi-projector displays using camera homography trees",
                "DOI": "10.1109/visual.2002.1183793",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183793",
                "FirstPage": 339,
                "LastPage": 346,
                "PaperType": "C",
                "Abstract": "This paper presents a vision-based geometric alignment system for aligning the projectors in an arbitrarily large display wall. Existing algorithms typically rely on a single camera view and degrade in accuracy as the display resolution exceeds the camera resolution by several orders of magnitude. Naive approaches to integrating multiple zoomed camera views fail since small errors in aligning adjacent views propagate quickly over the display surface to create glaring discontinuities. Our algorithm builds and refines a camera homography tree to automatically register any number of uncalibrated camera images; the resulting system is both faster and significantly more accurate than competing approaches, reliably achieving alignment errors of 0.55 pixels on a 24-projector display in under 9 minutes. Detailed experiments compare our system to two recent display wall alignment algorithms, both on our 18 Megapixel display wall and in simulation. These results indicate that our approach achieves sub-pixel accuracy even on displays with hundreds of projectors.",
                "AuthorNamesDeduped": "Han Chen;Rahul Sukthankar;Grant Wallace;Kai Li 0001",
                "AuthorNames": "Han Chen;R. Sukthankar;G. Wallace;Kai Li",
                "AuthorAffiliation": "Computer Science, Princeton University, USA;HP Laboratories CRL, The Robotics Institute, CMU, USA;Computer Science, Princeton University, USA;Computer Science, Princeton University, USA",
                "InternalReferences": "0.1109/visual.1999.809883;10.1109/visual.2001.964508;10.1109/visual.2000.885685",
                "AuthorKeywords": "large-format tiled projection display, display wall, camera-projector systems, camera-based registration and calibration, automatic alignment, scalability, simulation, evaluation",
                "AminerCitationCount": 238,
                "CitationCountCrossRef": 55,
                "PubsCitedCrossRef": 13,
                "DownloadsXplore": 367,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2767,
                "i": [
                    2767
                ]
            }
        },
        {
            "name": "Kai Li 0001",
            "value": 34,
            "numPapers": 2,
            "cluster": "7",
            "visible": 1,
            "index": 1597,
            "x": 399.686759148219,
            "y": -0.7032507330998957,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Automatic alignment of high-resolution multi-projector displays using an uncalibrated camera",
                "DOI": "10.1109/visual.2000.885685",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885685",
                "FirstPage": 125,
                "LastPage": 130,
                "PaperType": "C",
                "Abstract": "A scalable, high-resolution display may be constructed by tiling many projected images over a single display surface. One fundamental challenge for such a display is to avoid visible seams due to misalignment among the projectors. Traditional methods for avoiding seams involve sophisticated mechanical devices and expensive CRT projectors, coupled with extensive human effort for fine-tuning the projectors. The paper describes an automatic alignment method that relies on an inexpensive, uncalibrated camera to measure the relative mismatches between neighboring projectors, and then correct the projected imagery to avoid seams without significant human effort.",
                "AuthorNamesDeduped": "Yuqun Chen;Douglas W. Clark;Adam Finkelstein;Timothy C. Housel;Kai Li 0001",
                "AuthorNames": "Yuqun Chen;D.W. Clark;A. Finkelstein;T.C. Housel;Kai Li",
                "AuthorAffiliation": "Department of Computer Science, Princeton University, USA;Department of Computer Science, Princeton University, USA;Department of Computer Science, Princeton University, USA;Department of Computer Science, Princeton University, USA;Department of Computer Science, Princeton University, USA",
                "InternalReferences": "0.1109/visual.1999.809883",
                "AuthorKeywords": "seamless tiling, automatic alignment, projective mapping, simulated annealing ",
                "AminerCitationCount": 157,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 237,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2989,
                "i": [
                    2989
                ]
            }
        },
        {
            "name": "Gregory M. Nielson",
            "value": 152,
            "numPapers": 12,
            "cluster": "11",
            "visible": 1,
            "index": 1598,
            "x": -294.33361810161637,
            "y": 270.58773300948417,
            "vy": 0,
            "vx": 0,
            "r": 1.1750143926309728,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Dual marching cubes",
                "DOI": "10.1109/visual.2004.28",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.28",
                "FirstPage": 489,
                "LastPage": 496,
                "PaperType": "C",
                "Abstract": "We present the definition and computational algorithms for a new class of surfaces which are dual to the isosurface produced by the widely used marching cubes (MC) algorithm. These new isosurfaces have the same separating properties as the MC surfaces but they are comprised of quad patches that tend to eliminate the common negative aspect of poorly shaped triangles of the MC isosurfaces. Based upon the concept of this new dual operator, we describe a simple, but rather effective iterative scheme for producing smooth separating surfaces for binary, enumerated volumes which are often produced by segmentation algorithms. Both the dual surface algorithm and the iterative smoothing scheme are easily implemented.",
                "AuthorNamesDeduped": "Gregory M. Nielson",
                "AuthorNames": "G.M. Nielson",
                "AuthorAffiliation": "Arizona State University, USA",
                "InternalReferences": "0.1109/visual.2002.1183807;10.1109/visual.1991.175782",
                "AuthorKeywords": "Marching Cubes, isosurfaces, triangular mesh, dual graph, segmented data, smoothing",
                "AminerCitationCount": 154,
                "CitationCountCrossRef": 48,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 686,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2516,
                "i": [
                    2516
                ]
            }
        },
        {
            "name": "Tom Bobach",
            "value": 42,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 1599,
            "x": 34.26377481754101,
            "y": -398.46705476770956,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "A tetrahedra-based stream surface algorithm",
                "DOI": "10.1109/visual.2001.964506",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964506",
                "FirstPage": 151,
                "LastPage": 158,
                "PaperType": "C",
                "Abstract": "This paper presents a new algorithm for the calculation of stream surfaces for tetrahedral grids. It propagates the surface through the tetrahedra, one at a time, calculating the intersections with the tetrahedral faces. The method allows us to incorporate topological information from the cells, e.g. critical points. The calculations are based on barycentric coordinates, since this simplifies the theory and the algorithm. The stream surfaces are ruled surfaces inside each cell, and their construction starts with line segments on the faces. Our method supports the analysis of velocity fields resulting from computational fluid dynamics (CFD) simulations.",
                "AuthorNamesDeduped": "Gerik Scheuermann;Tom Bobach;Hans Hagen;Karim Mahrous;Bernd Hamann;Kenneth I. Joy;Wolfgang Kollmann",
                "AuthorNames": "G. Scheuermann;T. Bobach;H. Hagen;K. Mahrous;B. Hamann;K.I. Joy;W. Kollmann",
                "AuthorAffiliation": "Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany and Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany;Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany;Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Department of mechanaical and Aeronanutical Engineering, University of California, Davis, CA",
                "InternalReferences": "0.1109/visual.1993.398875;10.1109/visual.1992.235211;10.1109/visual.1995.485145;10.1109/visual.1999.809896;10.1109/visual.1997.663910",
                "AuthorKeywords": "vector field visualization, flow visualization, tetrahedral grid, unstructured grid, flow surface",
                "AminerCitationCount": 85,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 13,
                "DownloadsXplore": 164,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2887,
                "i": [
                    2887
                ]
            }
        },
        {
            "name": "Karim Mahrous",
            "value": 42,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 1600,
            "x": 243.9718161662112,
            "y": 317.0611185821441,
            "vy": 0,
            "vx": 0,
            "r": 1.0483592400690847,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "A tetrahedra-based stream surface algorithm",
                "DOI": "10.1109/visual.2001.964506",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964506",
                "FirstPage": 151,
                "LastPage": 158,
                "PaperType": "C",
                "Abstract": "This paper presents a new algorithm for the calculation of stream surfaces for tetrahedral grids. It propagates the surface through the tetrahedra, one at a time, calculating the intersections with the tetrahedral faces. The method allows us to incorporate topological information from the cells, e.g. critical points. The calculations are based on barycentric coordinates, since this simplifies the theory and the algorithm. The stream surfaces are ruled surfaces inside each cell, and their construction starts with line segments on the faces. Our method supports the analysis of velocity fields resulting from computational fluid dynamics (CFD) simulations.",
                "AuthorNamesDeduped": "Gerik Scheuermann;Tom Bobach;Hans Hagen;Karim Mahrous;Bernd Hamann;Kenneth I. Joy;Wolfgang Kollmann",
                "AuthorNames": "G. Scheuermann;T. Bobach;H. Hagen;K. Mahrous;B. Hamann;K.I. Joy;W. Kollmann",
                "AuthorAffiliation": "Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany and Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany;Computer Science Department, University of Kaiserslautern, Kaiserslautern, Germany;Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Center for image Processing and Integrated Computing (CIPIC), Department of Computer Science, University of California, Davis, CA;Department of mechanaical and Aeronanutical Engineering, University of California, Davis, CA",
                "InternalReferences": "0.1109/visual.1993.398875;10.1109/visual.1992.235211;10.1109/visual.1995.485145;10.1109/visual.1999.809896;10.1109/visual.1997.663910",
                "AuthorKeywords": "vector field visualization, flow visualization, tetrahedral grid, unstructured grid, flow surface",
                "AminerCitationCount": 85,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 13,
                "DownloadsXplore": 164,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2887,
                "i": [
                    2887
                ]
            }
        },
        {
            "name": "Alvin J. Law",
            "value": 0,
            "numPapers": 5,
            "cluster": "7",
            "visible": 1,
            "index": 1601,
            "x": -394.19202150571186,
            "y": -69.0119567990967,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Projector Placement Planning for High Quality Visualizations on Real-World Colored Objects",
                "DOI": "10.1109/tvcg.2010.189",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.189",
                "FirstPage": 1633,
                "LastPage": 1641,
                "PaperType": "J",
                "Abstract": "Many visualization applications benefit from displaying content on real-world objects rather than on a traditional display (e.g., a monitor). This type of visualization display is achieved by projecting precisely controlled illumination from multiple projectors onto the real-world colored objects. For such a task, the placement of the projectors is critical in assuring that the desired visualization is possible. Using ad hoc projector placement may cause some appearances to suffer from color shifting due to insufficient projector light radiance being exposed onto the physical surface. This leads to an incorrect appearance and ultimately to a false and potentially misleading visualization. In this paper, we present a framework to discover the optimal position and orientation of the projectors for such projection-based visualization displays. An optimal projector placement should be able to achieve the desired visualization with minimal projector light radiance. When determining optimal projector placement, object visibility, surface reflectance properties, and projector-surface distance and orientation need to be considered. We first formalize a theory for appearance editing image formation and construct a constrained linear system of equations that express when a desired novel appearance or visualization is possible given a geometric and surface reflectance model of the physical surface. Then, we show how to apply this constrained system in an adaptive search to efficiently discover the optimal projector placement which achieves the desired appearance. Constraints can be imposed on the maximum radiance allowed by the projectors and the projectors' placement to support specific goals of various visualization applications. We perform several real-world and simulated appearance edits and visualizations to demonstrate the improvement obtained by our discovered projector placement over ad hoc projector placement.",
                "AuthorNamesDeduped": "Alvin J. Law;Daniel G. Aliaga;Aditi Majumder",
                "AuthorNames": "Alvin J. Law;Daniel G. Aliaga;Aditi Majumder",
                "AuthorAffiliation": "Department of Computer Science, Purdue University, USA;Department of Computer Science, Purdue University, USA;Department of Computer Science, University of California, Irvine, CA, USA",
                "InternalReferences": "0.1109/tvcg.2009.124;10.1109/visual.2002.1183793;10.1109/tvcg.2006.121;10.1109/tvcg.2007.70586;10.1109/visual.2000.885684;10.1109/tvcg.2009.166",
                "AuthorKeywords": "Large and High-resolution Displays, Interaction Design, Mobile and Ubiquitous Visualization ",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 662,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1808,
                "i": [
                    1808
                ]
            }
        },
        {
            "name": "Daniel G. Aliaga",
            "value": 22,
            "numPapers": 13,
            "cluster": "7",
            "visible": 1,
            "index": 1602,
            "x": 337.38712107115725,
            "y": -215.45284991226345,
            "vy": 0,
            "vx": 0,
            "r": 1.0253310305123777,
            "node": {
                "Conference": "Vis",
                "Year": 2010,
                "Title": "Projector Placement Planning for High Quality Visualizations on Real-World Colored Objects",
                "DOI": "10.1109/tvcg.2010.189",
                "Link": "http://dx.doi.org/10.1109/TVCG.2010.189",
                "FirstPage": 1633,
                "LastPage": 1641,
                "PaperType": "J",
                "Abstract": "Many visualization applications benefit from displaying content on real-world objects rather than on a traditional display (e.g., a monitor). This type of visualization display is achieved by projecting precisely controlled illumination from multiple projectors onto the real-world colored objects. For such a task, the placement of the projectors is critical in assuring that the desired visualization is possible. Using ad hoc projector placement may cause some appearances to suffer from color shifting due to insufficient projector light radiance being exposed onto the physical surface. This leads to an incorrect appearance and ultimately to a false and potentially misleading visualization. In this paper, we present a framework to discover the optimal position and orientation of the projectors for such projection-based visualization displays. An optimal projector placement should be able to achieve the desired visualization with minimal projector light radiance. When determining optimal projector placement, object visibility, surface reflectance properties, and projector-surface distance and orientation need to be considered. We first formalize a theory for appearance editing image formation and construct a constrained linear system of equations that express when a desired novel appearance or visualization is possible given a geometric and surface reflectance model of the physical surface. Then, we show how to apply this constrained system in an adaptive search to efficiently discover the optimal projector placement which achieves the desired appearance. Constraints can be imposed on the maximum radiance allowed by the projectors and the projectors' placement to support specific goals of various visualization applications. We perform several real-world and simulated appearance edits and visualizations to demonstrate the improvement obtained by our discovered projector placement over ad hoc projector placement.",
                "AuthorNamesDeduped": "Alvin J. Law;Daniel G. Aliaga;Aditi Majumder",
                "AuthorNames": "Alvin J. Law;Daniel G. Aliaga;Aditi Majumder",
                "AuthorAffiliation": "Department of Computer Science, Purdue University, USA;Department of Computer Science, Purdue University, USA;Department of Computer Science, University of California, Irvine, CA, USA",
                "InternalReferences": "0.1109/tvcg.2009.124;10.1109/visual.2002.1183793;10.1109/tvcg.2006.121;10.1109/tvcg.2007.70586;10.1109/visual.2000.885684;10.1109/tvcg.2009.166",
                "AuthorKeywords": "Large and High-resolution Displays, Interaction Design, Mobile and Ubiquitous Visualization ",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 662,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1808,
                "i": [
                    1808
                ]
            }
        },
        {
            "name": "Zhu He",
            "value": 18,
            "numPapers": 1,
            "cluster": "7",
            "visible": 1,
            "index": 1603,
            "x": -103.27466186589571,
            "y": 386.8906101425633,
            "vy": 0,
            "vx": 0,
            "r": 1.0207253886010363,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Achieving color uniformity across multi-projector displays",
                "DOI": "10.1109/visual.2000.885684",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885684",
                "FirstPage": 117,
                "LastPage": 124,
                "PaperType": "C",
                "Abstract": "Large area tiled displays are gaining popularity for use in collaborative immersive virtual environments and scientific visualization. While recent work has addressed the issues of geometric registration, rendering architectures, and human interfaces, there has been relatively little work on photometric calibration in general, and photometric non-uniformity in particular. For example, as a result of differences in the photometric characteristics of projectors, the color and intensity of a large area display varies from place to place. Further, the imagery typically appears brighter at the regions of overlap between adjacent projectors. We analyze and classify the causes of photometric non-uniformity in a tiled display. We then propose a methodology for determining corrections designed to achieve uniformity, that can correct for the photometric variations across a tiled projector display in real time using per channel color look-up-tables (LUT).",
                "AuthorNamesDeduped": "Aditi Majumder;Zhu He;Herman Towles;Greg Welch",
                "AuthorNames": "A. Majumder;Zhu He;H. Towles;G. Welch",
                "AuthorAffiliation": "Department of Computer Science, University of North Carolina, Chapel Hill, USA;Department of Computer Science, University of North Carolina, Chapel Hill, USA;Department of Computer Science, University of North Carolina, Chapel Hill, USA;Department of Computer Science, University of North Carolina, Chapel Hill, USA",
                "InternalReferences": "0.1109/visual.1999.809890;10.1109/visual.1999.809883",
                "AuthorKeywords": "large area display, tiled displays, projector graphics, color calibration",
                "AminerCitationCount": 166,
                "CitationCountCrossRef": 56,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 357,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2962,
                "i": [
                    2962
                ]
            }
        },
        {
            "name": "Herman Towles",
            "value": 60,
            "numPapers": 5,
            "cluster": "7",
            "visible": 1,
            "index": 1604,
            "x": -185.24706688588094,
            "y": -355.1528181081743,
            "vy": 0,
            "vx": 0,
            "r": 1.0690846286701208,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "Greg Welch",
            "value": 53,
            "numPapers": 2,
            "cluster": "7",
            "visible": 1,
            "index": 1605,
            "x": 376.61501076785135,
            "y": 136.78864596278154,
            "vy": 0,
            "vx": 0,
            "r": 1.0610247553252734,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "Stephen R. Marschner",
            "value": 74,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1606,
            "x": -370.2188338899472,
            "y": 153.58390225921377,
            "vy": 0,
            "vx": 0,
            "r": 1.0852043753598157,
            "node": {
                "Conference": "Vis",
                "Year": 1994,
                "Title": "An evaluation of reconstruction filters for volume rendering",
                "DOI": "10.1109/visual.1994.346331",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1994.346331",
                "FirstPage": 100,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "To render images from a three-dimensional array of sample values, it is necessary to interpolate between the samples. This paper is concerned with interpolation methods that are equivalent to convolving the samples with a reconstruction filter; this covers all commonly used schemes, including trilinear and cubic interpolation. We first outline the formal basis of interpolation in three-dimensional signal processing theory. We then propose numerical metrics that can be used to measure filter characteristics that are relevant to the appearance of images generated using that filter. We apply those metrics to several previously used filters and relate the results to isosurface images of the interpolations. We show that the choice of interpolation scheme can have a dramatic effect on image quality, and we discuss the cost/benefit tradeoff inherent in choosing a filter.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Stephen R. Marschner;Richard Lobb",
                "AuthorNames": "S.R. Marschner;R.J. Lobb",
                "AuthorAffiliation": "Cornell University, Ithaca, NY, USA;Department of Computer Science, University of Auckland, Auckland, New Zealand and Cornell University, Ithaca, NY, USA",
                "InternalReferences": "0.1109/visual.1993.398851",
                "AuthorKeywords": null,
                "AminerCitationCount": 435,
                "CitationCountCrossRef": 121,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 188,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3471,
                "i": [
                    3471
                ]
            }
        },
        {
            "name": "Richard Lobb",
            "value": 74,
            "numPapers": 0,
            "cluster": "6",
            "visible": 1,
            "index": 1607,
            "x": 169.2960772388645,
            "y": -363.4402815202691,
            "vy": 0,
            "vx": 0,
            "r": 1.0852043753598157,
            "node": {
                "Conference": "Vis",
                "Year": 1994,
                "Title": "An evaluation of reconstruction filters for volume rendering",
                "DOI": "10.1109/visual.1994.346331",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1994.346331",
                "FirstPage": 100,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "To render images from a three-dimensional array of sample values, it is necessary to interpolate between the samples. This paper is concerned with interpolation methods that are equivalent to convolving the samples with a reconstruction filter; this covers all commonly used schemes, including trilinear and cubic interpolation. We first outline the formal basis of interpolation in three-dimensional signal processing theory. We then propose numerical metrics that can be used to measure filter characteristics that are relevant to the appearance of images generated using that filter. We apply those metrics to several previously used filters and relate the results to isosurface images of the interpolations. We show that the choice of interpolation scheme can have a dramatic effect on image quality, and we discuss the cost/benefit tradeoff inherent in choosing a filter.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Stephen R. Marschner;Richard Lobb",
                "AuthorNames": "S.R. Marschner;R.J. Lobb",
                "AuthorAffiliation": "Cornell University, Ithaca, NY, USA;Department of Computer Science, University of Auckland, Auckland, New Zealand and Cornell University, Ithaca, NY, USA",
                "InternalReferences": "0.1109/visual.1993.398851",
                "AuthorKeywords": null,
                "AminerCitationCount": 435,
                "CitationCountCrossRef": 121,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 188,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3471,
                "i": [
                    3471
                ]
            }
        },
        {
            "name": "Sara Johansson",
            "value": 130,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1608,
            "x": 120.70425075676505,
            "y": 382.4663172741463,
            "vy": 0,
            "vx": 0,
            "r": 1.1496833621185953,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics",
                "DOI": "10.1109/tvcg.2009.153",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.153",
                "FirstPage": 993,
                "LastPage": 1000,
                "PaperType": "J",
                "Abstract": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system is demonstrated through a case scenario.",
                "AuthorNamesDeduped": null,
                "AuthorNames": "Sara Johansson;Jimmy Johansson",
                "AuthorAffiliation": "Norrköping Visualization and Interaction Studio (NVIS), Linköping University, Sweden;Norrköping Visualization and Interaction Studio (NVIS), Linköping University, Sweden",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/infvis.2003.1249015;10.1109/infvis.1998.729559;10.1109/tvcg.2006.161;10.1109/infvis.2004.60;10.1109/infvis.2004.3;10.1109/infvis.2004.71;10.1109/tvcg.2008.138;10.1109/infvis.2004.15",
                "AuthorKeywords": "dimensionality reduction, interactivity, quality metrics, variable ordering",
                "AminerCitationCount": 174,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1824,
                "i": [
                    1824
                ]
            }
        },
        {
            "name": "Jimmy Johansson",
            "value": 130,
            "numPapers": 8,
            "cluster": "6",
            "visible": 1,
            "index": 1609,
            "x": -347.46380146705326,
            "y": -200.5465199649801,
            "vy": 0,
            "vx": 0,
            "r": 1.1496833621185953,
            "node": {
                "Conference": "InfoVis",
                "Year": 2009,
                "Title": "Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics",
                "DOI": "10.1109/tvcg.2009.153",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.153",
                "FirstPage": 993,
                "LastPage": 1000,
                "PaperType": "J",
                "Abstract": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system is demonstrated through a case scenario.",
                "AuthorNamesDeduped": null,
                "AuthorNames": "Sara Johansson;Jimmy Johansson",
                "AuthorAffiliation": "Norrköping Visualization and Interaction Studio (NVIS), Linköping University, Sweden;Norrköping Visualization and Interaction Studio (NVIS), Linköping University, Sweden",
                "InternalReferences": "0.1109/infvis.2005.1532142;10.1109/infvis.2003.1249015;10.1109/infvis.1998.729559;10.1109/tvcg.2006.161;10.1109/infvis.2004.60;10.1109/infvis.2004.3;10.1109/infvis.2004.71;10.1109/tvcg.2008.138;10.1109/infvis.2004.15",
                "AuthorKeywords": "dimensionality reduction, interactivity, quality metrics, variable ordering",
                "AminerCitationCount": 174,
                "CitationCountCrossRef": 100,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 1476,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1824,
                "i": [
                    1824
                ]
            }
        },
        {
            "name": "Christian Panse",
            "value": 37,
            "numPapers": 5,
            "cluster": "3",
            "visible": 1,
            "index": 1610,
            "x": 391.7978787167298,
            "y": -86.85863361273128,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Exploring and Visualizing the History of InfoVis",
                "DOI": "10.1109/infvis.2004.22",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.22",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": null,
                "AuthorNamesDeduped": "Daniel A. Keim;Helmut Barro;Christian Panse;Jörn Schneidewind;Mike Sips",
                "AuthorNames": "D.A. Keim;H. Barro;C. Panse;J. Schneidewind;M. Sips",
                "AuthorAffiliation": "University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany;University of Konstanz, Germany",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 21,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 2,
                "DownloadsXplore": 152,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2483,
                "i": [
                    2483
                ]
            }
        },
        {
            "name": "Eric B. Lum",
            "value": 93,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1611,
            "x": -230.29886418677026,
            "y": 328.8045516021388,
            "vy": 0,
            "vx": 0,
            "r": 1.1070811744386875,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "A novel interface for higher-dimensional classification of volume data",
                "DOI": "10.1109/visual.2003.1250413",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250413",
                "FirstPage": 505,
                "LastPage": 512,
                "PaperType": "C",
                "Abstract": "In the traditional volume visualization paradigm, the user specifies a transfer function that assigns each scalar value to a color and opacity by defining an opacity and a color map function. The transfer function has two limitations. First, the user must define curves based on histogram and value rather than seeing and working with the volume itself. Second, the transfer function is inflexible in classifying regions of interest, where values at a voxel such as intensity and gradient are used to differentiate material, not talking into account additional properties such as texture and position. We describe an intuitive user interface for specifying the classification functions that consists of the users painting directly on sample slices of the volume. These painted regions are used to automatically define high-dimensional classification functions that can be implemented in hardware for interactive rendering. The classification of the volume is iteratively improved as the user paints samples, allowing intuitive and efficient viewing of materials of interest.",
                "AuthorNamesDeduped": "Fan-Yin Tzeng;Eric B. Lum;Kwan-Liu Ma",
                "AuthorNames": "Fan-Yin Tzeng;E.B. Lum;Kwan-Liu Ma",
                "AuthorAffiliation": "Department of Computer Science, University of California Davis, USA;Department of Computer Science, University of California,슠Davis, USA;Department of Computer Science, University of California Davis, USA",
                "InternalReferences": "0.1109/visual.1998.745319;10.1109/visual.2001.964519;10.1109/visual.1999.809932;10.1109/visual.1997.663875;10.1109/visual.1996.568113",
                "AuthorKeywords": "classification, graphics hardware, interactive visualization, multidimensional transfer function, neural network, user interface design, volume visualization",
                "AminerCitationCount": 193,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 254,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2675,
                "i": [
                    2675
                ]
            }
        },
        {
            "name": "Behzad Sajadi",
            "value": 8,
            "numPapers": 9,
            "cluster": "7",
            "visible": 1,
            "index": 1612,
            "x": -52.30528941784554,
            "y": -398.13836376179006,
            "vy": 0,
            "vx": 0,
            "r": 1.0092112838226828,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Color Seamlessness in Multi-Projector Displays Using Constrained Gamut Morphing",
                "DOI": "10.1109/tvcg.2009.124",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.124",
                "FirstPage": 1317,
                "LastPage": 1326,
                "PaperType": "J",
                "Abstract": "Multi-projector displays show significant spatial variation in 3D color gamut due to variation in the chromaticity gamuts across the projectors, vignetting effect of each projector and also overlap across adjacent projectors. In this paper we present a new constrained gamut morphing algorithm that removes all these variations and results in true color seamlessness across tiled multi-projector displays. Our color morphing algorithm adjusts the intensities of light from each pixel of each projector precisely to achieve a smooth morphing from one projector's gamut to the other's through the overlap region. This morphing is achieved by imposing precise constraints on the perceptual difference between the gamuts of two adjacent pixels. In addition, our gamut morphing assures a C1 continuity yielding visually pleasing appearance across the entire display. We demonstrate our method successfully on a planar and a curved display using both low and high-end projectors. Our approach is completely scalable, efficient and automatic. We also demonstrate the real-time performance of our image correction algorithm on GPUs for interactive applications. To the best of our knowledge, this is the first work that presents a scalable method with a strong foundation in perception and realizes, for the first time, a truly seamless display where the number of projectors cannot be deciphered.",
                "AuthorNamesDeduped": "Behzad Sajadi;Maxim Lazarov;M. Gopi 0001;Aditi Majumder",
                "AuthorNames": "Behzad Sajadi;Maxim Lazarov;M. Gopi;Aditi Majumder",
                "AuthorAffiliation": "Computer Science Department, University of California, Irvine, USA;Computer Science Department, University of California, Irvine, USA;Computer Science Department, University of California, Irvine, USA;Computer Science Department, University of California, Irvine, USA",
                "InternalReferences": "0.1109/visual.2001.964508;10.1109/visual.2002.1183793;10.1109/visual.2000.885684;10.1109/visual.1999.809883;10.1109/tvcg.2007.70586;10.1109/tvcg.2006.121",
                "AuthorKeywords": "Color Calibration, Multi-Projector Displays, Tiled Displays",
                "AminerCitationCount": 87,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 731,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1927,
                "i": [
                    1927
                ]
            }
        },
        {
            "name": "Ramesh Raskar",
            "value": 25,
            "numPapers": 0,
            "cluster": "7",
            "visible": 1,
            "index": 1613,
            "x": 307.6022292659317,
            "y": 258.32318624279384,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "Michael S. Brown",
            "value": 32,
            "numPapers": 12,
            "cluster": "7",
            "visible": 1,
            "index": 1614,
            "x": -401.4354569476918,
            "y": 17.308203407572687,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "Ruigang Yang",
            "value": 32,
            "numPapers": 3,
            "cluster": "7",
            "visible": 1,
            "index": 1615,
            "x": 284.40252578917864,
            "y": -284.01620257431716,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "Wei-Chao Chen",
            "value": 35,
            "numPapers": 1,
            "cluster": "7",
            "visible": 1,
            "index": 1616,
            "x": -17.86490976619416,
            "y": 401.66011128695084,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "W. Brent Seales",
            "value": 25,
            "numPapers": 4,
            "cluster": "7",
            "visible": 1,
            "index": 1617,
            "x": -258.22431268024513,
            "y": -308.3345655952556,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Multi-projector displays using camera-based registration",
                "DOI": "10.1109/visual.1999.809883",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809883",
                "FirstPage": 161,
                "LastPage": 522,
                "PaperType": "C",
                "Abstract": "Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.",
                "AuthorNamesDeduped": "Ramesh Raskar;Michael S. Brown;Ruigang Yang;Wei-Chao Chen;Greg Welch;Herman Towles;W. Brent Seales;Henry Fuchs",
                "AuthorNames": "R. Raskar;M.S. Brown;Ruigang Yang;Wei-Chao Chen;G. Welch;H. Towles;B. Scales;H. Fuchs",
                "AuthorAffiliation": "Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA;Department of Computer Science, North Carolina State University, Chapel Hill, USA and University of North Carolina at Asheville, Asheville, NC, US;;Department of Computer Science, North Carolina State University, Chapel Hill, USA",
                "InternalReferences": null,
                "AuthorKeywords": "display, projection, spatially immersive display, panoramic image display, virtual environments, intensity blending, image-based modeling, depth, calibration, auto-calibration, structured light, camera-based registration",
                "AminerCitationCount": 519,
                "CitationCountCrossRef": 156,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1087,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3050,
                "i": [
                    3050
                ]
            }
        },
        {
            "name": "Henry Fuchs",
            "value": 104,
            "numPapers": 5,
            "cluster": "7",
            "visible": 1,
            "index": 1618,
            "x": 398.8067998097005,
            "y": 52.944654362319305,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Illustrating transparent surfaces with curvature-directed strokes",
                "DOI": "10.1109/visual.1996.568110",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568110",
                "FirstPage": 211,
                "LastPage": 218,
                "PaperType": "C",
                "Abstract": "Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for \"texturing\" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose.",
                "AuthorNamesDeduped": "Victoria Interrante;Henry Fuchs;Stephen M. Pizer",
                "AuthorNames": "V. Interrante;H. Fuchs;S. Pizer",
                "AuthorAffiliation": "ICASE, NASA-Langley Research Center, USA;North Carolina State University, Chapel Hill, USA;North Carolina State University, Chapel Hill, USA",
                "InternalReferences": "0.1109/visual.1995.480795;10.1109/visual.1990.146395;10.1109/visual.1996.568111",
                "AuthorKeywords": null,
                "AminerCitationCount": 89,
                "CitationCountCrossRef": 23,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 88,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3345,
                "i": [
                    3345
                ]
            }
        },
        {
            "name": "Ming-Yuen Chan",
            "value": 16,
            "numPapers": 28,
            "cluster": "6",
            "visible": 1,
            "index": 1619,
            "x": -329.93320110372593,
            "y": 230.42153286845476,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Perception-Based Transparency Optimization for Direct Volume Rendering",
                "DOI": "10.1109/tvcg.2009.172",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.172",
                "FirstPage": 1283,
                "LastPage": 1290,
                "PaperType": "J",
                "Abstract": "The semi-transparent nature of direct volume rendered images is useful to depict layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment on opacity and other rendering parameters. Furthermore, the visual quality of layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method.",
                "AuthorNamesDeduped": "Ming-Yuen Chan;Yingcai Wu;Wai-Ho Mak;Wei Chen 0001;Huamin Qu",
                "AuthorNames": "Ming-Yuen Chan;Yingcai Wu;Wai-Ho Mak;Wei Chen;Huamin Qu",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;State Key Laboratory of CAD&CG, University of Zhejiang, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/visual.1998.745319;10.1109/visual.2000.885694;10.1109/tvcg.2008.118;10.1109/visual.2003.1250414;10.1109/tvcg.2007.70591;10.1109/visual.2004.62;10.1109/tvcg.2008.162;10.1109/tvcg.2006.183;10.1109/tvcg.2008.159;10.1109/tvcg.2006.148",
                "AuthorKeywords": "Direct volume rendering, image enhancement, layer perception",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 743,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1934,
                "i": [
                    1934
                ]
            }
        },
        {
            "name": "Wai-Ho Mak",
            "value": 16,
            "numPapers": 24,
            "cluster": "6",
            "visible": 1,
            "index": 1620,
            "x": 87.66201730127165,
            "y": -392.8935870215643,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Perception-Based Transparency Optimization for Direct Volume Rendering",
                "DOI": "10.1109/tvcg.2009.172",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.172",
                "FirstPage": 1283,
                "LastPage": 1290,
                "PaperType": "J",
                "Abstract": "The semi-transparent nature of direct volume rendered images is useful to depict layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment on opacity and other rendering parameters. Furthermore, the visual quality of layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method.",
                "AuthorNamesDeduped": "Ming-Yuen Chan;Yingcai Wu;Wai-Ho Mak;Wei Chen 0001;Huamin Qu",
                "AuthorNames": "Ming-Yuen Chan;Yingcai Wu;Wai-Ho Mak;Wei Chen;Huamin Qu",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China;State Key Laboratory of CAD&CG, University of Zhejiang, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China",
                "InternalReferences": "0.1109/visual.1998.745319;10.1109/visual.2000.885694;10.1109/tvcg.2008.118;10.1109/visual.2003.1250414;10.1109/tvcg.2007.70591;10.1109/visual.2004.62;10.1109/tvcg.2008.162;10.1109/tvcg.2006.183;10.1109/tvcg.2008.159;10.1109/tvcg.2006.148",
                "AuthorKeywords": "Direct volume rendering, image enhancement, layer perception",
                "AminerCitationCount": 69,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 743,
                "Award": "HM",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1934,
                "i": [
                    1934
                ]
            }
        },
        {
            "name": "Mark C. Miller",
            "value": 97,
            "numPapers": 8,
            "cluster": "3",
            "visible": 1,
            "index": 1621,
            "x": 200.8184945934779,
            "y": 349.0299875787312,
            "vy": 0,
            "vx": 0,
            "r": 1.1116868163500289,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "ROAMing terrain: Real-time Optimally Adapting Meshes",
                "DOI": "10.1109/visual.1997.663860",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663860",
                "FirstPage": 81,
                "LastPage": 88,
                "PaperType": "C",
                "Abstract": "Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.",
                "AuthorNamesDeduped": "Mark A. Duchaineau;Murray Wolinsky;David E. Sigeti;Mark C. Miller;Charles Aldrich;Mark B. Mineev-Weinstein",
                "AuthorNames": "M. Duchaineau;M. Wolinsky;D.E. Sigeti;M.C. Miller;C. Aldrich;M.B. Mineev-Weinstein",
                "AuthorAffiliation": "Los Alamos National Laboratory, USA and Lawrence Livemore National Laboratory;Los Alamos National Laboratory, USA;Los Alamos National Laboratory, USA;;Los Alamos National Laboratory, USA;Los Alamos National Laboratory, USA",
                "InternalReferences": "0.1109/visual.1996.567600;10.1109/visual.1996.568126;10.1109/visual.1996.568125;10.1109/visual.1995.480813;10.1109/visual.1995.480805",
                "AuthorKeywords": "triangle bintree, view-dependent mesh, frame-to-frame coherence, greedy algorithms",
                "AminerCitationCount": 1425,
                "CitationCountCrossRef": 204,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 583,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3232,
                "i": [
                    3232
                ]
            }
        },
        {
            "name": "Alan Chu",
            "value": 0,
            "numPapers": 6,
            "cluster": "2",
            "visible": 1,
            "index": 1622,
            "x": -383.96201950588915,
            "y": -121.75043152678845,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "GL4D: A GPU-based Architecture for Interactive 4D Visualization",
                "DOI": "10.1109/tvcg.2009.147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.147",
                "FirstPage": 1587,
                "LastPage": 1594,
                "PaperType": "J",
                "Abstract": "This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.",
                "AuthorNamesDeduped": "Alan Chu;Chi-Wing Fu;Andrew J. Hanson;Pheng-Ann Heng",
                "AuthorNames": "Alan Chu;Chi-Wing Fu;Andrew Hanson;Pheng-Ann Heng",
                "AuthorAffiliation": "Chinese University of Hong Kong, Hong Kong, China;Nanyang Technological University, Singapore, Singapore;Indiana University, Bloomington, USA;Chinese University of Hong Kong, Hong Kong, China",
                "InternalReferences": "0.1109/visual.1994.346318;10.1109/visual.2000.885704;10.1109/visual.1992.235222;10.1109/visual.2005.1532804;10.1109/tvcg.2007.70593;10.1109/visual.1994.346324;10.1109/visual.1993.398869",
                "AuthorKeywords": "Mathematical visualization, four-dimensional visualization, graphics hardware, interactive illumination",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 1006,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1948,
                "i": [
                    1948
                ]
            }
        },
        {
            "name": "Pheng-Ann Heng",
            "value": 30,
            "numPapers": 11,
            "cluster": "2",
            "visible": 1,
            "index": 1623,
            "x": 365.475453589485,
            "y": -169.6398916044224,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "GL4D: A GPU-based Architecture for Interactive 4D Visualization",
                "DOI": "10.1109/tvcg.2009.147",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.147",
                "FirstPage": 1587,
                "LastPage": 1594,
                "PaperType": "J",
                "Abstract": "This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.",
                "AuthorNamesDeduped": "Alan Chu;Chi-Wing Fu;Andrew J. Hanson;Pheng-Ann Heng",
                "AuthorNames": "Alan Chu;Chi-Wing Fu;Andrew Hanson;Pheng-Ann Heng",
                "AuthorAffiliation": "Chinese University of Hong Kong, Hong Kong, China;Nanyang Technological University, Singapore, Singapore;Indiana University, Bloomington, USA;Chinese University of Hong Kong, Hong Kong, China",
                "InternalReferences": "0.1109/visual.1994.346318;10.1109/visual.2000.885704;10.1109/visual.1992.235222;10.1109/visual.2005.1532804;10.1109/tvcg.2007.70593;10.1109/visual.1994.346324;10.1109/visual.1993.398869",
                "AuthorKeywords": "Mathematical visualization, four-dimensional visualization, graphics hardware, interactive illumination",
                "AminerCitationCount": 43,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 1006,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1948,
                "i": [
                    1948
                ]
            }
        },
        {
            "name": "Robert A. Cross",
            "value": 29,
            "numPapers": 4,
            "cluster": "2",
            "visible": 1,
            "index": 1624,
            "x": -154.9478231490721,
            "y": 372.076836287028,
            "vy": 0,
            "vx": 0,
            "r": 1.033390903857225,
            "node": {
                "Conference": "Vis",
                "Year": 1994,
                "Title": "Virtual reality performance for virtual geometry",
                "DOI": "10.1109/visual.1994.346324",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1994.346324",
                "FirstPage": 156,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "We describe the theoretical and practical visualization issues solved in the implementation of an interactive real-time four-dimensional geometry interface for the CAVE, an immersive virtual reality environment. While our specific task is to produce a \"virtual geometry\" experience by approximating physically correct rendering of manifolds embedded in four dimensions, the general principles exploited by our approach reflect requirements common to many immersive virtual reality applications, especially those involving volume rendering. Among the issues we address are the classification of rendering tasks, the specialized hardware support required to attain interactivity, specific techniques required to render 4D objects, and interactive methods appropriate for our 4D virtual world application.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Robert A. Cross;Andrew J. Hanson",
                "AuthorNames": "R.A. Cross;A.J. Hanson",
                "AuthorAffiliation": "Department of Computer Science, Indiana University, Bloomington, IN, USA;Department of Computer Science, Indiana University, Bloomington, IN, USA",
                "InternalReferences": "0.1109/visual.1994.346330;10.1109/visual.1993.398869;10.1109/visual.1991.175821;10.1109/visual.1992.235222",
                "AuthorKeywords": null,
                "AminerCitationCount": 22,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 116,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3498,
                "i": [
                    3498
                ]
            }
        },
        {
            "name": "Yi-Jen Chiang",
            "value": 67,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 1625,
            "x": -137.12277424913512,
            "y": -379.14027058889525,
            "vy": 0,
            "vx": 0,
            "r": 1.0771445020149684,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Isosurface Extraction and View-Dependent filtering from Time-Varying fields Using Persistent Time-Octree (PTOT)",
                "DOI": "10.1109/tvcg.2009.160",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.160",
                "FirstPage": 1367,
                "LastPage": 1374,
                "PaperType": "J",
                "Abstract": "We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel persistent time-octree (PTOT) indexing structure. Previously, the persistent octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the branch-on-need octree (BONO, for view-dependent filtering), but it only works for steady-state(i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried timestep and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing.In this paper, we develop a novel persistent time-octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive and optimal searching. In addition, when we query the same iso value q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed in GPU.This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time-varying fields larger than main memory. Our experiments on datasets as large as 192 GB (with 4 GB per time step) having no more than 870 MB of memory footprint in both preprocessing and run-time phases demonstrate the efficacy of our new technique.",
                "AuthorNamesDeduped": "Cong Wang;Yi-Jen Chiang",
                "AuthorNames": "Cong Wang;Yi-Jen Chiang",
                "AuthorAffiliation": "CSE Department, Polytechnic Institute of New York University, Brooklyn, NY, USA;CSE Department, Polytechnic Institute of New York University, Brooklyn, NY, USA",
                "InternalReferences": "0.1109/visual.2003.1250375;10.1109/visual.1998.745299;10.1109/visual.1997.663895;10.1109/visual.1998.745300;10.1109/visual.2003.1250373;10.1109/visual.1998.745713;10.1109/visual.1995.480806;10.1109/tvcg.2006.157;10.1109/visual.1996.568121;10.1109/visual.1998.745298;10.1109/tvcg.2006.188;10.1109/tvcg.2007.70566",
                "AuthorKeywords": "Isosurface extraction, time-varying fields, persistent data structure, view-dependent filtering, out-of-core methods",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1955,
                "i": [
                    1955
                ]
            }
        },
        {
            "name": "Cong Wang",
            "value": 6,
            "numPapers": 11,
            "cluster": "6",
            "visible": 1,
            "index": 1626,
            "x": 357.3255009550645,
            "y": 186.998626645257,
            "vy": 0,
            "vx": 0,
            "r": 1.0069084628670122,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Isosurface Extraction and View-Dependent filtering from Time-Varying fields Using Persistent Time-Octree (PTOT)",
                "DOI": "10.1109/tvcg.2009.160",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.160",
                "FirstPage": 1367,
                "LastPage": 1374,
                "PaperType": "J",
                "Abstract": "We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel persistent time-octree (PTOT) indexing structure. Previously, the persistent octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the branch-on-need octree (BONO, for view-dependent filtering), but it only works for steady-state(i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried timestep and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing.In this paper, we develop a novel persistent time-octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive and optimal searching. In addition, when we query the same iso value q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed in GPU.This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time-varying fields larger than main memory. Our experiments on datasets as large as 192 GB (with 4 GB per time step) having no more than 870 MB of memory footprint in both preprocessing and run-time phases demonstrate the efficacy of our new technique.",
                "AuthorNamesDeduped": "Cong Wang;Yi-Jen Chiang",
                "AuthorNames": "Cong Wang;Yi-Jen Chiang",
                "AuthorAffiliation": "CSE Department, Polytechnic Institute of New York University, Brooklyn, NY, USA;CSE Department, Polytechnic Institute of New York University, Brooklyn, NY, USA",
                "InternalReferences": "0.1109/visual.2003.1250375;10.1109/visual.1998.745299;10.1109/visual.1997.663895;10.1109/visual.1998.745300;10.1109/visual.2003.1250373;10.1109/visual.1998.745713;10.1109/visual.1995.480806;10.1109/tvcg.2006.157;10.1109/visual.1996.568121;10.1109/visual.1998.745298;10.1109/tvcg.2006.188;10.1109/tvcg.2007.70566",
                "AuthorKeywords": "Isosurface extraction, time-varying fields, persistent data structure, view-dependent filtering, out-of-core methods",
                "AminerCitationCount": 16,
                "CitationCountCrossRef": 9,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 285,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1955,
                "i": [
                    1955
                ]
            }
        },
        {
            "name": "Ugo Varetto",
            "value": 0,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1627,
            "x": -389.91626943821143,
            "y": 103.5147469078106,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2009,
                "Title": "Interactive Volume Rendering of Functional Representations in Quantum Chemistry",
                "DOI": "10.1109/tvcg.2009.158",
                "Link": "http://dx.doi.org/10.1109/TVCG.2009.158",
                "FirstPage": 1579,
                "LastPage": 5186,
                "PaperType": "J",
                "Abstract": "Simulation and computation in chemistry studies have been improved as computational power has increased over decades. Many types of chemistry simulation results are available, from atomic level bonding to volumetric representations of electron density. However, tools for the visualization of the results from quantum chemistry computations are still limited to showing atomic bonds and isosurfaces or isocontours corresponding to certain isovalues. In this work, we study the volumetric representations of the results from quantum chemistry computations, and evaluate and visualize the representations directly on the GPU without resampling the result in grid structures. Our visualization tool handles the direct evaluation of the approximated wavefunctions described as a combination of Gaussian-like primitive basis functions. For visualizations, we use a slice based volume rendering technique with a 2D transfer function, volume clipping, and illustrative rendering in order to reveal and enhance the quantum chemistry structure. Since there is no need of resampling the volume from the functional representations, two issues, data transfer and resampling resolution, can be ignored, therefore, it is possible to interactively explore large amount of different information in the computation results.",
                "AuthorNamesDeduped": "Yun Jang;Ugo Varetto",
                "AuthorNames": "Yun Jang;Ugo Varetto",
                "AuthorAffiliation": "ETH Zürich, Switzerland;Swiss National Supercomputing Centre, Switzerland",
                "InternalReferences": "0.1109/tvcg.2007.70614;10.1109/tvcg.2007.70517;10.1109/visual.2003.1250384;10.1109/visual.2002.1183780;10.1109/tvcg.2006.133;10.1109/visual.2005.1532811;10.1109/visual.2000.885694;10.1109/visual.2004.23;10.1109/tvcg.2007.70578;10.1109/tvcg.2006.150;10.1109/visual.2004.36;10.1109/tvcg.2006.115;10.1109/visual.2005.1532858;10.1109/visual.2004.103",
                "AuthorKeywords": "Quantum Chemistry, GTO, Volume Rendering, GPU",
                "AminerCitationCount": 15,
                "CitationCountCrossRef": 8,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 466,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1957,
                "i": [
                    1957
                ]
            }
        },
        {
            "name": "Lynn Chien",
            "value": 5,
            "numPapers": 4,
            "cluster": "3",
            "visible": 1,
            "index": 1628,
            "x": 217.65575261870376,
            "y": -339.8175589224101,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "VAST",
                "Year": 2008,
                "Title": "Grand challenge award 2008: Support for diverse analytic techniques - nSpace2 and GeoTime visual analytics",
                "DOI": "10.1109/vast.2008.4677385",
                "Link": "http://dx.doi.org/10.1109/VAST.2008.4677385",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": "GeoTime and nSpace2 are interactive visual analytics tools that were used to examine and interpret all four of the 2008 VAST Challenge datasets. GeoTime excels in visualizing event patterns in time and space, or in time and any abstract landscape, while nSpace2 is a web-based analytical tool designed to support every step of the analytical process. nSpace2 is an integrating analytic environment. This paper highlights the VAST analytical experience with these tools that contributed to the success of these tools and this team for the third consecutive year.",
                "AuthorNamesDeduped": "Lynn Chien;Annie Tat;Pascale Proulx;Adeel Khamisa;William Wright",
                "AuthorNames": "Lynn Chien;Annie Tat;Pascale Proulx;Adeel Khamisa;William Wright",
                "AuthorAffiliation": "Oculus Info, Inc.;Oculus Info, Inc.;Oculus Info, Inc.;Oculus Info, Inc.;Oculus Info, Inc.",
                "InternalReferences": "10.1109/vast.2008.4677355;10.1109/infvis.2004.27",
                "AuthorKeywords": null,
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 4,
                "DownloadsXplore": 140,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2019,
                "i": [
                    2019
                ]
            }
        },
        {
            "name": "Annie Tat",
            "value": 5,
            "numPapers": 4,
            "cluster": "3",
            "visible": 1,
            "index": 1629,
            "x": 69.07208226523493,
            "y": 397.7172958918742,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "VAST",
                "Year": 2008,
                "Title": "Grand challenge award 2008: Support for diverse analytic techniques - nSpace2 and GeoTime visual analytics",
                "DOI": "10.1109/vast.2008.4677385",
                "Link": "http://dx.doi.org/10.1109/VAST.2008.4677385",
                "FirstPage": null,
                "LastPage": null,
                "PaperType": "M",
                "Abstract": "GeoTime and nSpace2 are interactive visual analytics tools that were used to examine and interpret all four of the 2008 VAST Challenge datasets. GeoTime excels in visualizing event patterns in time and space, or in time and any abstract landscape, while nSpace2 is a web-based analytical tool designed to support every step of the analytical process. nSpace2 is an integrating analytic environment. This paper highlights the VAST analytical experience with these tools that contributed to the success of these tools and this team for the third consecutive year.",
                "AuthorNamesDeduped": "Lynn Chien;Annie Tat;Pascale Proulx;Adeel Khamisa;William Wright",
                "AuthorNames": "Lynn Chien;Annie Tat;Pascale Proulx;Adeel Khamisa;William Wright",
                "AuthorAffiliation": "Oculus Info, Inc.;Oculus Info, Inc.;Oculus Info, Inc.;Oculus Info, Inc.;Oculus Info, Inc.",
                "InternalReferences": "10.1109/vast.2008.4677355;10.1109/infvis.2004.27",
                "AuthorKeywords": null,
                "AminerCitationCount": 5,
                "CitationCountCrossRef": 3,
                "PubsCitedCrossRef": 4,
                "DownloadsXplore": 140,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2019,
                "i": [
                    2019
                ]
            }
        },
        {
            "name": "Markus H. Gross",
            "value": 144,
            "numPapers": 39,
            "cluster": "11",
            "visible": 1,
            "index": 1630,
            "x": -319.6838245285586,
            "y": -246.68249296371596,
            "vy": 0,
            "vx": 0,
            "r": 1.16580310880829,
            "node": {
                "Conference": "InfoVis",
                "Year": 2017,
                "Title": "Visualizing Nonlinear Narratives with Story Curves",
                "DOI": "10.1109/tvcg.2017.2744118",
                "Link": "http://dx.doi.org/10.1109/TVCG.2017.2744118",
                "FirstPage": 595,
                "LastPage": 604,
                "PaperType": "J",
                "Abstract": "In this paper, we present story curves, a visualization technique for exploring and communicating nonlinear narratives in movies. A nonlinear narrative is a storytelling device that portrays events of a story out of chronological order, e.g., in reverse order or going back and forth between past and future events. Many acclaimed movies employ unique narrative patterns which in turn have inspired other movies and contributed to the broader analysis of narrative patterns in movies. However, understanding and communicating nonlinear narratives is a difficult task due to complex temporal disruptions in the order of events as well as no explicit records specifying the actual temporal order of the underlying story. Story curves visualize the nonlinear narrative of a movie by showing the order in which events are told in the movie and comparing them to their actual chronological order, resulting in possibly meandering visual patterns in the curve. We also present Story Explorer, an interactive tool that visualizes a story curve together with complementary information such as characters and settings. Story Explorer further provides a script curation interface that allows users to specify the chronological order of events in movies. We used Story Explorer to analyze 10 popular nonlinear movies and describe the spectrum of narrative patterns that we discovered, including some novel patterns not previously described in the literature. Feedback from experts highlights potential use cases in screenplay writing and analysis, education and film production. A controlled user study shows that users with no expertise are able to understand visual patterns of nonlinear narratives using story curves.",
                "AuthorNamesDeduped": "Nam Wook Kim;Benjamin Bach;Hyejin Im;Sasha Schriber;Markus H. Gross;Hanspeter Pfister",
                "AuthorNames": "Nam Wook Kim;Benjamin Bach;Hyejin Im;Sasha Schriber;Markus Gross;Hanspeter Pfister",
                "AuthorAffiliation": "John A. Paulson School of Engineering and Applied Sciences, Harvard University;John A. Paulson School of Engineering and Applied Sciences, Harvard University;Independent scholar;Disney Research, Zürich;Disney Research, Zürich;John A. Paulson School of Engineering and Applied Sciences, Harvard University",
                "InternalReferences": "0.1109/tvcg.2016.2598920;10.1109/tvcg.2013.196;10.1109/tvcg.2015.2467811;10.1109/tvcg.2009.167;10.1109/tvcg.2012.212;10.1109/tvcg.2015.2468151",
                "AuthorKeywords": "Nonlinear narrative,storytelling,visualization",
                "AminerCitationCount": 50,
                "CitationCountCrossRef": 33,
                "PubsCitedCrossRef": 54,
                "DownloadsXplore": 3642,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 795,
                "i": [
                    795
                ]
            }
        },
        {
            "name": "Oliver G. Staadt",
            "value": 57,
            "numPapers": 3,
            "cluster": "11",
            "visible": 1,
            "index": 1631,
            "x": 402.4798982825981,
            "y": -34.05776678570467,
            "vy": 0,
            "vx": 0,
            "r": 1.0656303972366148,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Progressive tetrahedralizations",
                "DOI": "10.1109/visual.1998.745329",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745329",
                "FirstPage": 397,
                "LastPage": 402,
                "PaperType": "C",
                "Abstract": "The paper describes some fundamental issues for robust implementations of progressively refined tetrahedralizations generated through sequences of edge collapses. We address the definition of appropriate cost functions and explain on various tests which are necessary to preserve the consistency of the mesh when collapsing edges. Although considered a special case of progressive simplicial complexes (J. Popovic and H. Hoppe, 1997), the results of our method are of high practical importance and can be used in many different applications, such as finite element meshing, scattered data interpolation, or rendering of unstructured volume data.",
                "AuthorNamesDeduped": "Oliver G. Staadt;Markus H. Gross",
                "AuthorNames": "O.G. Staadt;M.H. Gross",
                "AuthorAffiliation": "Computer Graphics Research Group, Department of Computer Science, Swiss Federal Institute of Technology, Zurich, Switzerland;Computer Graphics Research Group, Department of Computer Science, Swiss Federal Institute of Technology, Zurich, Switzerland",
                "InternalReferences": "0.1109/visual.1997.663907;10.1109/visual.1997.663901;10.1109/visual.1997.663883",
                "AuthorKeywords": "mesh simplification, multiresolution, level-of-detail, unstructured meshes, mesh generation",
                "AminerCitationCount": 188,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 84,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3151,
                "i": [
                    3151
                ]
            }
        },
        {
            "name": "Paolo Cignoni",
            "value": 134,
            "numPapers": 20,
            "cluster": "11",
            "visible": 1,
            "index": 1632,
            "x": -273.85434874402057,
            "y": 297.07540402057583,
            "vy": 0,
            "vx": 0,
            "r": 1.1542890040299367,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization",
                "DOI": "10.1109/tvcg.2006.115",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.115",
                "FirstPage": 1237,
                "LastPage": 1244,
                "PaperType": "J",
                "Abstract": "The paper presents a set of combined techniques to enhance the real-time visualization of simple or complex molecules (up to order of 10&lt;sup&gt;6&lt;/sup&gt; atoms) space fill mode. The proposed approach includes an innovative technique for efficient computation and storage of ambient occlusion terms, a small set of GPU accelerated procedural impostors for space-fill and ball-and-stick rendering, and novel edge-cueing techniques. As a result, the user's understanding of the three-dimensional structure under inspection is strongly increased (even for'still images), while the rendering still occurs in real time",
                "AuthorNamesDeduped": "Marco Tarini;Paolo Cignoni;Claudio Montani",
                "AuthorNames": "Marco Tarini;Paolo Cignoni;Claudio Montani",
                "AuthorAffiliation": "Università dell'Insubria, Varese, Italy;I. S. T. I.-C. N. R, Pisa, Italy;I. S. T. I.-C. N. R, Pisa, Italy",
                "InternalReferences": "0.1109/visual.2000.885694;10.1109/visual.2003.1250394",
                "AuthorKeywords": null,
                "AminerCitationCount": 502,
                "CitationCountCrossRef": 312,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 1311,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2267,
                "i": [
                    2267
                ]
            }
        },
        {
            "name": "D. Constanza",
            "value": 37,
            "numPapers": 3,
            "cluster": "11",
            "visible": 1,
            "index": 1633,
            "x": 1.260507699929737,
            "y": -404.16384192594273,
            "vy": 0,
            "vx": 0,
            "r": 1.0426021876799079,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Simplification of tetrahedral meshes with accurate error evaluation",
                "DOI": "10.1109/visual.2000.885680",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885680",
                "FirstPage": 85,
                "LastPage": 92,
                "PaperType": "C",
                "Abstract": "The techniques for reducing the size of a volume dataset by preserving both the geometrical/topological shape and the information encoded in an attached scalar field are attracting growing interest. Given the framework of incremental 3D mesh simplification based on edge collapse, we propose an approach for the integrated evaluation of the error introduced by both the modification of the domain and the approximation of the field of the original volume dataset. We present and compare various techniques to evaluate the approximation error or to produce a sound prediction. A flexible simplification tool has been implemented, which provides a different degree of accuracy and computational efficiency for the selection of the edge to be collapsed. Techniques for preventing a geometric or topological degeneration of the mesh are also presented.",
                "AuthorNamesDeduped": "Paolo Cignoni;D. Constanza;Claudio Montani;Claudio Rocchini;Roberto Scopigno",
                "AuthorNames": "P. Cignoni;D. Costanza;C. Montani;C. Rocchini;R. Scopigno",
                "AuthorAffiliation": "Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy",
                "InternalReferences": "0.1109/visual.1998.745315;10.1109/visual.1997.663907;10.1109/visual.1998.745329;10.1109/visual.1998.745312",
                "AuthorKeywords": "Simplicial Complexes, Mesh Simplification,Volume Visualization, Unstructured Grids",
                "AminerCitationCount": 149,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 156,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2970,
                "i": [
                    2970
                ]
            }
        },
        {
            "name": "Claudio Montani",
            "value": 114,
            "numPapers": 4,
            "cluster": "11",
            "visible": 1,
            "index": 1634,
            "x": 272.16256170354393,
            "y": 298.96076666841196,
            "vy": 0,
            "vx": 0,
            "r": 1.1312607944732298,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization",
                "DOI": "10.1109/tvcg.2006.115",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.115",
                "FirstPage": 1237,
                "LastPage": 1244,
                "PaperType": "J",
                "Abstract": "The paper presents a set of combined techniques to enhance the real-time visualization of simple or complex molecules (up to order of 10&lt;sup&gt;6&lt;/sup&gt; atoms) space fill mode. The proposed approach includes an innovative technique for efficient computation and storage of ambient occlusion terms, a small set of GPU accelerated procedural impostors for space-fill and ball-and-stick rendering, and novel edge-cueing techniques. As a result, the user's understanding of the three-dimensional structure under inspection is strongly increased (even for'still images), while the rendering still occurs in real time",
                "AuthorNamesDeduped": "Marco Tarini;Paolo Cignoni;Claudio Montani",
                "AuthorNames": "Marco Tarini;Paolo Cignoni;Claudio Montani",
                "AuthorAffiliation": "Università dell'Insubria, Varese, Italy;I. S. T. I.-C. N. R, Pisa, Italy;I. S. T. I.-C. N. R, Pisa, Italy",
                "InternalReferences": "0.1109/visual.2000.885694;10.1109/visual.2003.1250394",
                "AuthorKeywords": null,
                "AminerCitationCount": 502,
                "CitationCountCrossRef": 312,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 1311,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2267,
                "i": [
                    2267
                ]
            }
        },
        {
            "name": "Claudio Rocchini",
            "value": 44,
            "numPapers": 3,
            "cluster": "11",
            "visible": 1,
            "index": 1635,
            "x": -402.75244608646693,
            "y": -36.61239095398615,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Simplification of tetrahedral meshes with accurate error evaluation",
                "DOI": "10.1109/visual.2000.885680",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885680",
                "FirstPage": 85,
                "LastPage": 92,
                "PaperType": "C",
                "Abstract": "The techniques for reducing the size of a volume dataset by preserving both the geometrical/topological shape and the information encoded in an attached scalar field are attracting growing interest. Given the framework of incremental 3D mesh simplification based on edge collapse, we propose an approach for the integrated evaluation of the error introduced by both the modification of the domain and the approximation of the field of the original volume dataset. We present and compare various techniques to evaluate the approximation error or to produce a sound prediction. A flexible simplification tool has been implemented, which provides a different degree of accuracy and computational efficiency for the selection of the edge to be collapsed. Techniques for preventing a geometric or topological degeneration of the mesh are also presented.",
                "AuthorNamesDeduped": "Paolo Cignoni;D. Constanza;Claudio Montani;Claudio Rocchini;Roberto Scopigno",
                "AuthorNames": "P. Cignoni;D. Costanza;C. Montani;C. Rocchini;R. Scopigno",
                "AuthorAffiliation": "Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy;Istituto Scienza e Tecnologia dellInformazione, Consiglio Nationale delle Ricerche, Italy",
                "InternalReferences": "0.1109/visual.1998.745315;10.1109/visual.1997.663907;10.1109/visual.1998.745329;10.1109/visual.1998.745312",
                "AuthorKeywords": "Simplicial Complexes, Mesh Simplification,Volume Visualization, Unstructured Grids",
                "AminerCitationCount": 149,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 156,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2970,
                "i": [
                    2970
                ]
            }
        },
        {
            "name": "Roberto Scopigno",
            "value": 88,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1636,
            "x": 321.80677073104766,
            "y": -245.1334377673819,
            "vy": 0,
            "vx": 0,
            "r": 1.1013241220495107,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Planet-sized batched dynamic adaptive meshes (P-BDAM)",
                "DOI": "10.1109/visual.2003.1250366",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250366",
                "FirstPage": 147,
                "LastPage": 154,
                "PaperType": "C",
                "Abstract": "We describe an efficient technique for out-of-core management and interactive rendering of planet sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single precision floating points; we exploit a compressed out of core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; we efficiently construct high quality simplified representations with a novel distributed out of core simplification algorithm working on a standard PC network.",
                "AuthorNamesDeduped": "Paolo Cignoni;Fabio Ganovelli;Enrico Gobbetti;Fabio Marton;Federico Ponchio;Roberto Scopigno",
                "AuthorNames": "P. Cignoni;F. Ganovelli;E. Gobbetti;F. Marton;F. Ponchio;R. Scopigno",
                "AuthorAffiliation": "ISTI-CNR, Pisa, Italy;ISTI-CNR, Italy;CRS4, Pula, Italy;CRS4, Italy;ISTI-CNR, Italy;ISTI-CNR, Italy",
                "InternalReferences": "0.1109/visual.1997.663860;10.1109/visual.2002.1183783;10.1109/visual.1997.663902;10.1109/visual.1998.745282;10.1109/visual.2000.885699;10.1109/visual.2002.1183800;10.1109/visual.1996.567600;10.1109/visual.1998.745280;10.1109/visual.1999.809902;10.1109/visual.1996.568126",
                "AuthorKeywords": " Multiresolution, terrains, huge dataset",
                "AminerCitationCount": 222,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 33,
                "DownloadsXplore": 169,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2667,
                "i": [
                    2667
                ]
            }
        },
        {
            "name": "Thomas Gerstner",
            "value": 40,
            "numPapers": 7,
            "cluster": "11",
            "visible": 1,
            "index": 1637,
            "x": -71.72694418287526,
            "y": 398.25274070392373,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Topology preserving and controlled topology simplifying multiresolution isosurface extraction",
                "DOI": "10.1109/visual.2000.885703",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885703",
                "FirstPage": 259,
                "LastPage": 266,
                "PaperType": "C",
                "Abstract": "Multiresolution methods are becoming increasingly important tools for the interactive visualization of very large data sets. Multiresolution isosurface visualization allows the user to explore volume data using simplified and coarse representations of the isosurface for overview images, and finer resolution in areas of high interest or when zooming into the data. Ideally, a coarse isosurface should have the same topological structure as the original. The topological genus of the isosurface is one important property which is often neglected in multiresolution algorithms. This results in uncontrolled topological changes which can occur whenever the level-of-detail is changed. The scope of this paper is to propose an efficient technique which allows preservation of topology as well as controlled topology simplification in multiresolution isosurface extraction.",
                "AuthorNamesDeduped": "Thomas Gerstner;Renato Pajarola",
                "AuthorNames": "T. Gerstner;R. Pajarola",
                "AuthorAffiliation": "Information and Computer Science, University of California, Irvine, USA;Department for Applied Mathematics, University of Bonn, Germany",
                "InternalReferences": "0.1109/visual.1996.568127;10.1109/visual.1997.663907;10.1109/visual.1997.663909;10.1109/visual.1998.745300;10.1109/visual.1994.346334;10.1109/visual.1997.663869",
                "AuthorKeywords": "tetrahedral grid refinement, implicit surface approximation, level-of-detail, topological genus, critical points",
                "AminerCitationCount": 157,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 91,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2967,
                "i": [
                    2967
                ]
            }
        },
        {
            "name": "Christof Rezk-Salama",
            "value": 89,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1638,
            "x": -216.19262744486107,
            "y": -342.2144763718909,
            "vy": 0,
            "vx": 0,
            "r": 1.102475532527346,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "High-Level User Interfaces for Transfer Function Design with Semantics",
                "DOI": "10.1109/tvcg.2006.148",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.148",
                "FirstPage": 1021,
                "LastPage": 1028,
                "PaperType": "J",
                "Abstract": "Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for nonexpert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation",
                "AuthorNamesDeduped": "Christof Rezk-Salama;Maik Keller;Peter Kohlmann",
                "AuthorNames": "Christof Rezk Salama;Maik Keller;Peter Kohlmann",
                "AuthorAffiliation": "Computer Graphics and Multimedia Systems Group, University of Siegen, Germany;Computer Graphics and Multimedia Systems Group, University of Siegen, Germany;Institute of Computer Graphics and Algorithms, University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.2003.1250384;10.1109/visual.2003.1250413;10.1109/visual.2002.1183764;10.1109/visual.1998.745319;10.1109/visual.2001.964519;10.1109/visual.2003.1250412;10.1109/visual.1996.568113;10.1109/visual.1997.663875",
                "AuthorKeywords": "Volume rendering, transfer function design, semantic models",
                "AminerCitationCount": 163,
                "CitationCountCrossRef": 74,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 691,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2278,
                "i": [
                    2278
                ]
            }
        },
        {
            "name": "Jinzhu Gao",
            "value": 41,
            "numPapers": 32,
            "cluster": "6",
            "visible": 1,
            "index": 1639,
            "x": 390.69544145745505,
            "y": 106.33471692897068,
            "vy": 0,
            "vx": 0,
            "r": 1.0472078295912493,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Scalable Data Servers for Large Multivariate Volume Visualization",
                "DOI": "10.1109/tvcg.2006.175",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.175",
                "FirstPage": 1291,
                "LastPage": 1298,
                "PaperType": "J",
                "Abstract": "Volumetric datasets with multiple variables on each voxel over multiple time steps are often complex, especially when considering the exponentially large attribute space formed by the variables in combination with the spatial and temporal dimensions. It is intuitive, practical, and thus often desirable, to interactively select a subset of the data from within that high-dimensional value space for efficient visualization. This approach is straightforward to implement if the dataset is small enough to be stored entirely in-core. However, to handle datasets sized at hundreds of gigabytes and beyond, this simplistic approach becomes infeasible and thus, more sophisticated solutions are needed. In this work, we developed a system that supports efficient visualization of an arbitrary subset, selected by range-queries, of a large multivariate time-varying dataset. By employing specialized data structures and schemes of data distribution, our system can leverage a large number of networked computers as parallel data servers, and guarantees a near optimal load-balance. We demonstrate our system of scalable data servers using two large time-varying simulation datasets",
                "AuthorNamesDeduped": "Markus Glatter;Jian Huang 0007;Jinzhu Gao;Colin Mollenhour",
                "AuthorNames": "Markus Glatter;Jian Huang;Jinzhu Gao;Colin Mollenhour",
                "AuthorAffiliation": "University of Tennessee, USA;University of Tennessee, USA;University of Tennessee, USA;Oak Ridge National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.2005.1532794;10.1109/visual.1999.809910;10.1109/visual.1996.568121;10.1109/visual.2003.1250412;10.1109/visual.2001.964519;10.1109/visual.1998.745311;10.1109/visual.2000.885698",
                "AuthorKeywords": "Parallel and distributed volume visualization, large Data Set Visualization, multi-variate Visualization, volume Visualization",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 248,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2313,
                "i": [
                    2313
                ]
            }
        },
        {
            "name": "Markus Glatter",
            "value": 38,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1640,
            "x": -360.0244753987871,
            "y": 185.55963223133455,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Scalable Data Servers for Large Multivariate Volume Visualization",
                "DOI": "10.1109/tvcg.2006.175",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.175",
                "FirstPage": 1291,
                "LastPage": 1298,
                "PaperType": "J",
                "Abstract": "Volumetric datasets with multiple variables on each voxel over multiple time steps are often complex, especially when considering the exponentially large attribute space formed by the variables in combination with the spatial and temporal dimensions. It is intuitive, practical, and thus often desirable, to interactively select a subset of the data from within that high-dimensional value space for efficient visualization. This approach is straightforward to implement if the dataset is small enough to be stored entirely in-core. However, to handle datasets sized at hundreds of gigabytes and beyond, this simplistic approach becomes infeasible and thus, more sophisticated solutions are needed. In this work, we developed a system that supports efficient visualization of an arbitrary subset, selected by range-queries, of a large multivariate time-varying dataset. By employing specialized data structures and schemes of data distribution, our system can leverage a large number of networked computers as parallel data servers, and guarantees a near optimal load-balance. We demonstrate our system of scalable data servers using two large time-varying simulation datasets",
                "AuthorNamesDeduped": "Markus Glatter;Jian Huang 0007;Jinzhu Gao;Colin Mollenhour",
                "AuthorNames": "Markus Glatter;Jian Huang;Jinzhu Gao;Colin Mollenhour",
                "AuthorAffiliation": "University of Tennessee, USA;University of Tennessee, USA;University of Tennessee, USA;Oak Ridge National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2005.1532792;10.1109/visual.2005.1532794;10.1109/visual.1999.809910;10.1109/visual.1996.568121;10.1109/visual.2003.1250412;10.1109/visual.2001.964519;10.1109/visual.1998.745311;10.1109/visual.2000.885698",
                "AuthorKeywords": "Parallel and distributed volume visualization, large Data Set Visualization, multi-variate Visualization, volume Visualization",
                "AminerCitationCount": 35,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 25,
                "DownloadsXplore": 248,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2313,
                "i": [
                    2313
                ]
            }
        },
        {
            "name": "Alark Joshi",
            "value": 61,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1641,
            "x": 140.16981507987398,
            "y": -380.1347431378431,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Texture-based feature tracking for effective time-varying data visualization",
                "DOI": "10.1109/tvcg.2007.70599",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70599",
                "FirstPage": 1472,
                "LastPage": 1479,
                "PaperType": "J",
                "Abstract": "Analyzing, visualizing, and illustrating changes within time-varying volumetric data is challenging due to the dynamic changes occurring between timesteps. The changes and variations in computational fluid dynamic volumes and atmospheric 3D datasets do not follow any particular transformation. Features within the data move at different speeds and directions making the tracking and visualization of these features a difficult task. We introduce a texture-based feature tracking technique to overcome some of the current limitations found in the illustration and visualization of dynamic changes within time-varying volumetric data. Our texture-based technique tracks various features individually and then uses the tracked objects to better visualize structural changes. We show the effectiveness of our texture-based tracking technique with both synthetic and real world time-varying data. Furthermore, we highlight the specific visualization, annotation, registration, and feature isolation benefits of our technique. For instance, we show how our texture-based tracking can lead to insightful visualizations of time-varying data. Such visualizations, more than traditional visualization techniques, can assist domain scientists to explore and understand dynamic changes.",
                "AuthorNamesDeduped": "Jesus J. Caban;Alark Joshi;Penny Rheingans",
                "AuthorNames": "Jesus Caban;Alark Joshi;Penny Rheingans",
                "AuthorAffiliation": "University of Maryland, Baltimore, USA;University of Maryland, Baltimore, USA;University of Maryland, Baltimore, USA",
                "InternalReferences": "0.1109/visual.2003.1250374;10.1109/visual.2000.885694;10.1109/visual.1998.745288;10.1109/visual.1996.567807;10.1109/visual.2000.885697",
                "AuthorKeywords": "Feature tracking, texture-based analysis, flow visualization, time-varying data, visualization",
                "AminerCitationCount": 70,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 560,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2186,
                "i": [
                    2186
                ]
            }
        },
        {
            "name": "David S. Thompson",
            "value": 44,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1642,
            "x": 153.46719489135577,
            "y": 375.0970808899725,
            "vy": 0,
            "vx": 0,
            "r": 1.0506620610247552,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Vortex Visualization for Practical Engineering Applications",
                "DOI": "10.1109/tvcg.2006.201",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.201",
                "FirstPage": 957,
                "LastPage": 964,
                "PaperType": "J",
                "Abstract": "In order to understand complex vortical flows in large data sets, we must be able to detect and visualize vortices in an automated fashion. In this paper, we present a feature-based vortex detection and visualization technique that is appropriate for large computational fluid dynamics data sets computed on unstructured meshes. In particular, we focus on the application of this technique to visualization of the flow over a serrated wing and the flow field around a spinning missile with dithering canards. We have developed a core line extraction technique based on the observation that vortex cores coincide with local extrema in certain scalar fields. We also have developed a novel technique to handle complex vortex topology that is based on k-means clustering. These techniques facilitate visualization of vortices in simulation data that may not be optimally resolved or sampled. Results are included that highlight the strengths and weaknesses of our approach. We conclude by describing how our approach can be improved to enhance robustness and expand its range of applicability",
                "AuthorNamesDeduped": "Monika Jankun-Kelly;Ming Jiang 0005;David S. Thompson;Raghu Machiraju",
                "AuthorNames": "Monika Jankun-Kelly;Ming Jiang;David Thompson;Raghu Machiraju",
                "AuthorAffiliation": "Graduate Research Assistant at the Computational Simulation and Design Center, Mississippi State University, USA;Postdoctoral Researcher at the Center for Applied Scientific Computing, Lawrence Livemore National Laboratory, USA;Department of AeroSpace Engineering, Mississippi State University, USA;Department of Computer Science and Engineering, Ohio State Uinversity, USA",
                "InternalReferences": "0.1109/visual.1997.663894;10.1109/visual.2002.1183789;10.1109/visual.2005.1532830;10.1109/visual.1998.745296;10.1109/visual.1998.745288;10.1109/visual.1999.809896",
                "AuthorKeywords": "Vortex detection, vortex visualization, feature mining",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 552,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2301,
                "i": [
                    2301
                ]
            }
        },
        {
            "name": "Hongwei Li",
            "value": 35,
            "numPapers": 23,
            "cluster": "2",
            "visible": 1,
            "index": 1643,
            "x": -366.6479326918809,
            "y": -172.97194412033974,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Visualizing Large-Scale Uncertainty in Astrophysical Data",
                "DOI": "10.1109/tvcg.2007.70530",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70530",
                "FirstPage": 1640,
                "LastPage": 1647,
                "PaperType": "J",
                "Abstract": "Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.",
                "AuthorNamesDeduped": "Hongwei Li;Chi-Wing Fu;Yinggang Li;Andrew J. Hanson",
                "AuthorNames": "Hongwei Li;Chi-Wing Fu;Yinggang Li;Andrew Hanson",
                "AuthorAffiliation": "Hong Kong University of Science & Technology, Hong Kong, China;Hong Kong University of Science & Technology, Hong Kong, China;Indiana University, Bloomington, USA;Indiana University, Bloomington, USA",
                "InternalReferences": "0.1109/visual.2000.885679;10.1109/visual.2002.1183769;10.1109/visual.2003.1250404;10.1109/visual.2005.1532807;10.1109/tvcg.2006.155;10.1109/tvcg.2006.176;10.1109/visual.2004.25;10.1109/visual.2005.1532853;10.1109/visual.1996.568116;10.1109/visual.2002.1183824;10.1109/visual.1996.568105;10.1109/infvis.2002.1173145;10.1109/visual.2005.1532803;10.1109/visual.2004.18",
                "AuthorKeywords": "Uncertainty visualization, large spatial scale, interstellar data, astronomy",
                "AminerCitationCount": 63,
                "CitationCountCrossRef": 32,
                "PubsCitedCrossRef": 50,
                "DownloadsXplore": 614,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2189,
                "i": [
                    2189
                ]
            }
        },
        {
            "name": "Ka-Kei Chung",
            "value": 13,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 1644,
            "x": 387.31342235703386,
            "y": -120.15953088324643,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2008,
                "Title": "Relation-Aware Volume Exploration Pipeline",
                "DOI": "10.1109/tvcg.2008.159",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.159",
                "FirstPage": 1683,
                "LastPage": 1690,
                "PaperType": "J",
                "Abstract": "Volume exploration is an important issue in scientific visualization. Research on volume exploration has been focused on revealing hidden structures in volumetric data. While the information of individual structures or features is useful in practice, spatial relations between structures are also important in many applications and can provide further insights into the data. In this paper, we systematically study the extraction, representation,exploration, and visualization of spatial relations in volumetric data and propose a novel relation-aware visualization pipeline for volume exploration. In our pipeline, various relations in the volume are first defined and measured using region connection calculus (RCC) and then represented using a graph interface called relation graph. With RCC and the relation graph, relation query and interactive exploration can be conducted in a comprehensive and intuitive way. The visualization process is further assisted with relation-revealing viewpoint selection and color and opacity enhancement. We also introduce a quality assessment scheme which evaluates the perception of spatial relations in the rendered images. Experiments on various datasets demonstrate the practical use of our system in exploratory visualization.",
                "AuthorNamesDeduped": "Ming-Yuen Chan;Huamin Qu;Ka-Kei Chung;Wai-Ho Mak;Yingcai Wu",
                "AuthorNames": "Ming-Yuen Chan;Huamin Qu;Ka-Kei Chung;Wai-Ho Mak;Yingcai Wu",
                "AuthorAffiliation": "Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong, China;Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Kowloon, Hong Kong, China",
                "InternalReferences": "0.1109/tvcg.2007.70584;10.1109/tvcg.2007.70515;10.1109/tvcg.2006.144;10.1109/visual.1999.809871;10.1109/tvcg.2007.70535;10.1109/tvcg.2007.70576;10.1109/visual.2000.885694;10.1109/infvis.2003.1249009;10.1109/tvcg.2007.70555;10.1109/visual.2005.1532835;10.1109/visual.2005.1532788;10.1109/tvcg.2007.70591;10.1109/visual.2005.1532834;10.1109/visual.2005.1532856;10.1109/tvcg.2007.70572;10.1109/visual.2005.1532833",
                "AuthorKeywords": "Exploratory Visualization, Relation-Based Visualization, Visualization Pipeline",
                "AminerCitationCount": 26,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 253,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2080,
                "i": [
                    2080
                ]
            }
        },
        {
            "name": "Philippas Tsigas",
            "value": 81,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1645,
            "x": -204.48841192282268,
            "y": 350.33482468815737,
            "vy": 0,
            "vx": 0,
            "r": 1.0932642487046633,
            "node": {
                "Conference": "VAST",
                "Year": 2007,
                "Title": "DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data",
                "DOI": "10.1109/vast.2007.4389013",
                "Link": "http://dx.doi.org/10.1109/VAST.2007.4389013",
                "FirstPage": 187,
                "LastPage": 194,
                "PaperType": "C",
                "Abstract": "Supporting visual analytics of multiple large-scale multidimensional datasets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such datasets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a dataset displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to outsiders. Towards this end, the DataMeadow has a direct manipulation interface for selection, filtering, and creation of sets, subsets, and data dependencies using both simple and complex mouse gestures. We have evaluated our system using a qualitative expert review involving two researchers working in the area. Results from this review are favorable for our new method.",
                "AuthorNamesDeduped": "Niklas Elmqvist;John T. Stasko;Philippas Tsigas",
                "AuthorNames": "Niklas Elmqvist;John Stasko;Philippas Tsigas",
                "AuthorAffiliation": "INRIA/LRI, University of Paris-Sud 11, France;Georgia Institute of Technology, USA;Chalmers University of Technology, Sweden",
                "InternalReferences": "0.1109/infvis.2000.885086;10.1109/visual.1990.146386;10.1109/visual.1991.175815;10.1109/infvis.2003.1249026;10.1109/vast.2006.261439;10.1109/infvis.2005.1532139;10.1109/vast.2006.261424;10.1109/vast.2006.261452;10.1109/infvis.2005.1532136;10.1109/infvis.1997.636793;10.1109/vast.2006.261422;10.1109/vast.2006.261430;10.1109/infvis.2003.1249016;10.1109/visual.1999.809866;10.1109/visual.1990.146375",
                "AuthorKeywords": "Multivariate data, visual analytics, parallel coordinates, dynamic queries, iterative analysis, starplot, small multiples",
                "AminerCitationCount": 134,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 37,
                "DownloadsXplore": 497,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2124,
                "i": [
                    2124
                ]
            }
        },
        {
            "name": "Sung-Eui Yoon",
            "value": 21,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1646,
            "x": -85.8904696427192,
            "y": -396.57638258543983,
            "vy": 0,
            "vx": 0,
            "r": 1.0241796200345423,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Mesh Layouts for Block-Based Caches",
                "DOI": "10.1109/tvcg.2006.162",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.162",
                "FirstPage": 1213,
                "LastPage": 1220,
                "PaperType": "J",
                "Abstract": "Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block fetching whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechanism, we present novel cache-aware and cache-oblivious layouts of surface and volume meshes that improve the performance of interactive visualization and geometric processing algorithms. Based on a general I/O model, we derive new cache-aware and cache-oblivious metrics that have high correlations with the number of cache misses when accessing a mesh. In addition to guiding the layout process, our metrics can be used to quantify the quality of a layout, e.g. for comparing different layouts of the same mesh and for determining whether a given layout is amenable to significant improvement. We show that layouts of unstructured meshes optimized for our metrics result in improvements over conventional layouts in the performance of visualization applications such as isosurface extraction and view-dependent rendering. Moreover, we improve upon recent cache-oblivious mesh layouts in terms of performance, applicability, and accuracy",
                "AuthorNamesDeduped": "Sung-Eui Yoon;Peter Lindstrom 0001",
                "AuthorNames": "Sung-eui Yoon;Peter Lindstrom",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, USA;Lawrence Livemore National Laboratory, USA",
                "InternalReferences": "0.1109/visual.2004.86;10.1109/visual.2003.1250408;10.1109/visual.2001.964533;10.1109/visual.1996.568125;10.1109/visual.2005.1532800;10.1109/visual.2002.1183794;10.1109/tvcg.2006.162",
                "AuthorKeywords": "Mesh and graph layouts, cache-aware and cache-oblivious layouts, metrics for cache coherence, data locality",
                "AminerCitationCount": 58,
                "CitationCountCrossRef": 35,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 261,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2294,
                "i": [
                    2294
                ]
            }
        },
        {
            "name": "Mi Chen",
            "value": 12,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1647,
            "x": 331.317023319355,
            "y": 234.47607566402593,
            "vy": 0,
            "vx": 0,
            "r": 1.0138169257340242,
            "node": {
                "Conference": "Vis",
                "Year": 2007,
                "Title": "Illustrative Deformation for Data Exploration",
                "DOI": "10.1109/tvcg.2007.70565",
                "Link": "http://dx.doi.org/10.1109/TVCG.2007.70565",
                "FirstPage": 1320,
                "LastPage": 1327,
                "PaperType": "J",
                "Abstract": "Much of the visualization research has focused on improving the rendering quality and speed, and enhancing the perceptibility of features in the data. Recently, significant emphasis has been placed on focus+context (F+C) techniques (e.g., fisheye views and magnification lens) for data exploration in addition to viewing transformation and hierarchical navigation. However, most of the existing data exploration techniques rely on the manipulation of viewing attributes of the rendering system or optical attributes of the data objects, with users being passive viewers. In this paper, we propose a more active approach to data exploration, which attempts to mimic how we would explore data if we were able to hold it and interact with it in our hands. This involves allowing the users to physically or actively manipulate the geometry of a data object. While this approach has been traditionally used in applications, such as surgical simulation, where the original geometry of the data objects is well understood by the users, there are several challenges when this approach is generalized for applications, such as flow and information visualization, where there is no common perception as to the normal or natural geometry of a data object. We introduce a taxonomy and a set of transformations especially for illustrative deformation of general data exploration. We present combined geometric or optical illustration operators for focus+context visualization, and examine the best means for preventing the deformed context from being misperceived. We demonstrated the feasibility of this generalization with examples of flow, information and video visualization.",
                "AuthorNamesDeduped": "Carlos D. Correa;Deborah Silver;Mi Chen",
                "AuthorNames": "Carlos Correa;Debora Silver;Mi Chen",
                "AuthorAffiliation": "Department of Electrical and Computer Engineering, State University of New Jersey, USA;Department of Electrical and Computer Engineering, State University of New Jersey, USA;Department of Computer Science, University of Wales, Swansea, UK",
                "InternalReferences": "0.1109/visual.2000.885696;10.1109/tvcg.2006.144;10.1109/visual.2003.1250400;10.1109/tvcg.2006.152;10.1109/visual.2003.1250401;10.1109/infvis.2004.59;10.1109/visual.2002.1183777;10.1109/tvcg.2006.140;10.1109/visual.2001.964519;10.1109/visual.2004.48;10.1109/visual.2000.885694;10.1109/visual.2005.1532856;10.1109/visual.2005.1532818",
                "AuthorKeywords": "Volume deformation, focus+context visualization, interaction techniques",
                "AminerCitationCount": 72,
                "CitationCountCrossRef": 31,
                "PubsCitedCrossRef": 31,
                "DownloadsXplore": 441,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2192,
                "i": [
                    2192
                ]
            }
        },
        {
            "name": "Allen R. Martin",
            "value": 104,
            "numPapers": 2,
            "cluster": "6",
            "visible": 1,
            "index": 1648,
            "x": -402.811368774103,
            "y": 50.921519874543776,
            "vy": 0,
            "vx": 0,
            "r": 1.1197466896948762,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "High Dimensional Brushing for Interactive Exploration of Multivariate Data",
                "DOI": "10.1109/visual.1995.485139",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.485139",
                "FirstPage": 271,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": null,
                "AuthorNamesDeduped": "Allen R. Martin;Matthew O. Ward",
                "AuthorNames": "A.R. Martin;M.O. Ward",
                "AuthorAffiliation": "Advanced Graphics Division, Silicon Graphics, Inc., Mountain View, CA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA",
                "InternalReferences": "0.1109/visual.1990.146386;10.1109/visual.1990.146402;10.1109/visual.1994.346302",
                "AuthorKeywords": null,
                "AminerCitationCount": 340,
                "CitationCountCrossRef": 79,
                "PubsCitedCrossRef": 9,
                "DownloadsXplore": 212,
                "Award": "TT",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3415,
                "i": [
                    3415
                ]
            }
        },
        {
            "name": "Ricardo S. Avila",
            "value": 75,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1649,
            "x": 262.7032178648721,
            "y": -309.73701639203784,
            "vy": 0,
            "vx": 0,
            "r": 1.0863557858376511,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "A hardware acceleration method for volumetric ray tracing",
                "DOI": "10.1109/visual.1995.480792",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.480792",
                "FirstPage": 27,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "We present an acceleration method for volumetric ray tracing which utilizes standard graphics hardware without compromising image accuracy. The graphics hardware is employed to identify those segments of each ray that could possibly contribute to the final image. A volumetric ray tracing algorithm is then used to compute the final image, traversing only the identified segments of the rays. This technique can be used to render volumetric isosurfaces as well as translucent volumes. In addition, this method can accelerate the traversal of shadow rays when performing recursive ray tracing.",
                "AuthorNamesDeduped": "Lisa M. Sobierajski;Ricardo S. Avila",
                "AuthorNames": "L.M. Sobierajski;R.S. Avila",
                "AuthorAffiliation": "GE Corporate Research and Development Center, Schenectady, NY, USA;GE Corporate Research and Development Center, Schenectady, NY, USA",
                "InternalReferences": "0.1109/visual.1994.346320;10.1109/visual.1995.485154;10.1109/visual.1990.146391;10.1109/visual.1993.398854;10.1109/visual.1994.346340;10.1109/visual.1992.235231",
                "AuthorKeywords": null,
                "AminerCitationCount": 68,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 94,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3433,
                "i": [
                    3433
                ]
            }
        },
        {
            "name": "Lisa M. Sobierajski",
            "value": 75,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1650,
            "x": 15.519873601399789,
            "y": 405.9669118578466,
            "vy": 0,
            "vx": 0,
            "r": 1.0863557858376511,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "A hardware acceleration method for volumetric ray tracing",
                "DOI": "10.1109/visual.1995.480792",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.480792",
                "FirstPage": 27,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "We present an acceleration method for volumetric ray tracing which utilizes standard graphics hardware without compromising image accuracy. The graphics hardware is employed to identify those segments of each ray that could possibly contribute to the final image. A volumetric ray tracing algorithm is then used to compute the final image, traversing only the identified segments of the rays. This technique can be used to render volumetric isosurfaces as well as translucent volumes. In addition, this method can accelerate the traversal of shadow rays when performing recursive ray tracing.",
                "AuthorNamesDeduped": "Lisa M. Sobierajski;Ricardo S. Avila",
                "AuthorNames": "L.M. Sobierajski;R.S. Avila",
                "AuthorAffiliation": "GE Corporate Research and Development Center, Schenectady, NY, USA;GE Corporate Research and Development Center, Schenectady, NY, USA",
                "InternalReferences": "0.1109/visual.1994.346320;10.1109/visual.1995.485154;10.1109/visual.1990.146391;10.1109/visual.1993.398854;10.1109/visual.1994.346340;10.1109/visual.1992.235231",
                "AuthorKeywords": null,
                "AminerCitationCount": 68,
                "CitationCountCrossRef": 17,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 94,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3433,
                "i": [
                    3433
                ]
            }
        },
        {
            "name": "Wei Qiao",
            "value": 51,
            "numPapers": 19,
            "cluster": "6",
            "visible": 1,
            "index": 1651,
            "x": -285.7571080505637,
            "y": -288.9513370769869,
            "vy": 0,
            "vx": 0,
            "r": 1.0587219343696028,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Projecting tetrahedra without rendering artifacts",
                "DOI": "10.1109/visual.2004.85",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.85",
                "FirstPage": 27,
                "LastPage": 34,
                "PaperType": "C",
                "Abstract": "Hardware-accelerated direct volume rendering of unstructured volumetric meshes is often based on tetrahedral cell projection, in particular, the projected tetrahedra (PT) algorithm and its variants. Unfortunately, even implementations of the most advanced variants of the PT algorithm are very prone to rendering artifacts. In this work, we identify linear interpolation in screen coordinates as a cause for significant rendering artifacts and implement the correct perspective interpolation for the PT algorithm with programmable graphics hardware. We also demonstrate how to use features of modern graphics hardware to improve the accuracy of the coloring of individual tetrahedra and the compositing of the resulting colors, in particular, by employing a logarithmic scale for the preintegrated color lookup table, using textures with high color resolution, rendering to floating-point color buffers, and alpha dithering. Combined with a correct visibility ordering, these techniques result in the first implementation of the PT algorithm without objectionable rendering artifacts. Apart from the important improvement in rendering quality, our approach also provides a test bed for different implementations of the PT algorithm that allows us to study the particular rendering artifacts introduced by these variants.",
                "AuthorNamesDeduped": "Martin Kraus 0001;Wei Qiao;David S. Ebert",
                "AuthorNames": "M. Kraus;Wei Qiao;D.S. Ebert",
                "AuthorAffiliation": "Purdue University, USA;Purdue University, USA;Purdue University, USA",
                "InternalReferences": "0.1109/visual.2000.885683;10.1109/visual.2003.1250390;10.1109/visual.2001.964514;10.1109/visual.2003.1250384",
                "AuthorKeywords": "volume visualization, volume rendering, cell projection, projected tetrahedra, perspective interpolation, dithering, programmable graphics hardware",
                "AminerCitationCount": 59,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 90,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2539,
                "i": [
                    2539
                ]
            }
        },
        {
            "name": "Gordon Erlebacher",
            "value": 61,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1652,
            "x": 406.0150892075167,
            "y": 20.04363579324369,
            "vy": 0,
            "vx": 0,
            "r": 1.0702360391479562,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Hardware-accelerated texture advection for unsteady flow visualization",
                "DOI": "10.1109/visual.2000.885689",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885689",
                "FirstPage": 155,
                "LastPage": 162,
                "PaperType": "C",
                "Abstract": "We present a novel hardware-accelerated texture advection algorithm to visualize the motion of two-dimensional unsteady flows. Making use of several proposed extensions to the OpenGL-1.2 specification, we demonstrate animations of over 65,000 particles at 2 frames/sec on an SGI Octane with EMXI graphics. High image quality is achieved by careful attention to edge effects, noise frequency, and image enhancement. We provide a detailed description of the hardware implementation, including temporal and spatial coherence techniques, dye advection techniques, and feature extraction.",
                "AuthorNamesDeduped": "Bruno Jobard;Gordon Erlebacher;M. Yousuff Hussaini",
                "AuthorNames": "B. Jobard;G. Erlebacher;M.Y. Hussaini",
                "AuthorAffiliation": "Dirac Science Library, Tallahassee, FL, USA;Dirac Science Library, Tallahassee, FL, USA;Dirac Science Library, Tallahassee, FL, USA",
                "InternalReferences": "0.1109/visual.1995.480817;10.1109/visual.1998.745324",
                "AuthorKeywords": "unsteady, vector field, pathlines, streakline, advection, texture, hardware, OpenGL",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 87,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2979,
                "i": [
                    2979
                ]
            }
        },
        {
            "name": "Yuan Zhou",
            "value": 26,
            "numPapers": 27,
            "cluster": "6",
            "visible": 1,
            "index": 1653,
            "x": -313.01683937617634,
            "y": 259.55819822719735,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Interactive Point-Based Rendering of Higher-Order Tetrahedral Data",
                "DOI": "10.1109/tvcg.2006.154",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.154",
                "FirstPage": 1229,
                "LastPage": 1236,
                "PaperType": "J",
                "Abstract": "Computational simulations frequently generate solutions defined over very large tetrahedral volume meshes containing many millions of elements. Furthermore, such solutions may often be expressed using non-linear basis functions. Certain solution techniques, such as discontinuous Galerkin methods, may even produce non-conforming meshes. Such data is difficult to visualize interactively, as it is far too large to fit in memory and many common data reduction techniques, such as mesh simplification, cannot be applied to non-conforming meshes. We introduce a point-based visualization system for interactive rendering of large, potentially non-conforming, tetrahedral meshes. We propose methods for adaptively sampling points from non-linear solution data and for decimating points at run time to fit GPU memory limits. Because these are streaming processes, memory consumption is independent of the input size. We also present an order-independent point rendering method that can efficiently render volumes on the order of 20 million tetrahedra at interactive rates",
                "AuthorNamesDeduped": "Yuan Zhou;Michael Garland",
                "AuthorNames": "Yuan Zhou;Michael Garland",
                "AuthorAffiliation": "Department of Computer Science, University of Illinois, Urbana-Champaign, USA;NVIDIA Corporation, USA",
                "InternalReferences": "0.1109/visual.2003.1250406;10.1109/visual.2005.1532796;10.1109/visual.2005.1532776;10.1109/visual.2005.1532809;10.1109/visual.2003.1250404;10.1109/visual.2002.1183757;10.1109/visual.2002.1183771;10.1109/visual.2004.91;10.1109/visual.2003.1250390;10.1109/visual.1999.809868;10.1109/visual.2004.38;10.1109/visual.2003.1250384;10.1109/visual.2000.885683;10.1109/visual.2002.1183778;10.1109/visual.2005.1532808;10.1109/visual.2003.1250389;10.1109/visual.1995.480790;10.1109/visual.2004.81;10.1109/visual.2005.1532801;10.1109/visual.2004.102",
                "AuthorKeywords": "Interactive large higher-order tetrahedral volume visualization, point-based visualization",
                "AminerCitationCount": 47,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 236,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2306,
                "i": [
                    2306
                ]
            }
        },
        {
            "name": "Michael Garland",
            "value": 140,
            "numPapers": 36,
            "cluster": "6",
            "visible": 1,
            "index": 1654,
            "x": 55.4966059200226,
            "y": -402.9517672518111,
            "vy": 0,
            "vx": 0,
            "r": 1.1611974668969487,
            "node": {
                "Conference": "InfoVis",
                "Year": 2008,
                "Title": "On the Visualization of Social and other Scale-Free Networks",
                "DOI": "10.1109/tvcg.2008.151",
                "Link": "http://dx.doi.org/10.1109/TVCG.2008.151",
                "FirstPage": 1285,
                "LastPage": 1292,
                "PaperType": "J",
                "Abstract": "This paper proposes novel methods for visualizing specifically the large power-law graphs that arise in sociology and the sciences. In such cases a large portion of edges can be shown to be less important and removed while preserving component connectedness and other features (e.g. cliques) to more clearly reveal the networkpsilas underlying connection pathways. This simplification approach deterministically filters (instead of clustering) the graph to retain important node and edge semantics, and works both automatically and interactively. The improved graph filtering and layout is combined with a novel computer graphics anisotropic shading of the dense crisscrossing array of edges to yield a full social network and scale-free graph visualization system. Both quantitative analysis and visual results demonstrate the effectiveness of this approach.",
                "AuthorNamesDeduped": "Yuntao Jia;Jared Hoberock;Michael Garland;John C. Hart",
                "AuthorNames": "Yuntao Jia;Jared Hoberock;Michael Garland;John Hart",
                "AuthorAffiliation": "University of Illinois, USA;University of Illinois, USA;NVidia Corporation;University of Illinois, USA",
                "InternalReferences": "0.1109/visual.2005.1532819;10.1109/tvcg.2006.193;10.1109/infvis.2003.1249011",
                "AuthorKeywords": "Scale-free network, edge filtering, betweenness centrality, anisotropic shading",
                "AminerCitationCount": 135,
                "CitationCountCrossRef": 58,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 1260,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1977,
                "i": [
                    1977
                ]
            }
        },
        {
            "name": "Qingmin Shi",
            "value": 13,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1655,
            "x": 231.3384179961205,
            "y": 334.71261756774607,
            "vy": 0,
            "vx": 0,
            "r": 1.0149683362118596,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Isosurface Extraction and Spatial filtering using Persistent Octree (POT)",
                "DOI": "10.1109/tvcg.2006.157",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.157",
                "FirstPage": 1283,
                "LastPage": 1290,
                "PaperType": "J",
                "Abstract": "We propose a novel persistent octree (POT) indexing structure for accelerating isosurface extraction and spatial filtering from volumetric data. This data structure efficiently handles a wide range of visualization problems such as the generation of view-dependent isosurfaces, ray tracing, and isocontour slicing for high dimensional data. POT can be viewed as a hybrid data structure between the interval tree and the branch-on-need octree (BONO) in the sense that it achieves the asymptotic bound of the interval tree for identifying the active cells corresponding to an isosurface and is more efficient than BONO for handling spatial queries. We encode a compact octree for each isovalue. Each such octree contains only the corresponding active cells, in such a way that the combined structure has linear space. The inherent hierarchical structure associated with the active cells enables very fast filtering of the active cells based on spatial constraints. We demonstrate the effectiveness of our approach by performing view-dependent isosurfacing on a wide variety of volumetric data sets and 4D isocontour slicing on the time-varying Richtmyer-Meshkov instability dataset",
                "AuthorNamesDeduped": "Qingmin Shi;Joseph F. JáJá",
                "AuthorNames": "Qingmin Shi;Joseph JaJa",
                "AuthorAffiliation": "Institute for Advanced Computer Studies and the Department of Electrical and Computer Engineering, University of Maryland, College Park, USA;Institute for Advanced Computer Studies and the Department of Electrical and Computer Engineering, University of Maryland, College Park, USA",
                "InternalReferences": "0.1109/visual.1998.745713;10.1109/visual.1991.175780;10.1109/visual.1998.745299;10.1109/visual.1996.568121;10.1109/visual.1999.809910;10.1109/visual.2002.1183810;10.1109/visual.1998.745298;10.1109/visual.1999.809879;10.1109/visual.2004.52;10.1109/visual.1998.745300;10.1109/visual.2003.1250373",
                "AuthorKeywords": "scientific visualization, isosurface extraction, indexing",
                "AminerCitationCount": 29,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 30,
                "DownloadsXplore": 260,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2319,
                "i": [
                    2319
                ]
            }
        },
        {
            "name": "Joseph F. JáJá",
            "value": 32,
            "numPapers": 25,
            "cluster": "6",
            "visible": 1,
            "index": 1656,
            "x": -396.79666194235364,
            "y": -90.56715227611788,
            "vy": 0,
            "vx": 0,
            "r": 1.036845135290731,
            "node": {
                "Conference": "SciVis",
                "Year": 2012,
                "Title": "Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms",
                "DOI": "10.1109/tvcg.2012.231",
                "Link": "http://dx.doi.org/10.1109/TVCG.2012.231",
                "FirstPage": 2355,
                "LastPage": 2363,
                "PaperType": "J",
                "Abstract": "Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.",
                "AuthorNamesDeduped": "Cheuk Yiu Ip;Amitabh Varshney;Joseph F. JáJá",
                "AuthorNames": "Cheuk Yiu Ip;Amitabh Varshney;Joseph JaJa",
                "AuthorAffiliation": "Institute for Advanced Computer Studies, University of Maryland, College Park, USA;Institute for Advanced Computer Studies, University of Maryland, College Park, USA;Institute for Advanced Computer Studies, University of Maryland, College Park, USA",
                "InternalReferences": "0.1109/tvcg.2010.132;10.1109/tvcg.2009.185;10.1109/visual.1999.809932;10.1109/visual.2005.1532795;10.1109/visual.2003.1250370;10.1109/tvcg.2010.208;10.1109/tvcg.2008.162;10.1109/tvcg.2011.248;10.1109/tvcg.2011.173;10.1109/tvcg.2006.174;10.1109/tvcg.2011.231;10.1109/tvcg.2007.70590;10.1109/tvcg.2009.197;10.1109/tvcg.2006.148;10.1109/tvcg.2009.120;10.1109/visual.2003.1250369",
                "AuthorKeywords": "Volume exploration, volume classification, normalized cut, Information-guided exploration",
                "AminerCitationCount": 79,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 46,
                "DownloadsXplore": 711,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1442,
                "i": [
                    1442
                ]
            }
        },
        {
            "name": "Elena Fanea",
            "value": 30,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1657,
            "x": 353.86950585425376,
            "y": -201.31163112564124,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "InfoVis",
                "Year": 2005,
                "Title": "An interactive 3D integration of parallel coordinates and star glyphs",
                "DOI": "10.1109/infvis.2005.1532141",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2005.1532141",
                "FirstPage": 149,
                "LastPage": 156,
                "PaperType": "C",
                "Abstract": "Parallel coordinates are a powerful method for visualizing multidimensional data but, when applied to large data sets, they become cluttered and difficult to read. Star glyphs, on the other hand, can be used to display either the attributes of a data item or the values across all items for a single attribute. Star glyphs may readily provide a quick impression; however, since the full data set require multiple glyphs, overall readings are more difficult. We present parallel glyphs, an interactive integration of the visual representations of parallel coordinates and star glyphs that utilizes the advantages of both representations to offset the disadvantages they have separately. We discuss the role of uniform and stepped colour scales in the visual comparison of non-adjacent items and star glyphs. Parallel glyphs provide capabilities for focus-in-context exploration using two types of lenses and interactions specific to the 3D space.",
                "AuthorNamesDeduped": "Elena Fanea;Sheelagh Carpendale;Tobias Isenberg 0001",
                "AuthorNames": "E. Fanea;S. Carpendale;T. Isenberg",
                "AuthorAffiliation": "Dept. of Comput. Sci., Calgary Univ., Alta., Canada;Department of Computer Science, University of Calgary, Canada;Dept. of Comput. Sci., Calgary Univ., Alta., Canada",
                "InternalReferences": "0.1109/visual.1995.485139;10.1109/infvis.2003.1249024;10.1109/infvis.2003.1249008;10.1109/infvis.2002.1173157;10.1109/infvis.2004.71;10.1109/infvis.2003.1249015;10.1109/infvis.2004.15;10.1109/infvis.2004.68;10.1109/visual.1999.809866;10.1109/infvis.2002.1173151;10.1109/visual.1994.346302;10.1109/visual.1997.663866;10.1109/visual.1990.146402",
                "AuthorKeywords": "Parallel Glyphs, parallel coordinates, star glyphs, multi-dimensional data sets, 3D visualization",
                "AminerCitationCount": 124,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 835,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2344,
                "i": [
                    2344
                ]
            }
        },
        {
            "name": "Bruno Jobard",
            "value": 78,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 1658,
            "x": -124.98599368438717,
            "y": 387.59321637862337,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Image space based visualization of unsteady flow on surfaces",
                "DOI": "10.1109/visual.2003.1250364",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250364",
                "FirstPage": 131,
                "LastPage": 138,
                "PaperType": "C",
                "Abstract": "We present a technique for direct visualization of unsteady flow on surfaces from computational fluid dynamics. The method generates dense representations of time-dependent vector fields with high spatio-temporal correlation using both Lagrangian-Eulerian advection and image based flow visualization as its foundation. While the 3D vector fields are associated with arbitrary triangular surface meshes, the generation and advection of texture properties is confined to image space. Frame rates of up to 20 frames per second are realized by exploiting graphics card hardware. We apply this algorithm to unsteady flow on boundary surfaces of, large, complex meshes from computational fluid dynamics composed of more than 250,000 polygons, dynamic meshes with time-dependent geometry and topology, as well as medical data.",
                "AuthorNamesDeduped": "Robert S. Laramee;Bruno Jobard;Helwig Hauser",
                "AuthorNames": "R.S. Laramee;B. Jobard;H. Hauser",
                "AuthorAffiliation": "VRVis Research Center, Austria;University of Pau, France;VRVis Research Center, Austria",
                "InternalReferences": "0.1109/visual.2001.964493;10.1109/visual.1994.346313;10.1109/visual.1995.480817",
                "AuthorKeywords": "Unsteady flow visualization, computational fluid dynamics (CFD), surface representation, texture mapping",
                "AminerCitationCount": 160,
                "CitationCountCrossRef": 45,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 327,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2668,
                "i": [
                    2668
                ]
            }
        },
        {
            "name": "David M. Weinstein",
            "value": 95,
            "numPapers": 9,
            "cluster": "11",
            "visible": 1,
            "index": 1659,
            "x": -169.70581316186193,
            "y": -370.33759865705133,
            "vy": 0,
            "vx": 0,
            "r": 1.109383995394358,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Hue-balls and lit-tensors for direct volume rendering of diffusion tensor fields",
                "DOI": "10.1109/visual.1999.809886",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809886",
                "FirstPage": 183,
                "LastPage": 524,
                "PaperType": "C",
                "Abstract": "With the development of magnetic resonance imaging techniques for acquiring diffusion tensor data from biological tissue, visualization of tensor data has become a new research focus. The diffusion tensor describes the directional dependence of water molecules' diffusion and can be represented by a three-by-three symmetric matrix. Visualization of second-order tensor fields is difficult because the data values have many degrees of freedom. Existing visualization techniques are best at portraying the tensor's properties over a two-dimensional field, or over a small subset of locations within a three-dimensional field. A means of visualizing the global structure in measured diffusion tensor data is needed. We propose the use of direct volume rendering, with novel approaches for the tensors' coloring, lighting, and opacity assignment. Hue-balls use a two-dimensional colormap on the unit sphere to illustrate the tensor's action as a linear operator. Lit-tensors provide a lighting model for tensors which includes as special cases both lit-lines (from streamline vector visualization) and standard Phong surface lighting. Together with an opacity assignment based on a novel two-dimensional barycentric space of anisotropy, these methods are shown to produce informative renderings of measured diffusion tensor data from the human brain.",
                "AuthorNamesDeduped": "Gordon L. Kindlmann;David M. Weinstein",
                "AuthorNames": "G. Kindlmann;D. Weinstein",
                "AuthorAffiliation": "Scientific Computing and Imaging, Department of Computer Science, University of Utah, USA;Scientific Computing and Imaging, Department of Computer Science, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1990.146373;10.1109/visual.1992.235193;10.1109/visual.1996.567777;10.1109/visual.1998.745294",
                "AuthorKeywords": null,
                "AminerCitationCount": 143,
                "CitationCountCrossRef": 47,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 109,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3060,
                "i": [
                    3060
                ]
            }
        },
        {
            "name": "Peter Hastreiter",
            "value": 76,
            "numPapers": 30,
            "cluster": "11",
            "visible": 1,
            "index": 1660,
            "x": 375.4082963113266,
            "y": 158.48852028026255,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Hybrid Visualization for White Matter Tracts using Triangle Strips and Point Sprites",
                "DOI": "10.1109/tvcg.2006.151",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.151",
                "FirstPage": 1181,
                "LastPage": 1188,
                "PaperType": "J",
                "Abstract": "Diffusion tensor imaging is of high value in neurosurgery, providing information about the location of white matter tracts in the human brain. For their reconstruction, streamline techniques commonly referred to as fiber tracking model the underlying fiber structures and have therefore gained interest. To meet the requirements of surgical planning and to overcome the visual limitations of line representations, a new real-time visualization approach of high visual quality is introduced. For this purpose, textured triangle strips and point sprites are combined in a hybrid strategy employing GPU programming. The triangle strips follow the fiber streamlines and are textured to obtain a tube-like appearance. A vertex program is used to orient the triangle strips towards the camera. In order to avoid triangle flipping in case of fiber segments where the viewing and segment direction are parallel, a correct visual representation is achieved in these areas by chains of point sprites. As a result, high quality visualization similar to tubes is provided allowing for interactive multimodal inspection. Overall, the presented approach is faster than existing techniques of similar visualization quality and at the same time allows for real-time rendering of dense bundles encompassing a high number of fibers, which is of high importance for diagnosis and surgical planning",
                "AuthorNamesDeduped": "Dorit Merhof;Markus Sonntag;Frank Enders;Christopher Nimsky;Peter Hastreiter;Günther Greiner",
                "AuthorNames": "Dorit Merhof;Markus Sonntag;Frank Enders;Christopher Nimsky;Peter Hastreiter;Guenther Greiner",
                "AuthorAffiliation": "Department of Neurosurgery, University of Erlangen, Germany;Computer Graphics Group, University of Erlangen, Germany;Computer Graphics Group, Univ. Erlangen, and Dept. of Neurosurgery, Univ. Erlangen;Department of Neurosurgery, University of Erlangen, Germany;Computer Graphics Group, Univ. Erlangen, and Dept. of Neurosurgery, Univ. Erlangen;Computer Graphics Group, University of Erlangen, Germany",
                "InternalReferences": "0.1109/visual.2005.1532859;10.1109/visual.2005.1532772;10.1109/visual.2002.1183799;10.1109/visual.2005.1532773;10.1109/visual.2005.1532778;10.1109/visual.1996.567777;10.1109/visual.2005.1532779;10.1109/visual.2004.30",
                "AuthorKeywords": "Diffusion tensor data, fiber tracking, streamline visualization",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 439,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2293,
                "i": [
                    2293
                ]
            }
        },
        {
            "name": "Balázs Csébfalvi",
            "value": 16,
            "numPapers": 10,
            "cluster": "6",
            "visible": 1,
            "index": 1661,
            "x": -383.9874233116411,
            "y": 136.7613203302985,
            "vy": 0,
            "vx": 0,
            "r": 1.0184225676453655,
            "node": {
                "Conference": "Vis",
                "Year": 2003,
                "Title": "Monte Carlo volume rendering",
                "DOI": "10.1109/visual.2003.1250406",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2003.1250406",
                "FirstPage": 449,
                "LastPage": 456,
                "PaperType": "C",
                "Abstract": "In this paper a novel volume-rendering technique based on Monte Carlo integration is presented. As a result of a preprocessing, a point cloud of random samples is generated using a normalized continuous reconstruction of the volume as a probability density function. This point cloud is projected onto the image plane, and to each pixel an intensity value is assigned which is proportional to the number of samples projected onto the corresponding pixel area. In such a way a simulated X-ray image of the volume can be obtained. Theoretically, for a fixed image resolution, there exists an M number of samples such that the average standard deviation of the estimated pixel intensities us under the level of quantization error regardless of the number of voxels. Therefore Monte Carlo Volume Rendering (MCVR) is mainly proposed to efficiently visualize large volume data sets. Furthermore, network applications are also supported, since the trade-off between image quality and interactivity can be adapted to the bandwidth of the client/server connection by using progressive refinement.",
                "AuthorNamesDeduped": "Balázs Csébfalvi;László Szirmay-Kalos",
                "AuthorNames": "B. Csebfalvi;L. Szirmay-Kalos",
                "AuthorAffiliation": "Department of Control Engineering and Information Technology, Technical University of Budapest, Hungary;Department of Control Engineering and Information Technology, Technical University of Budapest, Hungary",
                "InternalReferences": "0.1109/visual.2002.1183757;10.1109/visual.2001.964490;10.1109/visual.2002.1183777",
                "AuthorKeywords": " X-ray volume rendering, Monte Carlo integration, importance sampling, progressive refinement",
                "AminerCitationCount": 88,
                "CitationCountCrossRef": 20,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 278,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2689,
                "i": [
                    2689
                ]
            }
        },
        {
            "name": "M. Yousuff Hussaini",
            "value": 35,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1662,
            "x": 190.81682816217963,
            "y": -360.3317056409624,
            "vy": 0,
            "vx": 0,
            "r": 1.040299366724237,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Hardware-accelerated texture advection for unsteady flow visualization",
                "DOI": "10.1109/visual.2000.885689",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885689",
                "FirstPage": 155,
                "LastPage": 162,
                "PaperType": "C",
                "Abstract": "We present a novel hardware-accelerated texture advection algorithm to visualize the motion of two-dimensional unsteady flows. Making use of several proposed extensions to the OpenGL-1.2 specification, we demonstrate animations of over 65,000 particles at 2 frames/sec on an SGI Octane with EMXI graphics. High image quality is achieved by careful attention to edge effects, noise frequency, and image enhancement. We provide a detailed description of the hardware implementation, including temporal and spatial coherence techniques, dye advection techniques, and feature extraction.",
                "AuthorNamesDeduped": "Bruno Jobard;Gordon Erlebacher;M. Yousuff Hussaini",
                "AuthorNames": "B. Jobard;G. Erlebacher;M.Y. Hussaini",
                "AuthorAffiliation": "Dirac Science Library, Tallahassee, FL, USA;Dirac Science Library, Tallahassee, FL, USA;Dirac Science Library, Tallahassee, FL, USA",
                "InternalReferences": "0.1109/visual.1995.480817;10.1109/visual.1998.745324",
                "AuthorKeywords": "unsteady, vector field, pathlines, streakline, advection, texture, hardware, OpenGL",
                "AminerCitationCount": 93,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 18,
                "DownloadsXplore": 87,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2979,
                "i": [
                    2979
                ]
            }
        },
        {
            "name": "Yingmei Lavin",
            "value": 26,
            "numPapers": 1,
            "cluster": "11",
            "visible": 1,
            "index": 1663,
            "x": 102.72906140203261,
            "y": 394.71095746565925,
            "vy": 0,
            "vx": 0,
            "r": 1.0299366724237191,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Feature comparisons of vector fields using Earth mover's distance",
                "DOI": "10.1109/visual.1998.745291",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745291",
                "FirstPage": 103,
                "LastPage": 109,
                "PaperType": "C",
                "Abstract": "A novel approach is introduced to define a quantitative measure of closeness between vector fields. The usefulness of this measurement can be seen when comparing computational and experimental flow fields under the same conditions. Furthermore, its applicability can be extended to more cumbersome tasks, such as navigating through a large database, searching for similar topologies. This new measure relies on the use of critical points, which are a key feature in vector field topology. In order to characterize critical points, /spl alpha/ and /spl beta/ parameters are introduced. They are used to form a closed set of eight unique patterns for simple critical points. These patterns are also basic building blocks for higher-order nonlinear vector fields. In order to study and compare a given set of vector fields, a measure of distance between different patterns of critical points is introduced. The basic patterns of critical points are mapped onto a unit circle in /spl alpha/-/spl beta/ space. The concept of the \"Earth mover's distance\" is used to compute the closeness between various pairs of vector fields, and a nearest-neighbor query is thus produced to illustrate the relationship between the given set of vector fields. This approach quantitatively measures the similarity and dissimilarity between vector fields. It is ideal for data compression of a large flow field, since only the number and types of critical points along with their corresponding /spl alpha/ and /spl beta/ parameters are necessary to reconstruct the whole field. It can also be used to better quantify the changes in time-varying data sets.",
                "AuthorNamesDeduped": "Yingmei Lavin;Rajesh Batra;Lambertus Hesselink",
                "AuthorNames": "Y. Lavin;R. Batra;L. Hesselink",
                "AuthorAffiliation": "Department of Physics, University of Stanford, Stanford, CA, USA;Department of Aeronautics and Astronautics, University of Stanford, Stanford, CA, USA;Department of Electrical Engineering, University of Stanford, Stanford, CA, USA",
                "InternalReferences": "0.1109/visual.1997.663858;10.1109/visual.1997.663857",
                "AuthorKeywords": null,
                "AminerCitationCount": 97,
                "CitationCountCrossRef": 25,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 602,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3163,
                "i": [
                    3163
                ]
            }
        },
        {
            "name": "James Arthur Kohl",
            "value": 31,
            "numPapers": 25,
            "cluster": "6",
            "visible": 1,
            "index": 1664,
            "x": -342.4755254119104,
            "y": -221.72170505802058,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "Distributed data management for large volume visualization",
                "DOI": "10.1109/visual.2005.1532794",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532794",
                "FirstPage": 183,
                "LastPage": 189,
                "PaperType": "C",
                "Abstract": "We propose a distributed data management scheme for large data visualization that emphasizes efficient data sharing and access. To minimize data access time and support users with a variety of local computing capabilities, we introduce an adaptive data selection method based on an \"enhanced time-space partitioning\" (ETSP) tree that assists with effective visibility culling, as well as multiresolution data selection. By traversing the tree, our data management algorithm can quickly identify the visible regions of data, and, for each region, adaptively choose the lowest resolution satisfying user-specified error tolerances. Only necessary data elements are accessed and sent to the visualization pipeline. To further address the issue of sharing large-scale data among geographically distributed collaborative teams, we have designed an infrastructure for integrating our data management technique with a distributed data storage system provided by logistical networking (LoN). Data sets at different resolutions are generated and uploaded to LoN for wide-area access. We describe a parallel volume rendering system that verifies the effectiveness of our data storage, selection and access scheme.",
                "AuthorNamesDeduped": "Jinzhu Gao;Jian Huang 0007;C. Ryan Johnson;Scott Atchley;James Arthur Kohl",
                "AuthorNames": "J. Gao;J. Huang;C.R. Johnson;S. Atchley",
                "AuthorAffiliation": "Oak Ridge Nat. Lab., TN, USA;The Univ. of Tennessee, TN, USA;The Univ. of Tennessee, TN, USA;The Univ. of Tennessee, TN, USA;Oak Ridge Nat. Lab., TN, USA",
                "InternalReferences": "0.1109/visual.2002.1183758;10.1109/visual.2002.1183757;10.1109/visual.1999.809910;10.1109/visual.1998.745300;10.1109/visual.2004.110;10.1109/visual.2004.112;10.1109/visual.1999.809879",
                "AuthorKeywords": "large data visualization, distributed storage, logistical networking, visibility culling, volume rendering, multiresolution rendering",
                "AminerCitationCount": 48,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 23,
                "DownloadsXplore": 316,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2431,
                "i": [
                    2431
                ]
            }
        },
        {
            "name": "Rebecca M. Brannon",
            "value": 7,
            "numPapers": 17,
            "cluster": "11",
            "visible": 1,
            "index": 1665,
            "x": 402.42248318878563,
            "y": -67.86858643121698,
            "vy": 0,
            "vx": 0,
            "r": 1.0080598733448474,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Visualization of salt-induced stress perturbations",
                "DOI": "10.1109/visual.2004.115",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.115",
                "FirstPage": 369,
                "LastPage": 376,
                "PaperType": "C",
                "Abstract": "An important challenge encountered during post-processing of finite element analyses is the visualizing of three-dimensional fields of real-valued second-order tensors. Namely, as finite element meshes become more complex and detailed, evaluation and presentation of the principal stresses becomes correspondingly problematic. In this paper, we describe techniques used to visualize simulations of perturbed in-situ stress fields associated with hypothetical salt bodies in the Gulf of Mexico. We present an adaptation of the Mohr diagram, a graphical paper and pencil method used by the material mechanics community for estimating coordinate transformations for stress tensors, as a new tensor glyph for dynamically exploring tensor variables within three-dimensional finite element models. This interactive glyph can be used as either a probe or a filter through brushing and linking.",
                "AuthorNamesDeduped": "Patricia Crossno;David H. Rogers 0001;Rebecca M. Brannon;David Coblentz",
                "AuthorNames": "P. Crossno;D.H. Rogers;R.M. Brannon;D. Coblentz",
                "AuthorAffiliation": "Sandia National Laboratories, USA;Sandia National Laboratories, USA;Sandia National Laboratories, USA;Los Alamos National Laboratories, USA",
                "InternalReferences": "0.1109/visual.1997.663929;10.1109/visual.1992.235193;10.1109/visual.1998.745294;10.1109/visual.1993.398849;10.1109/visual.1995.485141;10.1109/visual.1999.809894;10.1109/visual.1999.809905;10.1109/visual.2002.1183819;10.1109/visual.2002.1183797;10.1109/visual.1997.663857;10.1109/visual.1994.346326",
                "AuthorKeywords": "tensor field visualization, Mohr's circles, visual debugging, finite element codes and simulations",
                "AminerCitationCount": 7,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 26,
                "DownloadsXplore": 86,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2589,
                "i": [
                    2589
                ]
            }
        },
        {
            "name": "Fernando Vega Higuera",
            "value": 0,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1666,
            "x": -250.9645517171294,
            "y": 321.97328116075136,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "High performance volume splatting for visualization of neurovascular data",
                "DOI": "10.1109/visual.2005.1532805",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532805",
                "FirstPage": 271,
                "LastPage": 278,
                "PaperType": "C",
                "Abstract": "A new technique is presented to increase the performance of volume splatting by using hardware accelerated point sprites. This allows creating screen aligned elliptical splats for high quality volume splatting at very low cost on the GPU. Only one vertex per splat is stored on the graphics card. GPU generated point sprite texture coordinates are used for computing splats and per-fragment 3D-texture coordinates on the fly. Thus, only 6 bytes per splat are stored on the GPU and vertex shader load is 25% in comparison to applying textured quads. For eight predefined viewing directions, depth-sorting of the splats is performed in a pre-processing step where the resulting indices are stored on the GPU. Thereby, there is no data transfer between CPU and GPU during rendering. Post-classificative two dimensional transfer functions with lighting for scalar data and tagged volumes were implemented. Thereby, we focused on the visualization of neurovascular structures, where typically no more than 2% of the voxels contribute to the resulting 3D-representation. A comparison with a 3D-texture-based slicing algorithm showed frame rates up to 11 times higher for the presented approach on current CPUs. The presented technique was evaluated with a broad medical database and its value for highly sparse volume visualization is shown.",
                "AuthorNamesDeduped": "Fernando Vega Higuera;Peter Hastreiter;Rudolf Fahlbusch;Günther Greiner",
                "AuthorNames": "F. Vega-Higuera;P. Hastreiter;R. Fahlbusch;G. Greiner",
                "AuthorAffiliation": "Neurocenter, Department of Neurosurgery and Computer Graphics Group, University of Erlangen, Germany;Neurocenter, Department of Neurosurgery and Computer Graphics Group, University of Erlangen, Germany;Neurocenter, Department of Neurosurgery and Computer Graphics Group, University of Erlangen, Germany;Dept. of Neurosurg. & Comput. Graphics Group, Univ. of Erlangen, Germany",
                "InternalReferences": "0.1109/visual.2004.38;10.1109/visual.1997.663882;10.1109/visual.2003.1250384;10.1109/visual.1996.567608;10.1109/visual.2003.1250404;10.1109/visual.2001.964519;10.1109/visual.2003.1250388;10.1109/visual.2001.964490;10.1109/visual.1999.809909;10.1109/visual.2003.1250386",
                "AuthorKeywords": "volume visualization, volume splatting, neurovascular structures, segmented data",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 226,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2441,
                "i": [
                    2441
                ]
            }
        },
        {
            "name": "Rudolf Fahlbusch",
            "value": 0,
            "numPapers": 9,
            "cluster": "6",
            "visible": 1,
            "index": 1667,
            "x": -32.446106925182356,
            "y": -407.05927104710395,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2005,
                "Title": "High performance volume splatting for visualization of neurovascular data",
                "DOI": "10.1109/visual.2005.1532805",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2005.1532805",
                "FirstPage": 271,
                "LastPage": 278,
                "PaperType": "C",
                "Abstract": "A new technique is presented to increase the performance of volume splatting by using hardware accelerated point sprites. This allows creating screen aligned elliptical splats for high quality volume splatting at very low cost on the GPU. Only one vertex per splat is stored on the graphics card. GPU generated point sprite texture coordinates are used for computing splats and per-fragment 3D-texture coordinates on the fly. Thus, only 6 bytes per splat are stored on the GPU and vertex shader load is 25% in comparison to applying textured quads. For eight predefined viewing directions, depth-sorting of the splats is performed in a pre-processing step where the resulting indices are stored on the GPU. Thereby, there is no data transfer between CPU and GPU during rendering. Post-classificative two dimensional transfer functions with lighting for scalar data and tagged volumes were implemented. Thereby, we focused on the visualization of neurovascular structures, where typically no more than 2% of the voxels contribute to the resulting 3D-representation. A comparison with a 3D-texture-based slicing algorithm showed frame rates up to 11 times higher for the presented approach on current CPUs. The presented technique was evaluated with a broad medical database and its value for highly sparse volume visualization is shown.",
                "AuthorNamesDeduped": "Fernando Vega Higuera;Peter Hastreiter;Rudolf Fahlbusch;Günther Greiner",
                "AuthorNames": "F. Vega-Higuera;P. Hastreiter;R. Fahlbusch;G. Greiner",
                "AuthorAffiliation": "Neurocenter, Department of Neurosurgery and Computer Graphics Group, University of Erlangen, Germany;Neurocenter, Department of Neurosurgery and Computer Graphics Group, University of Erlangen, Germany;Neurocenter, Department of Neurosurgery and Computer Graphics Group, University of Erlangen, Germany;Dept. of Neurosurg. & Comput. Graphics Group, Univ. of Erlangen, Germany",
                "InternalReferences": "0.1109/visual.2004.38;10.1109/visual.1997.663882;10.1109/visual.2003.1250384;10.1109/visual.1996.567608;10.1109/visual.2003.1250404;10.1109/visual.2001.964519;10.1109/visual.2003.1250388;10.1109/visual.2001.964490;10.1109/visual.1999.809909;10.1109/visual.2003.1250386",
                "AuthorKeywords": "volume visualization, volume splatting, neurovascular structures, segmented data",
                "AminerCitationCount": 53,
                "CitationCountCrossRef": 1,
                "PubsCitedCrossRef": 27,
                "DownloadsXplore": 226,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2441,
                "i": [
                    2441
                ]
            }
        },
        {
            "name": "Günther Greiner",
            "value": 19,
            "numPapers": 16,
            "cluster": "6",
            "visible": 1,
            "index": 1668,
            "x": 298.97894480024513,
            "y": 278.31922421229183,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 2006,
                "Title": "Hybrid Visualization for White Matter Tracts using Triangle Strips and Point Sprites",
                "DOI": "10.1109/tvcg.2006.151",
                "Link": "http://dx.doi.org/10.1109/TVCG.2006.151",
                "FirstPage": 1181,
                "LastPage": 1188,
                "PaperType": "J",
                "Abstract": "Diffusion tensor imaging is of high value in neurosurgery, providing information about the location of white matter tracts in the human brain. For their reconstruction, streamline techniques commonly referred to as fiber tracking model the underlying fiber structures and have therefore gained interest. To meet the requirements of surgical planning and to overcome the visual limitations of line representations, a new real-time visualization approach of high visual quality is introduced. For this purpose, textured triangle strips and point sprites are combined in a hybrid strategy employing GPU programming. The triangle strips follow the fiber streamlines and are textured to obtain a tube-like appearance. A vertex program is used to orient the triangle strips towards the camera. In order to avoid triangle flipping in case of fiber segments where the viewing and segment direction are parallel, a correct visual representation is achieved in these areas by chains of point sprites. As a result, high quality visualization similar to tubes is provided allowing for interactive multimodal inspection. Overall, the presented approach is faster than existing techniques of similar visualization quality and at the same time allows for real-time rendering of dense bundles encompassing a high number of fibers, which is of high importance for diagnosis and surgical planning",
                "AuthorNamesDeduped": "Dorit Merhof;Markus Sonntag;Frank Enders;Christopher Nimsky;Peter Hastreiter;Günther Greiner",
                "AuthorNames": "Dorit Merhof;Markus Sonntag;Frank Enders;Christopher Nimsky;Peter Hastreiter;Guenther Greiner",
                "AuthorAffiliation": "Department of Neurosurgery, University of Erlangen, Germany;Computer Graphics Group, University of Erlangen, Germany;Computer Graphics Group, Univ. Erlangen, and Dept. of Neurosurgery, Univ. Erlangen;Department of Neurosurgery, University of Erlangen, Germany;Computer Graphics Group, Univ. Erlangen, and Dept. of Neurosurgery, Univ. Erlangen;Computer Graphics Group, University of Erlangen, Germany",
                "InternalReferences": "0.1109/visual.2005.1532859;10.1109/visual.2005.1532772;10.1109/visual.2002.1183799;10.1109/visual.2005.1532773;10.1109/visual.2005.1532778;10.1109/visual.1996.567777;10.1109/visual.2005.1532779;10.1109/visual.2004.30",
                "AuthorKeywords": "Diffusion tensor data, fiber tracking, streamline visualization",
                "AminerCitationCount": 77,
                "CitationCountCrossRef": 36,
                "PubsCitedCrossRef": 40,
                "DownloadsXplore": 439,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2293,
                "i": [
                    2293
                ]
            }
        },
        {
            "name": "Ying-Huey Fua",
            "value": 145,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1669,
            "x": -408.5820886909081,
            "y": -3.2675374481323103,
            "vy": 0,
            "vx": 0,
            "r": 1.1669545192861255,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Hierarchical parallel coordinates for exploration of large datasets",
                "DOI": "10.1109/visual.1999.809866",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809866",
                "FirstPage": 43,
                "LastPage": 508,
                "PaperType": "C",
                "Abstract": "Our ability to accumulate large, complex (multivariate) data sets has far exceeded our ability to effectively process them in searching for patterns, anomalies and other interesting features. Conventional multivariate visualization techniques generally do not scale well with respect to the size of the data set. The focus of this paper is on the interactive visualization of large multivariate data sets based on a number of novel extensions to the parallel coordinates display technique. We develop a multi-resolution view of the data via hierarchical clustering, and use a variation of parallel coordinates to convey aggregation information for the resulting clusters. Users can then navigate the resulting structure until the desired focus region and level of detail is reached, using our suite of navigational and filtering tools. We describe the design and implementation of our hierarchical parallel coordinates system which is based on extending the XmdvTool system. Lastly, we show examples of the tools and techniques applied to large (hundreds of thousands of records) multivariate data sets.",
                "AuthorNamesDeduped": "Ying-Huey Fua;Matthew O. Ward;Elke A. Rundensteiner",
                "AuthorNames": "Ying-Huey Fua;M.O. Ward;E.A. Rundensteiner",
                "AuthorAffiliation": "Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA;Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA",
                "InternalReferences": "0.1109/visual.1994.346302;10.1109/infvis.1999.801858;10.1109/visual.1996.567800;10.1109/visual.1995.485140;10.1109/visual.1990.146386;10.1109/visual.1990.146402;10.1109/infvis.1998.729556;10.1109/visual.1995.485139",
                "AuthorKeywords": "Large-scale multivariate data visualization, hierarchical data exploration, parallel coordinates",
                "AminerCitationCount": 642,
                "CitationCountCrossRef": 34,
                "PubsCitedCrossRef": 32,
                "DownloadsXplore": 622,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3065,
                "i": [
                    3065
                ]
            }
        },
        {
            "name": "Matt Williams",
            "value": 40,
            "numPapers": 7,
            "cluster": "11",
            "visible": 1,
            "index": 1670,
            "x": 303.573782925163,
            "y": -273.66577849725013,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "InfoVis",
                "Year": 2004,
                "Title": "Steerable, Progressive Multidimensional Scaling",
                "DOI": "10.1109/infvis.2004.60",
                "Link": "http://dx.doi.org/10.1109/INFVIS.2004.60",
                "FirstPage": 57,
                "LastPage": 64,
                "PaperType": "C",
                "Abstract": "Current implementations of multidimensional scaling (MDS), an approach that attempts to best represent data point similarity in a low-dimensional representation, are not suited for many of today's large-scale datasets. We propose an extension to the spring model approach that allows the user to interactively explore datasets that are far beyond the scale of previous implementations of MDS. We present MDSteer, a steerable MDS computation engine and visualization tool that progressively computes an MDS layout and handles datasets of over one million points. Our technique employs hierarchical data structures and progressive layouts to allow the user to steer the computation of the algorithm to the interesting areas of the dataset. The algorithm iteratively alternates between a layout stage in which a subselection of points are added to the set of active points affected by the MDS iteration, and a binning stage which increases the depth of the bin hierarchy and organizes the currently unplaced points into separate spatial regions. This binning strategy allows the user to select onscreen regions of the layout to focus the MDS computation into the areas of the dataset that are assigned to the selected bins. We show both real and common synthetic benchmark datasets with dimensionalities ranging from 3 to 300 and cardinalities of over one million points",
                "AuthorNamesDeduped": "Matt Williams;Tamara Munzner",
                "AuthorNames": "M. Williams;T. Munzner",
                "AuthorAffiliation": "University of British Columbia, Canada;University of British Columbia, Canada",
                "InternalReferences": "0.1109/infvis.2002.1173150;10.1109/infvis.2003.1249013;10.1109/infvis.2001.963275;10.1109/infvis.2002.1173159;10.1109/infvis.2002.1173161;10.1109/visual.1996.567787;10.1109/infvis.1995.528686;10.1109/infvis.2003.1249012",
                "AuthorKeywords": "dimensionality reduction, multidimensional scaling",
                "AminerCitationCount": 160,
                "CitationCountCrossRef": 54,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 513,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2459,
                "i": [
                    2459
                ]
            }
        },
        {
            "name": "David N. Kenwright",
            "value": 75,
            "numPapers": 8,
            "cluster": "11",
            "visible": 1,
            "index": 1671,
            "x": -38.99895008249152,
            "y": 406.9755298448094,
            "vy": 0,
            "vx": 0,
            "r": 1.0863557858376511,
            "node": {
                "Conference": "Vis",
                "Year": 1997,
                "Title": "Vortex identification-applications in aerodynamics: a case study",
                "DOI": "10.1109/visual.1997.663910",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1997.663910",
                "FirstPage": 413,
                "LastPage": 416,
                "PaperType": "C",
                "Abstract": "An eigenvector method for vortex identification has been applied to recent numerical and experimental studies in external flow aerodynamics. It is shown to be an effective way to extract and visualize features such as vortex cores, spiral vortex breakdowns, vortex bursting, and vortex diffusion. Several problems are reported and illustrated. These include: disjointed line segments, detecting non-vortical flow features, and vortex core displacement. Future research and applications are discussed, such as using vortex cores to guide automatic grid refinement.",
                "AuthorNamesDeduped": "David N. Kenwright;Robert Haimes",
                "AuthorNames": "D. Kenwright;R. Haimes",
                "AuthorAffiliation": "MRJ Technology Solutions Inc., NASA Ames Research Center, USA;Massachusetts Institute of Technology, USA",
                "InternalReferences": "0.1109/visual.1996.568137;10.1109/visual.1994.346327;10.1109/visual.1991.175773",
                "AuthorKeywords": null,
                "AminerCitationCount": 74,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 16,
                "DownloadsXplore": 303,
                "Award": "BCS",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3257,
                "i": [
                    3257
                ]
            }
        },
        {
            "name": "Wim C. de Leeuw",
            "value": 158,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1672,
            "x": -246.22502922401878,
            "y": -326.5321346875849,
            "vy": 0,
            "vx": 0,
            "r": 1.181922855497985,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Collapsing Flow Topology Using Area Metrics",
                "DOI": "10.1109/visual.1999.809907",
                "Link": "http://doi.ieeecomputersociety.org/10.1109/VISUAL.1999.809907",
                "FirstPage": 349,
                "LastPage": 354,
                "PaperType": "C",
                "Abstract": "Visualization of topological information of a vector field can provide useful information on the structure of the field. However, in turbulent flows standard critical point visualization will result in a cluttered image which is difficult to interpret. This paper presents a technique for collapsing topologies. The governing idea is to classify the importance of the critical points in the topology. By only displaying the more important critical points, a simplified depiction of the topology can be provided. Flow consistency is maintained when collapsing the topology, resulting in a visualization which is consistent with the original topology. We apply the collapsing topology technique to a turbulent flow field.",
                "AuthorNamesDeduped": "Wim C. de Leeuw;Robert van Liere",
                "AuthorNames": "W. De Leeuw;R. Van Liere",
                "AuthorAffiliation": "Center for Math. & Comput. Sci., CWI, Amsterdam, Netherlands;Center for Math. & Comput. Sci., CWI, Amsterdam, Netherlands",
                "InternalReferences": "0.1109/visual.1991.175773",
                "AuthorKeywords": "multi-level visualization techniques, flow visualization, flow topology",
                "AminerCitationCount": null,
                "CitationCountCrossRef": 51,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 48,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3059,
                "i": [
                    3059
                ]
            }
        },
        {
            "name": "Patrick J. Moran",
            "value": 52,
            "numPapers": 27,
            "cluster": "6",
            "visible": 1,
            "index": 1673,
            "x": 402.248160961993,
            "y": 74.47427074295223,
            "vy": 0,
            "vx": 0,
            "r": 1.059873344847438,
            "node": {
                "Conference": "Vis",
                "Year": 2011,
                "Title": "Visualization of AMR Data With Multi-Level Dual-Mesh Interpolation",
                "DOI": "10.1109/tvcg.2011.252",
                "Link": "http://dx.doi.org/10.1109/TVCG.2011.252",
                "FirstPage": 1862,
                "LastPage": 1871,
                "PaperType": "J",
                "Abstract": "We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C&lt;sup&gt;0&lt;/sup&gt; continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates \"stitching cells\" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.",
                "AuthorNamesDeduped": "Patrick J. Moran;David A. Ellsworth",
                "AuthorNames": "Patrick Moran;David Ellsworth",
                "AuthorAffiliation": "NASA Ames Research Center, USA;Computer Sciences Corporation, NASA Ames, USA",
                "InternalReferences": "0.1109/visual.1991.175782;10.1109/tvcg.2009.149;10.1109/visual.2002.1183820",
                "AuthorKeywords": "Adaptive mesh refinement, AMR, Enzo, interpolation, ray casting, isosurfaces, dual meshes, stitching cells",
                "AminerCitationCount": 17,
                "CitationCountCrossRef": 14,
                "PubsCitedCrossRef": 22,
                "DownloadsXplore": 546,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 1674,
                "i": [
                    1674
                ]
            }
        },
        {
            "name": "Tobias Preußer",
            "value": 24,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1674,
            "x": -347.01555535483124,
            "y": 216.8644838183008,
            "vy": 0,
            "vx": 0,
            "r": 1.0276338514680483,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Flow field clustering via algebraic multigrid",
                "DOI": "10.1109/visual.2004.32",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.32",
                "FirstPage": 35,
                "LastPage": 42,
                "PaperType": "C",
                "Abstract": "We present a novel multiscale approach for flow visualization. We define a local alignment tensor that encodes a measure for alignment to the direction of a given flow field. This tensor induces an anisotropic differential operator on the flow domain, which is discretized with a standard finite element technique. The entries of the corresponding stiffness matrix represent the anisotropically weighted couplings of adjacent nodes of the domain mesh. We use an algebraic multigrid algorithm to generate a hierarchy of fine to coarse descriptions for the above coupling data. This hierarchy comprises a set of coarse grid nodes, a multiscale of basis functions and their corresponding supports. We use these supports to obtain a multilevel decomposition of the flow structure. Standard streamline icons are used to visualize this decomposition at any user-selected level of detail. The method provides a single framework for vector field decomposition independent on the domain dimension or mesh type. Applications are shown in 2D, for flow fields on curved surfaces, and for 3D volumetric flow fields.",
                "AuthorNamesDeduped": "Michael Griebel;Tobias Preußer;Martin Rumpf;Marc Alexander Schweitzer;Alexandru C. Telea",
                "AuthorNames": "M. Griebel;T. Preusser;M. Rumpf;M.A. Schweitzer;A. Telea",
                "AuthorAffiliation": "Institute for Numerical Simulation, University of Bonn, Bonn, Germany;Center for Complex Systems and Visualization, University of Brethemen, Bremen, Germany;Institute for Numerical Analysis and Scientific Computing, Duisburg Essen University, Duisburg, Germany;Institute for Numerical Simulation, University of Bonn, Bonn, Germany;Department of Mathematics and Computer Science, Eindhovan University of Technology, Eindhoven, Netherlands",
                "InternalReferences": "0.1109/visual.1999.809865;10.1109/visual.2003.1250372;10.1109/visual.2003.1250377;10.1109/visual.2003.1250363;10.1109/visual.1999.809863;10.1109/visual.2001.964507",
                "AuthorKeywords": "algebraic multigrid, multiscale visualization, flow visualization",
                "AminerCitationCount": 51,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 198,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2546,
                "i": [
                    2546
                ]
            }
        },
        {
            "name": "Martin Rumpf",
            "value": 31,
            "numPapers": 22,
            "cluster": "6",
            "visible": 1,
            "index": 1675,
            "x": 109.42127493193449,
            "y": -394.43248420010997,
            "vy": 0,
            "vx": 0,
            "r": 1.035693724812896,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Flow field clustering via algebraic multigrid",
                "DOI": "10.1109/visual.2004.32",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.32",
                "FirstPage": 35,
                "LastPage": 42,
                "PaperType": "C",
                "Abstract": "We present a novel multiscale approach for flow visualization. We define a local alignment tensor that encodes a measure for alignment to the direction of a given flow field. This tensor induces an anisotropic differential operator on the flow domain, which is discretized with a standard finite element technique. The entries of the corresponding stiffness matrix represent the anisotropically weighted couplings of adjacent nodes of the domain mesh. We use an algebraic multigrid algorithm to generate a hierarchy of fine to coarse descriptions for the above coupling data. This hierarchy comprises a set of coarse grid nodes, a multiscale of basis functions and their corresponding supports. We use these supports to obtain a multilevel decomposition of the flow structure. Standard streamline icons are used to visualize this decomposition at any user-selected level of detail. The method provides a single framework for vector field decomposition independent on the domain dimension or mesh type. Applications are shown in 2D, for flow fields on curved surfaces, and for 3D volumetric flow fields.",
                "AuthorNamesDeduped": "Michael Griebel;Tobias Preußer;Martin Rumpf;Marc Alexander Schweitzer;Alexandru C. Telea",
                "AuthorNames": "M. Griebel;T. Preusser;M. Rumpf;M.A. Schweitzer;A. Telea",
                "AuthorAffiliation": "Institute for Numerical Simulation, University of Bonn, Bonn, Germany;Center for Complex Systems and Visualization, University of Brethemen, Bremen, Germany;Institute for Numerical Analysis and Scientific Computing, Duisburg Essen University, Duisburg, Germany;Institute for Numerical Simulation, University of Bonn, Bonn, Germany;Department of Mathematics and Computer Science, Eindhovan University of Technology, Eindhoven, Netherlands",
                "InternalReferences": "0.1109/visual.1999.809865;10.1109/visual.2003.1250372;10.1109/visual.2003.1250377;10.1109/visual.2003.1250363;10.1109/visual.1999.809863;10.1109/visual.2001.964507",
                "AuthorKeywords": "algebraic multigrid, multiscale visualization, flow visualization",
                "AminerCitationCount": 51,
                "CitationCountCrossRef": 16,
                "PubsCitedCrossRef": 28,
                "DownloadsXplore": 198,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2546,
                "i": [
                    2546
                ]
            }
        },
        {
            "name": "Takayuki Itoh",
            "value": 19,
            "numPapers": 1,
            "cluster": "6",
            "visible": 1,
            "index": 1676,
            "x": 185.80689546959027,
            "y": 364.86408098900716,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Volume Thinning for Automatic Isosurface Propagation",
                "DOI": "10.1109/visual.1996.568123",
                "Link": "http://doi.ieeecomputersociety.org/10.1109/VISUAL.1996.568123",
                "FirstPage": 303,
                "LastPage": 310,
                "PaperType": "C",
                "Abstract": "An isosurface can be efficiently generated by visiting adjacent intersected cells in order, as if the isosurface were propagating itself. We previously proposed an extrema graph method (T. Itoh and K. Koyamada, 1995), which generates a graph connecting extremum points. The isosurface propagation starts from some of the intersected cells that are found both by visiting the cells through which arcs of the graph pass and by visiting the cells on the boundary of a volume. We propose an efficient method of searching for cells intersected by an isosurface. This method generates a volumetric skeleton. consisting of cells, like an extrema graph, by applying a thinning algorithm used in the image recognition area. Since it preserves the topological features of the volume and the connectivity of the extremum points, it necessarily intersects every isosurface. The method is more efficient than the extrema graph method, since it does not require that cells on the boundary be visited.",
                "AuthorNamesDeduped": "Takayuki Itoh;Yasushi Yamaguchi 0001;Koji Koyamada",
                "AuthorNames": "T. Itoh;Y. Yamaguchi;K. Koyamada",
                "AuthorAffiliation": "Tokyo Research Laboratory, IBM Japan;Graduate School of Arts and Sciences, The University of Tokyo;Tokyo Research Laboratory, IBM Japan",
                "InternalReferences": "0.1109/visual.1991.175780",
                "AuthorKeywords": null,
                "AminerCitationCount": 73,
                "CitationCountCrossRef": 22,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 27,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3347,
                "i": [
                    3347
                ]
            }
        },
        {
            "name": "Koji Koyamada",
            "value": 19,
            "numPapers": 6,
            "cluster": "6",
            "visible": 1,
            "index": 1677,
            "x": -383.5847167838921,
            "y": -143.57146321543607,
            "vy": 0,
            "vx": 0,
            "r": 1.0218767990788715,
            "node": {
                "Conference": "VAST",
                "Year": 2020,
                "Title": "A Visual Analytics Approach for Ecosystem Dynamics based on Empirical Dynamic Modeling",
                "DOI": "10.1109/tvcg.2020.3028956",
                "Link": "http://dx.doi.org/10.1109/TVCG.2020.3028956",
                "FirstPage": 506,
                "LastPage": 516,
                "PaperType": "J",
                "Abstract": "An important approach for scientific inquiry across many disciplines involves using observational time series data to understand the relationships between key variables to gain mechanistic insights into the underlying rules that govern the given system. In real systems, such as those found in ecology, the relationships between time series variables are generally not static; instead, these relationships are dynamical and change in a nonlinear or state-dependent manner. To further understand such systems, we investigate integrating methods that appropriately characterize these dynamics (i.e., methods that measure interactions as they change with time-varying system states) with visualization techniques that can help analyze the behavior of the system. Here, we focus on empirical dynamic modeling (EDM) as a state-of-the-art method that specifically identifies causal variables and measures changing state-dependent relationships between time series variables. Instead of using approaches centered on parametric equations, EDM is an equation-free approach that studies systems based on their dynamic attractors. We propose a visual analytics system to support the identification and mechanistic interpretation of system states using an EDM-constructed dynamic graph. This work, as detailed in four analysis tasks and demonstrated with a GUI, provides a novel synthesis of EDM and visualization techniques such as brush-link visualization and visual summarization to interpret dynamic graphs representing ecosystem dynamics. We applied our proposed system to ecological simulation data and real data from a marine mesocosm study as two key use cases. Our case studies show that our visual analytics tools support the identification and interpretation of the system state by the user, and enable us to discover both confirmatory and new findings in ecosystem dynamics. Overall, we demonstrated that our system can facilitate an understanding of how systems function beyond the intuitive analysis of high-dimensional information based on specific domain knowledge.",
                "AuthorNamesDeduped": "Hiroaki Natsukawa;Ethan R. Deyle;Gerald M. Pao;Koji Koyamada;George Sugihara",
                "AuthorNames": "Hiroaki Natsukawa;Ethan R. Deyle;Gerald M. Pao;Koji Koyamada;George Sugihara",
                "AuthorAffiliation": "Kyoto University;Boston University and Scripps Institution of Oceanography, University of California, San Diego;Salk Institution for Biological Sciences;Kyoto University;Scripps Institution of Oceanography, University of California, San Diego",
                "InternalReferences": "0.1109/tvcg.2009.181;10.1109/tvcg.2019.2934251;10.1109/tvcg.2013.198;10.1109/tvcg.2006.192;10.1109/tvcg.2015.2468078;10.1109/tvcg.2017.2745258",
                "AuthorKeywords": "Visual analytics,empirical dynamic modeling,dynamic network,exploratory data analysis",
                "AminerCitationCount": 4,
                "CitationCountCrossRef": 6,
                "PubsCitedCrossRef": 58,
                "DownloadsXplore": 1010,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 505,
                "i": [
                    505
                ]
            }
        },
        {
            "name": "Lukas Mroz",
            "value": 38,
            "numPapers": 14,
            "cluster": "6",
            "visible": 1,
            "index": 1678,
            "x": 379.9377568248014,
            "y": -153.28829354826198,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "Vis",
                "Year": 2000,
                "Title": "Two-level volume rendering - fusing MIP and DVR",
                "DOI": "10.1109/visual.2000.885697",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2000.885697",
                "FirstPage": 211,
                "LastPage": 218,
                "PaperType": "C",
                "Abstract": "Presents a two-level approach for fusing direct volume rendering (DVR) and maximum-intensity projection (MIP) within a joint rendering method. Different structures within the data set are rendered locally by either MIP or DVR on an object-by-object basis. Globally, all the results of subsequent object renderings are combined in a merging step (usually compositing in our case). This allows us to selectively choose the most suitable technique for depicting each object within the data, while keeping the amount of information contained in the image at a reasonable level. This is especially useful when inner structures should be visualized together with semi-transparent outer parts, similar to the focus-and-context approach known from information visualization. We also present an implementation of our approach which allows us to explore volumetric data using two-level rendering at interactive frame rates.",
                "AuthorNamesDeduped": "Helwig Hauser;Lukas Mroz;Gian Italo Bischi;M. Eduard Gröller",
                "AuthorNames": "H. Hauser;L. Mroz;G.-I. Bischi;M.E. Groller",
                "AuthorAffiliation": "VRVis Center Vienna, Austria;University of Technology, Vienna, Austria;University of Urbino, Italy;University of Technology, Vienna, Austria",
                "InternalReferences": "0.1109/visual.1998.745311;10.1109/visual.1999.809887;10.1109/visual.1996.568113;10.1109/visual.2000.885697",
                "AuthorKeywords": "visualization, volume rendering, dynamical systems,medical applications",
                "AminerCitationCount": 81,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 166,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2985,
                "i": [
                    2985
                ]
            }
        },
        {
            "name": "Matthias Zwicker",
            "value": 40,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1679,
            "x": -176.6621244709479,
            "y": 369.78438822834494,
            "vy": 0,
            "vx": 0,
            "r": 1.0460564191134138,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "EWA volume splatting",
                "DOI": "10.1109/visual.2001.964490",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964490",
                "FirstPage": 29,
                "LastPage": 36,
                "PaperType": "C",
                "Abstract": "In this paper we present a novel framework for direct volume rendering using a splatting approach based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter combining a reconstruction with a low-pass kernel. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping we call our technique EWA volume splatting. It provides high image quality without aliasing artifacts or excessive blurring even with non-spherical kernels. Hence it is suitable for regular, rectilinear, and irregular volume data sets. Moreover, our framework introduces a novel approach to compute the footprint function. It facilitates efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in reconstructing surface and volume data.",
                "AuthorNamesDeduped": "Matthias Zwicker;Hanspeter Pfister;Jeroen van Baar;Markus H. Gross",
                "AuthorNames": "M. Zwicker;H. Pfister;J. van Baar;M. Gross",
                "AuthorAffiliation": "ETH Zürich, Switzerland;MERL, Cambridge, MA, USA;MERL, Cambridge, MA, USA;ETH Zürich, Switzerland",
                "InternalReferences": "0.1109/visual.1995.480796;10.1109/visual.1997.663882;10.1109/visual.1998.745309;10.1109/visual.1996.567608;10.1109/visual.1999.809909",
                "AuthorKeywords": "Volume Rendering, Splatting, Antialiasing",
                "AminerCitationCount": 170,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1385,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2875,
                "i": [
                    2875
                ]
            }
        },
        {
            "name": "Ming Wan",
            "value": 91,
            "numPapers": 18,
            "cluster": "6",
            "visible": 1,
            "index": 1680,
            "x": -119.55618960349825,
            "y": -392.1177342680287,
            "vy": 0,
            "vx": 0,
            "r": 1.1047783534830167,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Fast and reliable space leaping for interactive volume rendering",
                "DOI": "10.1109/visual.2002.1183775",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183775",
                "FirstPage": 195,
                "LastPage": 202,
                "PaperType": "C",
                "Abstract": "We present a fast and reliable space-leaping scheme to accelerate ray casting during interactive navigation in a complex volumetric scene, where we combine innovative space-leaping techniques in a number of ways. First, we derive most of the pixel depths at the current frame by exploiting the temporal coherence during navigation, where we employ a novel fast cell-based reprojection scheme that is more reliable than the traditional intersection-point based reprojection. Next, we exploit the object space coherence to quickly detect the remaining pixel depths, by using a precomputed accurate distance field that stores the Euclidean distance from each empty (background) voxel toward its nearest object boundary. In addition, we propose an effective solution to the challenging new-incoming-objects problem during navigation. Our algorithm has been implemented on a 16-processor SGI Power Challenge and reached interactive rendering rates at more than 10 Hz during the navigation inside 512/sup 3/ volume data sets acquired from both a simulation phantom and actual patients.",
                "AuthorNamesDeduped": "Ming Wan;Aamir Sadiq;Arie E. Kaufman",
                "AuthorNames": "Ming Wan;A. Sadiq;A. Kaufman",
                "AuthorAffiliation": "Boeing Company, Seattle, WA, USA;Department of Computer Science, State University of New York, Stony Brook, NY, USA;Department of Computer Science, State University of New York, Stony Brook, NY, USA",
                "InternalReferences": "10.1109/visual.2001.964519;10.1109/visual.1998.745713;10.1109/visual.1993.398852;10.1109/visual.1992.235231;10.1109/visual.1990.146377;10.1109/visual.1999.809914;10.1109/visual.1999.809911",
                "AuthorKeywords": "virtual navigation, volume visualization, ray-casting optimization, space leaping",
                "AminerCitationCount": 42,
                "CitationCountCrossRef": 0,
                "PubsCitedCrossRef": 29,
                "DownloadsXplore": 100,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2839,
                "i": [
                    2839
                ]
            }
        },
        {
            "name": "Zhengrong Liang",
            "value": 30,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1681,
            "x": 353.1337582955033,
            "y": 208.43835719966006,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Volume rendering based interactive navigation within the human colon",
                "DOI": "10.1109/visual.1999.809914",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809914",
                "FirstPage": 397,
                "LastPage": 549,
                "PaperType": "C",
                "Abstract": "We present an interactive navigation system for virtual colonoscopy, which is based solely on high performance volume rendering. Previous colonic navigation systems have employed either a surface rendering or a Z-buffer-assisted volume rendering method that depends on the surface rendering results. Our method is a fast direct volume rendering technique that exploits distance information stored in the potential field of the camera control model, and is parallelized on a multiprocessor. Experiments have been conducted on both a simulated pipe and patients' data sets acquired with a CT scanner.",
                "AuthorNamesDeduped": "Ming Wan;Qingyu Tang;Arie E. Kaufman;Zhengrong Liang;Mark Wax",
                "AuthorNames": "M. Wan;Q. Tang;A. Kaufman;Z. Liang;M. Wax",
                "AuthorAffiliation": "Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA",
                "InternalReferences": "0.1109/visual.1999.809911;10.1109/visual.1997.663915;10.1109/visual.1998.745713;10.1109/visual.1993.398852;10.1109/visual.1999.809900",
                "AuthorKeywords": null,
                "AminerCitationCount": 92,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 109,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3070,
                "i": [
                    3070
                ]
            }
        },
        {
            "name": "Mark Wax",
            "value": 30,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1682,
            "x": -401.3072073317273,
            "y": 84.86769316771851,
            "vy": 0,
            "vx": 0,
            "r": 1.0345423143350605,
            "node": {
                "Conference": "Vis",
                "Year": 1999,
                "Title": "Volume rendering based interactive navigation within the human colon",
                "DOI": "10.1109/visual.1999.809914",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1999.809914",
                "FirstPage": 397,
                "LastPage": 549,
                "PaperType": "C",
                "Abstract": "We present an interactive navigation system for virtual colonoscopy, which is based solely on high performance volume rendering. Previous colonic navigation systems have employed either a surface rendering or a Z-buffer-assisted volume rendering method that depends on the surface rendering results. Our method is a fast direct volume rendering technique that exploits distance information stored in the potential field of the camera control model, and is parallelized on a multiprocessor. Experiments have been conducted on both a simulated pipe and patients' data sets acquired with a CT scanner.",
                "AuthorNamesDeduped": "Ming Wan;Qingyu Tang;Arie E. Kaufman;Zhengrong Liang;Mark Wax",
                "AuthorNames": "M. Wan;Q. Tang;A. Kaufman;Z. Liang;M. Wax",
                "AuthorAffiliation": "Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA;Center for Visual Computing (CVC)and Departments of Computer Science and Radiology, State University of New York, Stony Brook, Stony Brook, NY, USA",
                "InternalReferences": "0.1109/visual.1999.809911;10.1109/visual.1997.663915;10.1109/visual.1998.745713;10.1109/visual.1993.398852;10.1109/visual.1999.809900",
                "AuthorKeywords": null,
                "AminerCitationCount": 92,
                "CitationCountCrossRef": 24,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 109,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3070,
                "i": [
                    3070
                ]
            }
        },
        {
            "name": "Steven G. Parker",
            "value": 56,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1683,
            "x": 238.65503334847565,
            "y": -333.75705993647233,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Interactive ray tracing for isosurface rendering",
                "DOI": "10.1109/visual.1998.745713",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745713",
                "FirstPage": 233,
                "LastPage": 238,
                "PaperType": "C",
                "Abstract": "We show that it is feasible to perform interactive isosurfacing of very large rectilinear datasets with brute-force ray tracing on a conventional (distributed) shared-memory multiprocessor machine. Rather than generate geometry representing the isosurface and render with a z-buffer, for each pixel we trace a ray through a volume and do an analytic isosurface intersection computation. Although this method has a high intrinsic computational cost, its simplicity and scalability make it ideal for large datasets on current high-end systems. Incorporating simple optimizations, such as volume bricking and a shallow hierarchy, enables interactive rendering (i.e. 10 frames per second) of the 1 GByte full resolution Visible Woman dataset on an SGI Reality Monster. The graphics capabilities of the Reality Monster are used only for display of the final color image.",
                "AuthorNamesDeduped": "Steven G. Parker;Peter Shirley;Yarden Livnat;Charles D. Hansen;Peter-Pike J. Sloan",
                "AuthorNames": "S. Parker;P. Shirley;Y. Livnat;C. Hansen;P.-P. Sloan",
                "AuthorAffiliation": "Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1997.663888;10.1109/visual.1994.346331;10.1109/visual.1994.346320;10.1109/visual.1995.485154;10.1109/visual.1998.745300",
                "AuthorKeywords": null,
                "AminerCitationCount": 526,
                "CitationCountCrossRef": 116,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 541,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3142,
                "i": [
                    3142
                ]
            }
        },
        {
            "name": "Peter Shirley",
            "value": 64,
            "numPapers": 12,
            "cluster": "6",
            "visible": 1,
            "index": 1684,
            "x": 49.48755171335027,
            "y": 407.43218113622106,
            "vy": 0,
            "vx": 0,
            "r": 1.0736902705814624,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Interactive ray tracing for isosurface rendering",
                "DOI": "10.1109/visual.1998.745713",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745713",
                "FirstPage": 233,
                "LastPage": 238,
                "PaperType": "C",
                "Abstract": "We show that it is feasible to perform interactive isosurfacing of very large rectilinear datasets with brute-force ray tracing on a conventional (distributed) shared-memory multiprocessor machine. Rather than generate geometry representing the isosurface and render with a z-buffer, for each pixel we trace a ray through a volume and do an analytic isosurface intersection computation. Although this method has a high intrinsic computational cost, its simplicity and scalability make it ideal for large datasets on current high-end systems. Incorporating simple optimizations, such as volume bricking and a shallow hierarchy, enables interactive rendering (i.e. 10 frames per second) of the 1 GByte full resolution Visible Woman dataset on an SGI Reality Monster. The graphics capabilities of the Reality Monster are used only for display of the final color image.",
                "AuthorNamesDeduped": "Steven G. Parker;Peter Shirley;Yarden Livnat;Charles D. Hansen;Peter-Pike J. Sloan",
                "AuthorNames": "S. Parker;P. Shirley;Y. Livnat;C. Hansen;P.-P. Sloan",
                "AuthorAffiliation": "Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1997.663888;10.1109/visual.1994.346331;10.1109/visual.1994.346320;10.1109/visual.1995.485154;10.1109/visual.1998.745300",
                "AuthorKeywords": null,
                "AminerCitationCount": 526,
                "CitationCountCrossRef": 116,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 541,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3142,
                "i": [
                    3142
                ]
            }
        },
        {
            "name": "Peter-Pike J. Sloan",
            "value": 56,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1685,
            "x": -311.79957281822226,
            "y": -267.0786895099909,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "Vis",
                "Year": 1998,
                "Title": "Interactive ray tracing for isosurface rendering",
                "DOI": "10.1109/visual.1998.745713",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1998.745713",
                "FirstPage": 233,
                "LastPage": 238,
                "PaperType": "C",
                "Abstract": "We show that it is feasible to perform interactive isosurfacing of very large rectilinear datasets with brute-force ray tracing on a conventional (distributed) shared-memory multiprocessor machine. Rather than generate geometry representing the isosurface and render with a z-buffer, for each pixel we trace a ray through a volume and do an analytic isosurface intersection computation. Although this method has a high intrinsic computational cost, its simplicity and scalability make it ideal for large datasets on current high-end systems. Incorporating simple optimizations, such as volume bricking and a shallow hierarchy, enables interactive rendering (i.e. 10 frames per second) of the 1 GByte full resolution Visible Woman dataset on an SGI Reality Monster. The graphics capabilities of the Reality Monster are used only for display of the final color image.",
                "AuthorNamesDeduped": "Steven G. Parker;Peter Shirley;Yarden Livnat;Charles D. Hansen;Peter-Pike J. Sloan",
                "AuthorNames": "S. Parker;P. Shirley;Y. Livnat;C. Hansen;P.-P. Sloan",
                "AuthorAffiliation": "Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA;Computer Science Department, University of Utah, USA",
                "InternalReferences": "0.1109/visual.1997.663888;10.1109/visual.1994.346331;10.1109/visual.1994.346320;10.1109/visual.1995.485154;10.1109/visual.1998.745300",
                "AuthorKeywords": null,
                "AminerCitationCount": 526,
                "CitationCountCrossRef": 116,
                "PubsCitedCrossRef": 17,
                "DownloadsXplore": 541,
                "Award": "BP",
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3142,
                "i": [
                    3142
                ]
            }
        },
        {
            "name": "Daqing Xue",
            "value": 0,
            "numPapers": 17,
            "cluster": "6",
            "visible": 1,
            "index": 1686,
            "x": 410.44206637923025,
            "y": -13.686129706660651,
            "vy": 0,
            "vx": 0,
            "r": 1,
            "node": {
                "Conference": "Vis",
                "Year": 2004,
                "Title": "Rendering implicit flow volumes",
                "DOI": "10.1109/visual.2004.90",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2004.90",
                "FirstPage": 99,
                "LastPage": 106,
                "PaperType": "C",
                "Abstract": "Traditional flow volumes construct an explicit geometrical or parametrical representation from the vector field. The geometry is updated interactively and then rendered using an unstructured volume rendering technique. Unless a detailed refinement of the flow volume is specified for the interior, information inside the underlying flow volume is lost in the linear interpolation. These disadvantages can be avoided and/or alleviated using an implicit flow model. An implicit flow is a scalar field constructed such that any point in the field is associated with a termination surface using an advection operator on the flow. We present two techniques, a slice-based three-dimensional texture mapping and an interval volume segmentation coupled with a tetrahedron projection-based renderer, to render implicit stream flows. In the first method, the implicit flow representation is loaded as a 3D texture and manipulated using a dynamic texture operation that allows the flow to be investigated interactively. In our second method, a geometric flow volume is extracted from the implicit flow using a high dimensional isocontouring or interval volume routine. This provides a very detailed flow volume or set of flow volumes that can easily change topology, while retaining accurate characteristics within the flow volume. The advantages and disadvantages of these two techniques are compared with traditional explicit flow volumes.",
                "AuthorNamesDeduped": "Daqing Xue;Caixia Zhang;Roger Crawfis",
                "AuthorNames": "D. Xue;C. Zhang;R. Crawfis",
                "AuthorAffiliation": "Department of Computer Science and Engineering, The Ohio State University, USA;Department of Computer Science and Engineering, The Ohio State University and Ohio State University, Columbus, OH, US;Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA",
                "InternalReferences": "0.1109/visual.2001.964519;10.1109/visual.1993.398846;10.1109/visual.2000.885688;10.1109/visual.1991.175789;10.1109/visual.2003.1250364;10.1109/visual.1992.235211;10.1109/visual.1996.567777;10.1109/visual.2000.885704;10.1109/visual.1999.809909;10.1109/visual.2003.1250376;10.1109/visual.2003.1250377;10.1109/visual.1993.398875;10.1109/visual.2003.1250378;10.1109/visual.1999.809892;10.1109/visual.1997.663886;10.1109/visual.1993.398877;10.1109/visual.1994.346315;10.1109/visual.1995.480807",
                "AuthorKeywords": "interval volume rendering, implicit stream flow, flow visualization, graphics hardware",
                "AminerCitationCount": 33,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 162,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2566,
                "i": [
                    2566
                ]
            }
        },
        {
            "name": "Caixia Zhang",
            "value": 5,
            "numPapers": 20,
            "cluster": "6",
            "visible": 1,
            "index": 1687,
            "x": -293.48933090785,
            "y": 287.42653434097303,
            "vy": 0,
            "vx": 0,
            "r": 1.0057570523891768,
            "node": {
                "Conference": "Vis",
                "Year": 2002,
                "Title": "Volumetric shadows using splatting",
                "DOI": "10.1109/visual.2002.1183761",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2002.1183761",
                "FirstPage": 85,
                "LastPage": 92,
                "PaperType": "C",
                "Abstract": "This paper describes an efficient algorithm to model the light attenuation due to a participating media with low albedo. The light attenuation is modeled using splatting volume renderer for both the viewer and the light source. During the rendering, a 2D shadow buffer attenuates the light for each pixel. When the contribution of a footprint is added to the image buffer, as seen from the eye, we add the contribution to the shadow buffer, as seen from the light source. We have generated shadows for point lights and parallel lights using this algorithm. The shadow algorithm has been extended to deal with multiple light sources and projective textured lights.",
                "AuthorNamesDeduped": "Caixia Zhang;Roger Crawfis",
                "AuthorNames": "Caixia Zhang;R. Crawfis",
                "AuthorAffiliation": "Department of Computer and Information Science, Ohio State Uinversity, Columbus, OH, USA;Department of Computer and Information Science, Ohio State Uinversity, Columbus, OH, USA",
                "InternalReferences": "0.1109/visual.1998.745309;10.1109/visual.1999.809909;10.1109/visual.2000.885698;10.1109/visual.2002.1183764",
                "AuthorKeywords": "visualization, volume rendering, shadows, illumination",
                "AminerCitationCount": 36,
                "CitationCountCrossRef": 2,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 105,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2834,
                "i": [
                    2834
                ]
            }
        },
        {
            "name": "Jihad El-Sana",
            "value": 17,
            "numPapers": 15,
            "cluster": "11",
            "visible": 1,
            "index": 1688,
            "x": 22.262657811719393,
            "y": -410.31009501005246,
            "vy": 0,
            "vx": 0,
            "r": 1.019573978123201,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "Integrating occlusion culling with view-dependent rendering",
                "DOI": "10.1109/visual.2001.964534",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964534",
                "FirstPage": 371,
                "LastPage": 378,
                "PaperType": "C",
                "Abstract": "We present an approach that integrates occlusion culling within the view-dependent rendering framework. View-dependent rendering provides the ability to change level of detail over the surface seamlessly and smoothly in real-time. The exclusive use of view-parameters to perform level-of-detail selection causes even occluded regions to be rendered in high level of detail. To overcome this serious drawback we have integrated occlusion culling into the level selection mechanism. Because computing exact visibility is expensive and it is currently not possible to perform this computation in real time, we use a visibility estimation technique instead. Our approach reduces dramatically the resolution at occluded regions.",
                "AuthorNamesDeduped": "Jihad El-Sana;Neta Sokolovsky;Cláudio T. Silva",
                "AuthorNames": "J. El-Sana;N. Sokolovsky;C.T. Silva",
                "AuthorAffiliation": "Department of Computer Science, Ben-Gurion University of the Negev, Beersheba, Israel;Department of Computer Science, Ben-Gurion University of the Negev, Beersheba, Israel;AT and T Research Laboratories, Florham Park, NJ, USA",
                "InternalReferences": "0.1109/visual.1999.809877;10.1109/visual.1999.809875;10.1109/visual.1997.663860;10.1109/visual.1996.568117;10.1109/visual.2000.885724;10.1109/visual.1998.745283;10.1109/visual.1995.480805",
                "AuthorKeywords": null,
                "AminerCitationCount": 99,
                "CitationCountCrossRef": 12,
                "PubsCitedCrossRef": 44,
                "DownloadsXplore": 117,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2896,
                "i": [
                    2896
                ]
            }
        },
        {
            "name": "Roger Gatti",
            "value": 25,
            "numPapers": 1,
            "cluster": "11",
            "visible": 1,
            "index": 1689,
            "x": 260.8218963069243,
            "y": 317.6821342267457,
            "vy": 0,
            "vx": 0,
            "r": 1.0287852619458837,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "Fast multiresolution surface meshing",
                "DOI": "10.1109/visual.1995.480805",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.480805",
                "FirstPage": 135,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "Presents a new method for adaptive surface meshing and triangulation which controls the local level-of-detail of the surface approximation by local spectral estimates. These estimates are determined by a wavelet representation of the surface data. The basic idea is to decompose the initial data set by means of an orthogonal or semi-orthogonal tensor product wavelet transform (WT) and to analyze the resulting coefficients. In surface regions where the partial energy of the resulting coefficients is low, the polygonal approximation of the surface can be performed with larger triangles without losing too much fine-grain detail. However, since the localization of the WT is bound by the Heisenberg principle, the meshing method has to be controlled by the detail signals rather than directly by the coefficients. The dyadic scaling of the WT stimulated us to build a hierarchical meshing algorithm which transforms the initially regular data grid into a quadtree representation by rejection of unimportant mesh vertices. The optimum triangulation of the resulting quadtree cells is carried out by selection from a look-up table. The tree grows recursively, as controlled by the detail signals, which are computed from a modified inverse WT. In order to control the local level-of-detail, we introduce a new class of wavelet space filters acting as \"magnifying glasses\" on the data.",
                "AuthorNamesDeduped": "Markus H. Gross;Roger Gatti;Oliver G. Staadt",
                "AuthorNames": "M.H. Gross;R. Gatti;O. Staadt",
                "AuthorAffiliation": "Computer Science Department, ETH Zuürich, Zurich, Switzerland;Computer Science Department, ETH Zuürich, Zurich, Switzerland;Computer Science Department, ETH Zuürich, Zurich, Switzerland",
                "InternalReferences": "0.1109/visual.1994.346333;10.1109/visual.1994.346331",
                "AuthorKeywords": null,
                "AminerCitationCount": 210,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 19,
                "DownloadsXplore": 113,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3420,
                "i": [
                    3420
                ]
            }
        },
        {
            "name": "Jeroen van Baar",
            "value": 38,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1690,
            "x": -407.0335537079149,
            "y": -58.08344132285861,
            "vy": 0,
            "vx": 0,
            "r": 1.0437535981577433,
            "node": {
                "Conference": "Vis",
                "Year": 2001,
                "Title": "EWA volume splatting",
                "DOI": "10.1109/visual.2001.964490",
                "Link": "http://dx.doi.org/10.1109/VISUAL.2001.964490",
                "FirstPage": 29,
                "LastPage": 36,
                "PaperType": "C",
                "Abstract": "In this paper we present a novel framework for direct volume rendering using a splatting approach based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter combining a reconstruction with a low-pass kernel. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping we call our technique EWA volume splatting. It provides high image quality without aliasing artifacts or excessive blurring even with non-spherical kernels. Hence it is suitable for regular, rectilinear, and irregular volume data sets. Moreover, our framework introduces a novel approach to compute the footprint function. It facilitates efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in reconstructing surface and volume data.",
                "AuthorNamesDeduped": "Matthias Zwicker;Hanspeter Pfister;Jeroen van Baar;Markus H. Gross",
                "AuthorNames": "M. Zwicker;H. Pfister;J. van Baar;M. Gross",
                "AuthorAffiliation": "ETH Zürich, Switzerland;MERL, Cambridge, MA, USA;MERL, Cambridge, MA, USA;ETH Zürich, Switzerland",
                "InternalReferences": "0.1109/visual.1995.480796;10.1109/visual.1997.663882;10.1109/visual.1998.745309;10.1109/visual.1996.567608;10.1109/visual.1999.809909",
                "AuthorKeywords": "Volume Rendering, Splatting, Antialiasing",
                "AminerCitationCount": 170,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 21,
                "DownloadsXplore": 1385,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 2875,
                "i": [
                    2875
                ]
            }
        },
        {
            "name": "Taosong He",
            "value": 116,
            "numPapers": 4,
            "cluster": "6",
            "visible": 1,
            "index": 1691,
            "x": 339.46903610593563,
            "y": -232.18693659486306,
            "vy": 0,
            "vx": 0,
            "r": 1.1335636154289004,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Generation of Transfer Functions with Stochastic Search Technique",
                "DOI": "10.1109/visual.1996.568113",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568113",
                "FirstPage": 227,
                "LastPage": 234,
                "PaperType": "C",
                "Abstract": "This paper presents a novel approach to assist the user in exploring appropriate transfer functions for the visualization of volumetric datasets. The search for a transfer function is treated as a parameter optimization problem and addressed with stochastic search techniques. Starting from an initial population of (random or pre-defined) transfer functions, the evolution of the stochastic algorithms is controlled by either direct user selection of intermediate images or automatic fitness evaluation using user-specified objective functions. This approach essentially shields the user from the complex and tedious \"trial and error\" approach, and demonstrates effective and convenient generation of transfer functions.",
                "AuthorNamesDeduped": "Taosong He;Lichan Hong;Arie E. Kaufman;Hanspeter Pfister",
                "AuthorNames": "Taosong He;Lichan Hong;A. Kaufman;H. Pfister",
                "AuthorAffiliation": "Department of Computer Science\nState University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 342,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 189,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3371,
                "i": [
                    3371
                ]
            }
        },
        {
            "name": "Lichan Hong",
            "value": 110,
            "numPapers": 13,
            "cluster": "6",
            "visible": 1,
            "index": 1692,
            "x": -93.50150655914888,
            "y": 400.63383315837103,
            "vy": 0,
            "vx": 0,
            "r": 1.1266551525618882,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "Generation of Transfer Functions with Stochastic Search Technique",
                "DOI": "10.1109/visual.1996.568113",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568113",
                "FirstPage": 227,
                "LastPage": 234,
                "PaperType": "C",
                "Abstract": "This paper presents a novel approach to assist the user in exploring appropriate transfer functions for the visualization of volumetric datasets. The search for a transfer function is treated as a parameter optimization problem and addressed with stochastic search techniques. Starting from an initial population of (random or pre-defined) transfer functions, the evolution of the stochastic algorithms is controlled by either direct user selection of intermediate images or automatic fitness evaluation using user-specified objective functions. This approach essentially shields the user from the complex and tedious \"trial and error\" approach, and demonstrates effective and convenient generation of transfer functions.",
                "AuthorNamesDeduped": "Taosong He;Lichan Hong;Arie E. Kaufman;Hanspeter Pfister",
                "AuthorNames": "Taosong He;Lichan Hong;A. Kaufman;H. Pfister",
                "AuthorAffiliation": "Department of Computer Science\nState University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY;State University of New York at Stony Brook, Stony Brook, NY",
                "InternalReferences": null,
                "AuthorKeywords": null,
                "AminerCitationCount": 342,
                "CitationCountCrossRef": 5,
                "PubsCitedCrossRef": 0,
                "DownloadsXplore": 189,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3371,
                "i": [
                    3371
                ]
            }
        },
        {
            "name": "Sidney W. Wang",
            "value": 43,
            "numPapers": 3,
            "cluster": "6",
            "visible": 1,
            "index": 1693,
            "x": -201.7387363185857,
            "y": -358.680194976779,
            "vy": 0,
            "vx": 0,
            "r": 1.04951065054692,
            "node": {
                "Conference": "Vis",
                "Year": 1994,
                "Title": "Wavelet-based volume morphing",
                "DOI": "10.1109/visual.1994.346333",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1994.346333",
                "FirstPage": 85,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "This paper presents a technique for performing volume morphing between two volumetric datasets in the wavelet domain. The idea is to decompose the volumetric datasets into a set of frequency bands, apply smooth interpolation to each band, and reconstruct to form the morphed model. In addition, a technique for establishing a suitable correspondence among object voxels is presented. The combination of these two techniques results in a smooth transition between the two datasets and produces morphed volume with fewer high frequency distortions than those obtained from spatial domain volume morphing.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Taosong He;Sidney W. Wang;Arie E. Kaufman",
                "AuthorNames": "Taosong He;S. Wang;A. Kaufman",
                "AuthorAffiliation": "Department of Computer Science, State University of New York, Stony Brook, Stony Brook, NY, USA;Department of Computer Science, State University of New York, Stony Brook, Stony Brook, NY, USA;Department of Computer Science, State University of New York, Stony Brook, Stony Brook, NY, USA",
                "InternalReferences": "0.1109/visual.1993.398854",
                "AuthorKeywords": null,
                "AminerCitationCount": 199,
                "CitationCountCrossRef": 21,
                "PubsCitedCrossRef": 9,
                "DownloadsXplore": 90,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3482,
                "i": [
                    3482
                ]
            }
        },
        {
            "name": "John Riedl",
            "value": 105,
            "numPapers": 15,
            "cluster": "5",
            "visible": 1,
            "index": 1694,
            "x": 391.1562925324222,
            "y": 128.24490170096504,
            "vy": 0,
            "vx": 0,
            "r": 1.1208981001727116,
            "node": {
                "Conference": "InfoVis",
                "Year": 1997,
                "Title": "A spreadsheet approach to information visualization",
                "DOI": "10.1109/infvis.1997.636761",
                "Link": "http://dx.doi.org/10.1109/INFVIS.1997.636761",
                "FirstPage": 17,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "In information visualization, as the volume and complexity of the data increases, researchers require more powerful visualization tools that enable them to more effectively explore multidimensional datasets. We discuss the general utility of a novel visualization spreadsheet framework. Just as a numerical spreadsheet enables exploration of numbers, a visualization spreadsheet enables exploration of visual forms of information. We show that the spreadsheet approach facilitates certain information visualization tasks that are more difficult using other approaches. Unlike traditional spreadsheets, which store only simple data elements and formulas in each cell, a visualization spreadsheet cell can hold an entire complex data set, selection criteria, viewing specifications, and other information needed for a full-fledged information visualization. Similarly, inter-cell operations are far more complex, stretching beyond simple arithmetic and string operations to encompass a range of domain-specific operators. We have built two prototype systems that illustrate some of these research issues. The underlying approach in our work allows domain experts to define new data types and data operations, and enables visualization experts to incorporate new visualizations, viewing parameters, and view operations.",
                "AuthorNamesDeduped": "Ed Huai-hsin Chi;Phillip Barry;John Riedl;Joseph A. Konstan",
                "AuthorNames": "E.H.-H. Chi;P. Barry;J. Riedl;J. Konstan",
                "AuthorAffiliation": "Department of Computer Science, University of Minnesota, Minneapolis, MN, USA;Department of Computer Science, University of Minnesota, Minneapolis, MN, USA;Department of Computer Science, University of Minnesota, Minneapolis, MN, USA;Department of Computer Science, University of Minnesota, Minneapolis, MN, USA",
                "InternalReferences": "0.1109/visual.1996.567796;10.1109/visual.1996.567752;10.1109/infvis.1995.528690;10.1109/visual.1995.480794;10.1109/visual.1993.398859;10.1109/infvis.1996.559222",
                "AuthorKeywords": null,
                "AminerCitationCount": 174,
                "CitationCountCrossRef": 15,
                "PubsCitedCrossRef": 35,
                "DownloadsXplore": 532,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3219,
                "i": [
                    3219
                ]
            }
        },
        {
            "name": "Suresh K. Lodha",
            "value": 87,
            "numPapers": 19,
            "cluster": "11",
            "visible": 1,
            "index": 1695,
            "x": -375.1653148819765,
            "y": 169.70853399139187,
            "vy": 0,
            "vx": 0,
            "r": 1.1001727115716753,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "UFLOW: visualizing uncertainty in fluid flow",
                "DOI": "10.1109/visual.1996.568116",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568116",
                "FirstPage": 249,
                "LastPage": 254,
                "PaperType": "C",
                "Abstract": "Uncertainty or errors are introduced in fluid flow data as the data is acquired, transformed and rendered. Although researchers are aware of these uncertainties, little has been done to incorporate them in the existing visualization systems for fluid flow. In the absence of integrated presentation of data and its associated uncertainty, the analysis of the visualization is incomplete at best and may lead to inaccurate or incorrect conclusions. The article presents UFLOW-a system for visualizing uncertainty in fluid flow. Although there are several sources of uncertainties in fluid flow data, in this work, we focus on uncertainty arising from the use of different numerical algorithms for computing particle traces in a fluid flow. The techniques that we have employed to visualize uncertainty in fluid flow include uncertainty glyphs, flow envelopes, animations, priority sequences, twirling batons of trace viewpoints, and rakes. These techniques are effective in making the users aware of the effects of different integration methods and their sensitivity, especially near critical points in the flow field.",
                "AuthorNamesDeduped": "Suresh K. Lodha;Alex Pang;Robert E. Sheehan;Craig M. Wittenbrink",
                "AuthorNames": "S.K. Lodha;A. Pang;R.E. Sheehan;C.M. Wittenbrink",
                "AuthorAffiliation": "Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235199;10.1109/visual.1996.568105;10.1109/visual.1995.485141;10.1109/visual.1994.346315;10.1109/visual.1995.480798",
                "AuthorKeywords": "flow visualization, uncertainty glyphs, streamlines, rakes, flow envelopes, animation",
                "AminerCitationCount": 180,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 348,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3336,
                "i": [
                    3336
                ]
            }
        },
        {
            "name": "Robert E. Sheehan",
            "value": 78,
            "numPapers": 10,
            "cluster": "11",
            "visible": 1,
            "index": 1696,
            "x": 162.04652587424584,
            "y": -378.66993999007553,
            "vy": 0,
            "vx": 0,
            "r": 1.0898100172711571,
            "node": {
                "Conference": "Vis",
                "Year": 1996,
                "Title": "UFLOW: visualizing uncertainty in fluid flow",
                "DOI": "10.1109/visual.1996.568116",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1996.568116",
                "FirstPage": 249,
                "LastPage": 254,
                "PaperType": "C",
                "Abstract": "Uncertainty or errors are introduced in fluid flow data as the data is acquired, transformed and rendered. Although researchers are aware of these uncertainties, little has been done to incorporate them in the existing visualization systems for fluid flow. In the absence of integrated presentation of data and its associated uncertainty, the analysis of the visualization is incomplete at best and may lead to inaccurate or incorrect conclusions. The article presents UFLOW-a system for visualizing uncertainty in fluid flow. Although there are several sources of uncertainties in fluid flow data, in this work, we focus on uncertainty arising from the use of different numerical algorithms for computing particle traces in a fluid flow. The techniques that we have employed to visualize uncertainty in fluid flow include uncertainty glyphs, flow envelopes, animations, priority sequences, twirling batons of trace viewpoints, and rakes. These techniques are effective in making the users aware of the effects of different integration methods and their sensitivity, especially near critical points in the flow field.",
                "AuthorNamesDeduped": "Suresh K. Lodha;Alex Pang;Robert E. Sheehan;Craig M. Wittenbrink",
                "AuthorNames": "S.K. Lodha;A. Pang;R.E. Sheehan;C.M. Wittenbrink",
                "AuthorAffiliation": "Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA;Computer Science Department, University of California, Santa Cruz, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235199;10.1109/visual.1996.568105;10.1109/visual.1995.485141;10.1109/visual.1994.346315;10.1109/visual.1995.480798",
                "AuthorKeywords": "flow visualization, uncertainty glyphs, streamlines, rakes, flow envelopes, animation",
                "AminerCitationCount": 180,
                "CitationCountCrossRef": 41,
                "PubsCitedCrossRef": 34,
                "DownloadsXplore": 348,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3336,
                "i": [
                    3336
                ]
            }
        },
        {
            "name": "Craig M. Wittenbrink",
            "value": 73,
            "numPapers": 5,
            "cluster": "11",
            "visible": 1,
            "index": 1697,
            "x": 136.3399691760478,
            "y": 388.79482095968604,
            "vy": 0,
            "vx": 0,
            "r": 1.0840529648819806,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "IFS fractal interpolation for 2D and 3D visualization",
                "DOI": "10.1109/visual.1995.480798",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.480798",
                "FirstPage": 77,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "Reconstruction is used frequently in visualization of one, two, and three dimensional data. Data uncertainty is typically ignored, and a deficiency of many interpolation schemes is smoothing which may indicate features or characteristics of the data that are not there. The author investigates the use of iterated function systems (IFS's) for interpolation. He shows new derivations for fractal interpolation in two and three dimensional scalar data, and new point and polytope rendering algorithms with tremendous speed advantages over ray tracing. The interpolations may be used to give an indication of the uncertainty of the data, statistically represent the data at a variety of scales, allow tunability from the data, and may allow more accurate data analysis.",
                "AuthorNamesDeduped": "Craig M. Wittenbrink",
                "AuthorNames": "C.M. Wittenbrink",
                "AuthorAffiliation": "Computer Engineering & Information Sciences, University of California, Santa Cruz, Santa Cruz, CA, USA",
                "InternalReferences": "10.1109/visual.1994.346285",
                "AuthorKeywords": null,
                "AminerCitationCount": 70,
                "CitationCountCrossRef": 18,
                "PubsCitedCrossRef": 20,
                "DownloadsXplore": 167,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3431,
                "i": [
                    3431
                ]
            }
        },
        {
            "name": "Barry G. Becker",
            "value": 56,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 1698,
            "x": -363.2669314424375,
            "y": -194.64618290733418,
            "vy": 0,
            "vx": 0,
            "r": 1.0644789867587796,
            "node": {
                "Conference": "Vis",
                "Year": 1993,
                "Title": "Flow volumes for interactive vector field visualization",
                "DOI": "10.1109/visual.1993.398846",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1993.398846",
                "FirstPage": 19,
                "LastPage": 24,
                "PaperType": "C",
                "Abstract": "Flow volumes are the volumetric equivalent of stream lines. They provide more information about the vector field being visualized than do stream lines or ribbons. Presented is an efficient method for producing flow volumes, composed of transparently rendered tetrahedra, for use in an interactive system. The problems of rendering, subdivision, sorting, composing artifacts, and user interaction are dealt with. Efficiency comes from rendering only the volume of the smoke, and using hardware texturing and compositing.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Nelson L. Max;Barry G. Becker;Roger Crawfis",
                "AuthorNames": "N. Max;B. Becker;R. Crawfis",
                "AuthorAffiliation": "Lawrence Livemore National Laboratory, Livermore, CA, USA;Lawrence Livemore National Laboratory, Livermore, CA, USA;Lawrence Livemore National Laboratory, Livermore, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235210;10.1109/visual.1992.235211",
                "AuthorKeywords": null,
                "AminerCitationCount": 153,
                "CitationCountCrossRef": 37,
                "PubsCitedCrossRef": 13,
                "DownloadsXplore": 190,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3538,
                "i": [
                    3538
                ]
            }
        },
        {
            "name": "Lloyd Treinish",
            "value": 127,
            "numPapers": 21,
            "cluster": "9",
            "visible": 1,
            "index": 1699,
            "x": 399.46087740155633,
            "y": -101.88722896211674,
            "vy": 0,
            "vx": 0,
            "r": 1.1462291306850891,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "A rule-based tool for assisting colormap selection",
                "DOI": "10.1109/visual.1995.480803",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.480803",
                "FirstPage": 118,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "The paper presents an interactive approach for guiding the user's select of colormaps in visualization. PRAVDAColor, implemented as a module in the IBM Visualization Data Explorer, provides the user a selection of appropriate colormaps given the data type and spatial frequency, the user's task, and properties of the human perceptual system.",
                "AuthorNamesDeduped": "Lawrence D. Bergman;Bernice E. Rogowitz;Lloyd Treinish",
                "AuthorNames": "L.D. Bergman;B.E. Rogowitz;L.A. Treinish",
                "AuthorAffiliation": "IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA",
                "InternalReferences": "0.1109/visual.1995.480821;10.1109/visual.1993.398874",
                "AuthorKeywords": null,
                "AminerCitationCount": 329,
                "CitationCountCrossRef": 91,
                "PubsCitedCrossRef": 24,
                "DownloadsXplore": 1304,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3414,
                "i": [
                    3414
                ]
            }
        },
        {
            "name": "Bruce Lucas",
            "value": 76,
            "numPapers": 4,
            "cluster": "9",
            "visible": 1,
            "index": 1700,
            "x": -225.7925845531047,
            "y": 345.061891203345,
            "vy": 0,
            "vx": 0,
            "r": 1.0875071963154865,
            "node": {
                "Conference": "Vis",
                "Year": 1992,
                "Title": "An architecture for a scientific visualization system",
                "DOI": "10.1109/visual.1992.235219",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1992.235219",
                "FirstPage": 107,
                "LastPage": 114,
                "PaperType": "C",
                "Abstract": "The architecture of the Data Explorer, a scientific visualization system, is described. Data Explorer supports the visualization of a wide variety of data by means of a flexible set of visualization modules. A single powerful data model common to all modules allows a wide range of data types to be imported and passed between modules. There is integral support for parallelism, affecting the data model and the execution model. The visualization modules are highly interoperable, due in part to the common data model, and exemplified by the renderer. An execution model facilitates parallelization of modules and incorporates optimizations such as caching. The two-process client-server system structure consists of a user interface that communicates with an executive via a dataflow language.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Bruce Lucas;G. D. Abrams;Nancy S. Collins;D. A. Epstien;Donna L. Gresh;Kevin P. McAuliffe",
                "AuthorNames": "B. Lucas;G.D. Abram;N.S. Collins;D.A. Epstein;D.L. Gresh;K.P. McAuliffe",
                "AuthorAffiliation": "IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA",
                "InternalReferences": "0.1109/visual.1990.146397;10.1109/visual.1992.235204;10.1109/visual.1991.175818;10.1109/visual.1991.175833",
                "AuthorKeywords": null,
                "AminerCitationCount": 236,
                "CitationCountCrossRef": 58,
                "PubsCitedCrossRef": 8,
                "DownloadsXplore": 114,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3585,
                "i": [
                    3585
                ]
            }
        },
        {
            "name": "Nancy S. Collins",
            "value": 69,
            "numPapers": 3,
            "cluster": "9",
            "visible": 1,
            "index": 1701,
            "x": -66.61311137369604,
            "y": -407.07823989144345,
            "vy": 0,
            "vx": 0,
            "r": 1.079447322970639,
            "node": {
                "Conference": "Vis",
                "Year": 1992,
                "Title": "An architecture for a scientific visualization system",
                "DOI": "10.1109/visual.1992.235219",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1992.235219",
                "FirstPage": 107,
                "LastPage": 114,
                "PaperType": "C",
                "Abstract": "The architecture of the Data Explorer, a scientific visualization system, is described. Data Explorer supports the visualization of a wide variety of data by means of a flexible set of visualization modules. A single powerful data model common to all modules allows a wide range of data types to be imported and passed between modules. There is integral support for parallelism, affecting the data model and the execution model. The visualization modules are highly interoperable, due in part to the common data model, and exemplified by the renderer. An execution model facilitates parallelization of modules and incorporates optimizations such as caching. The two-process client-server system structure consists of a user interface that communicates with an executive via a dataflow language.&lt;&lt;ETX&gt;&gt;",
                "AuthorNamesDeduped": "Bruce Lucas;G. D. Abrams;Nancy S. Collins;D. A. Epstien;Donna L. Gresh;Kevin P. McAuliffe",
                "AuthorNames": "B. Lucas;G.D. Abram;N.S. Collins;D.A. Epstein;D.L. Gresh;K.P. McAuliffe",
                "AuthorAffiliation": "IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA;IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA",
                "InternalReferences": "0.1109/visual.1990.146397;10.1109/visual.1992.235204;10.1109/visual.1991.175818;10.1109/visual.1991.175833",
                "AuthorKeywords": null,
                "AminerCitationCount": 236,
                "CitationCountCrossRef": 58,
                "PubsCitedCrossRef": 8,
                "DownloadsXplore": 114,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3585,
                "i": [
                    3585
                ]
            }
        },
        {
            "name": "David A. Lane",
            "value": 34,
            "numPapers": 21,
            "cluster": "6",
            "visible": 1,
            "index": 1702,
            "x": 324.1910595322089,
            "y": 255.24528775157398,
            "vy": 0,
            "vx": 0,
            "r": 1.0391479562464019,
            "node": {
                "Conference": "Vis",
                "Year": 1995,
                "Title": "Unsteady flow volumes",
                "DOI": "10.1109/visual.1995.485146",
                "Link": "http://dx.doi.org/10.1109/VISUAL.1995.485146",
                "FirstPage": 329,
                "LastPage": null,
                "PaperType": "C",
                "Abstract": "Flow volumes are extended for use in unsteady (time-dependent) flows. The resulting unsteady flow volumes are the 3D analogs of streaklines. There are few examples where methods other than particle tracing have been used to visualize time-varying flows. Since particle paths can become convoluted in time, there are additional considerations to be made when extending any visualization technique to unsteady flows. We present some solutions to the problems which occur in subdivision, rendering and system design. We apply the unsteady flow volumes to a variety of field types, including moving multi-zoned curvilinear grids.",
                "AuthorNamesDeduped": "Barry G. Becker;Nelson L. Max;David A. Lane",
                "AuthorNames": "B.G. Becker;D.A. Lane;N.L. Max",
                "AuthorAffiliation": "Lawrence Livermore National Laboratories, Livermore, CA, USA;NASA Ames Research Center, Computer Science Corporation, CA, USA;Lawrence Livermore National Laboratories, Livermore, CA, USA",
                "InternalReferences": "0.1109/visual.1992.235226;10.1109/visual.1992.235227;10.1109/visual.1994.346311;10.1109/visual.1993.398876;10.1109/visual.1993.398875;10.1109/visual.1993.398846;10.1109/visual.1992.235211;10.1109/visual.1993.398877;10.1109/visual.1994.346312",
                "AuthorKeywords": null,
                "AminerCitationCount": 60,
                "CitationCountCrossRef": 11,
                "PubsCitedCrossRef": 14,
                "DownloadsXplore": 87,
                "Award": null,
                "GraphicsReplicabilityStamp": null,
                "cluster": 1,
                "selected": true,
                "seqId": 3438,
                "i": [
                    3438
                ]
            }
        }
    ]
}
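
Each entry in the listing above follows the same shape: an author node with a name, a paper count (numPapers), a cluster id, layout fields (x, y, vx, vy, r) that are consistent with precomputed d3-force simulation state, and a representative paper record under node, while the links array at the top of the file connects node indices with a "cites" weight. Below is a minimal Node.js sketch of how the file could be explored; the top-level `nodes` key and the `citationsNetwork.json` path are assumptions for illustration, not confirmed by the excerpt above.

```js
// Minimal sketch for exploring the citation network dump shown above.
// Assumptions (not confirmed by the excerpt): the file is citationsNetwork.json
// and the author entries live under a top-level "nodes" key next to "links".
import { readFileSync } from "node:fs";

const graph = JSON.parse(readFileSync("citationsNetwork.json", "utf8"));
const nodes = graph.nodes ?? []; // author entries: name, numPapers, cluster, x/y layout, paper record
const links = graph.links ?? []; // { source, target, type: "cites", value } edges between node indices

// Authors with the most IEEE Vis papers attached to them.
const topAuthors = [...nodes]
  .sort((a, b) => b.numPapers - a.numPapers)
  .slice(0, 10)
  .map((n) => `${n.name}: ${n.numPapers} papers (cluster ${n.cluster})`);
console.log(topAuthors.join("\n"));

// Total citation weight leaving each source node.
const outWeight = new Map();
for (const { source, value } of links) {
  outWeight.set(source, (outWeight.get(source) ?? 0) + value);
}
const [topSource, weight] = [...outWeight.entries()].sort((a, b) => b[1] - a[1])[0];
console.log(`heaviest citing node index: ${topSource} (total weight ${weight})`);
```

Because the layout fields already match d3-force's node properties, the same objects could plausibly be handed to a force simulation or drawn directly at their stored positions.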

GitHub Events

Total
  • Push event: 1
Last Year
  • Push event: 1

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 18
  • Total Committers: 2
  • Avg Commits per committer: 9.0
  • Development Distribution Score (DDS): 0.167 (see the check after the committer list below)
Past Year
  • Commits: 5
  • Committers: 1
  • Avg Commits per committer: 5.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • John Alexis Guerra Gómez (j****a@g****m): 15 commits
  • John Alexis Guerra Gómez (j****a@y****m): 3 commits
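
The Development Distribution Score values above match the commonly used definition DDS = 1 - (commits by the busiest committer / total commits). A quick check against the commit counts listed here (15 and 3 all time, 5 in the past year), as a minimal sketch assuming that definition:

```js
// Sanity check of the DDS values above, assuming the common definition:
// DDS = 1 - (top committer's commits / total commits).
function dds(commitsPerCommitter) {
  const total = commitsPerCommitter.reduce((sum, c) => sum + c, 0);
  return total === 0 ? 0 : 1 - Math.max(...commitsPerCommitter) / total;
}

console.log(dds([15, 3]).toFixed(3)); // all time: 1 - 15/18 ≈ 0.167
console.log(dds([5]).toFixed(1));     // past year: 1 - 5/5 = 0.0
```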

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

package.json (npm)
  • eslint ^8.18.0 (development)
  • eslint-config-prettier ^8.5.0 (development)
  • prettier ^2.8.8 (development)
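
All three entries are development-time tools, which in npm terms would sit in the devDependencies block of package.json. The following is a minimal sketch that reads the manifest and prints them; devDependencies is the standard npm field name, and the repository's actual package.json may contain more than what is listed here.

```js
// Minimal sketch: list the development dependencies declared in package.json.
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
for (const [name, range] of Object.entries(pkg.devDependencies ?? {})) {
  console.log(`${name} ${range} (development)`);
}
// For the listing above this would print something like:
// eslint ^8.18.0 (development)
// eslint-config-prettier ^8.5.0 (development)
// prettier ^2.8.8 (development)
```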